Problem Solving Model for Science Learning
NASA Astrophysics Data System (ADS)
Alberida, H.; Lufri; Festiyed; Barlian, E.
2018-04-01
This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, and a literature review. The design phase covers planning the science-learning problem-solving model, which consists of syntax, reaction principle, social system, support system, and instructional and accompanying impact. The problem-solving model is implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation, and c) revising the prototype. The implementation stage was carried out through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 of SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang, and SMP National Padang. Based on this development research, the syntax of the problem-solving model for science learning in junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.
NASA Technical Reports Server (NTRS)
Zubko, V.; Dwek, E.; Arendt, R. G.; Oegerle, William (Technical Monitor)
2001-01-01
We present new interstellar dust models that are consistent with both the FUV to near-IR extinction and the infrared (IR) emission measurements from the diffuse interstellar medium. The models are characterized by different dust compositions and abundances. The problem we solve consists of determining the size distribution of the various dust components of the model. This is a typical ill-posed inversion problem, which we solve using a regularization approach. We reproduce the Li & Draine (2001, ApJ, 554, 778) results; however, their model requires an excessive amount of interstellar silicon (48 ppm of hydrogen, compared to the 36 ppm available for an ISM of solar composition) to be locked up in dust. We found that dust models consisting of PAHs, amorphous silicate, graphite, and composite grains made up of silicates, organic refractory material, and water ice provide an improved fit to the extinction and IR emission measurements, while still requiring a subsolar amount of silicon to be in the dust. This research was supported by NASA Astrophysical Theory Program NRA 99-OSS-01.
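As a concrete illustration of the regularization approach described above, the sketch below solves a toy linear inversion min ||Kf − d||² + λ||Lf||² for a size distribution. The kernel, grids, and noise level are invented stand-ins, not the paper's dust model; only the Tikhonov machinery is shown.

```python
# Toy Tikhonov-regularized inversion for a grain-size distribution f from
# synthetic data d = K f + noise. All quantities here are illustrative.
import numpy as np

n_sizes, n_data = 50, 30
a = np.logspace(-3, 0, n_sizes)                        # grain radii (micron), assumed grid
x = np.linspace(-3, 0, n_data)                         # observation grid (toy)
K = np.exp(-np.subtract.outer(x, np.log10(a)) ** 2)    # smooth toy kernel
f_true = np.exp(-5.0 * (np.log10(a) + 1.5) ** 2)       # toy size distribution
d = K @ f_true + 1e-3 * np.random.default_rng(0).normal(size=n_data)

# Tikhonov regularization: minimize ||K f - d||^2 + lam * ||L f||^2,
# with L a second-difference operator that penalizes rough solutions.
L = np.diff(np.eye(n_sizes), n=2, axis=0)
lam = 1e-2
f_est = np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ d)
print(np.abs(f_est - f_true).max())
```

Larger λ damps oscillations at the cost of bias; choosing it (for example by the L-curve or the discrepancy principle) is the usual practical question in such inversions.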
Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.
Ifrim, Sandra
2015-12-01
The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and thereby fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is treated as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode the signals emitted by casino customers. Model output consists of a historical path of mental states and the cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost-minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust both to the suppression of signals by casino clientele facing gambling problems and to misjudgments made by staff regarding clients' mental states. Only if the latter source of error is very pronounced, i.e., staff judgment is extremely faulty, might cumulated social costs be distorted.
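For readers unfamiliar with the decoding step, here is a minimal Viterbi decoder over two hypothetical mental states given binary behavioral signals. All transition, emission, and prior probabilities are invented for illustration; the paper's model is built instead on the Delfabbro et al. indicators.

```python
# Minimal log-space Viterbi decoding of hidden "mental states" from observed
# 0/1 behavioral signals. Probabilities below are assumptions, not calibrated values.
import numpy as np

states = ["non-problem", "problem"]
T = np.array([[0.9, 0.1],        # state transition probabilities (assumed)
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],        # emission: P(signal | state) (assumed)
              [0.3, 0.7]])
pi = np.array([0.95, 0.05])      # initial state distribution (assumed)

def viterbi(obs):
    """Most probable state path for a sequence of 0/1 signals."""
    n = len(obs)
    delta = np.zeros((n, 2))                 # best log-probability so far
    psi = np.zeros((n, 2), dtype=int)        # backpointers
    delta[0] = np.log(pi) + np.log(E[:, obs[0]])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + np.log(T)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(E[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):            # backtrack through the pointers
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 1]))              # e.g. signals observed over five visits
```

The decoded path is the "historical path of mental states" the abstract refers to; cumulated social costs would then be tallied along that path.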
A dependency-based modelling mechanism for problem solving
NASA Technical Reports Server (NTRS)
London, P.
1978-01-01
The paper develops a technique of dependency net modeling which relies on an explicit representation of justifications for beliefs held by the problem solver. Using these justifications, the modeling mechanism is able to determine the relevant lines of inference to pursue during problem solving. Three particular problem-solving difficulties which may be handled by the dependency-based technique are discussed: (1) subgoal violation detection, (2) description binding, and (3) maintaining a consistent world model.
Theory of the decision/problem state
NASA Technical Reports Server (NTRS)
Dieterly, D. L.
1980-01-01
A theory of the decision-problem state was introduced and elaborated. Starting with the basic model of a decision-problem condition, an attempt was made to explain how a major decision-problem may consist of subsets of decision-problem conditions composing different condition sequences. In addition, the basic classical decision-tree model was modified to allow for the introduction of a series of characteristics that may be encountered in an analysis of a decision-problem state. The resulting hierarchical model reflects the unique attributes of the decision-problem state. The basic model of a decision-problem condition was used as a base to evolve a more complex model that is more representative of the decision-problem state and may be used to initiate research on decision-problem states.
Applying an Information Problem-Solving Model to Academic Reference Work: Findings and Implications.
ERIC Educational Resources Information Center
Cottrell, Janet R.; Eisenberg, Michael B.
2001-01-01
Examines the usefulness of the Eisenberg-Berkowitz Information Problem-Solving model as a categorization for academic reference encounters. Major trends in the data include a high proportion of questions about location and access of sources, lack of synthesis or production activities, and consistent presence of system problems that impede the…
Dijkstra, Maria T M; Beersma, Bianca; Cornelissen, Roosmarijn A W M
2012-07-01
To test and extend the emerging Activity Reduces Conflict-Associated Strain (ARCAS) model, we predicted that the relationship between task conflict and employee strain would be weakened to the extent that people experience high organization-based self-esteem (OBSE). A survey among Dutch employees demonstrated that, consistent with the model, the conflict-employee strain relationship was weaker the higher employees' OBSE and the more they engaged in active problem-solving conflict management. Our data also revealed that higher levels of OBSE were related to more problem-solving conflict management. Moreover, consistent with the ARCAS model, we confirmed a conditional mediation model in which organization-based self-esteem, through its relationship with problem-solving conflict management, weakened the relationship between task conflict and employee strain. Potential applications of the results are discussed.
Collaborative problem solving with a total quality model.
Volden, C M; Monnig, R
1993-01-01
A collaborative problem-solving system committed to the interests of those involved complies with the teachings of the total quality management movement in health care. Deming espoused that any quality system must become an integral part of routine activities. A process that is used consistently in dealing with problems, issues, or conflicts provides a mechanism for accomplishing total quality improvement. The collaborative problem-solving process described here results in quality decision-making. This model incorporates Ishikawa's cause-and-effect (fishbone) diagram, Moore's key causes of conflict, and the steps of the University of North Dakota Conflict Resolution Center's collaborative problem solving model.
Graph Coloring Used to Model Traffic Lights.
ERIC Educational Resources Information Center
Williams, John
1992-01-01
Two scheduling problems, one involving setting up an examination schedule and the other describing traffic light problems, are modeled as colorings of graphs consisting of a set of vertices and edges. The chromatic number, the least number of colors necessary for coloring a graph, is employed in the solutions. (MDH)
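A small sketch of the idea, using an invented conflict graph: vertices are traffic flows (or exams), edges join pairs that conflict, and a greedy coloring gives an upper bound on the chromatic number, i.e., the minimum number of light phases (or exam slots).

```python
# Greedy coloring of a made-up conflict graph. Greedy is not optimal in
# general; it only upper-bounds the chromatic number.
def greedy_coloring(adjacency):
    colors = {}
    for v in adjacency:                                  # visit order affects the bound
        used = {colors[u] for u in adjacency[v] if u in colors}
        colors[v] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

flows = {                                                # edges = conflicting flows
    "N-S": {"E-W", "E-S"},
    "E-W": {"N-S", "S-W"},
    "E-S": {"N-S"},
    "S-W": {"E-W"},
}
coloring = greedy_coloring(flows)
print(coloring, "phases used:", len(set(coloring.values())))
```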
Chen, Diane; Drabick, Deborah A G; Burgers, Darcy E
2015-12-01
Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed.
Clarification process: Resolution of decision-problem conditions
NASA Technical Reports Server (NTRS)
Dieterly, D. L.
1980-01-01
A model of a general process which occurs in both decision-making and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition; with less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. It consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.
Martin, Monica J.; Conger, Rand D.; Schofield, Thomas J.; Dogan, Shannon J.; Widaman, Keith F.; Donnellan, M. Brent; Neppl, Tricia K.
2010-01-01
The current multigenerational study evaluates the utility of the Interactionist Model of Socioeconomic Influence on human development (IMSI) in explaining problem behaviors across generations. The IMSI proposes that the association between socioeconomic status (SES) and human development involves a dynamic interplay that includes both social causation (SES influences human development) and social selection (individual characteristics affect SES). As part of the developmental cascade proposed by the IMSI, the findings from this investigation showed that G1 adolescent problem behavior predicted later G1 SES, family stress, and parental emotional investments, as well as the next generation of children's problem behavior. These results are consistent with a social selection view. Consistent with the social causation perspective, we found a significant relation between G1 SES and family stress, and in turn, family stress predicted G2 problem behavior. Finally, G1 adult SES predicted both material and emotional investments in the G2 child. In turn, emotional investments predicted G2 problem behavior, as did material investments. Some of the predicted pathways varied by G1 parent gender. The results are consistent with the view that processes of both social selection and social causation account for the association between SES and human development. PMID:20576188
Effectiveness of discovery learning model on mathematical problem solving
NASA Astrophysics Data System (ADS)
Herdiana, Yunita; Wahyudin, Sispiyati, Ririn
2017-08-01
This research aims to describe the effectiveness of the discovery learning model on mathematical problem solving. It investigates students' problem-solving competency before and after learning with the discovery learning model. The population was grade VII students in a junior high school in West Bandung Regency. From nine classes, class VII B was randomly selected as the experiment class and class VII C as the control class, each consisting of 35 students. The method was a quasi-experiment. The instruments were a pre-test, worksheets, and a post-test on mathematical problem solving. Based on the research, it can be concluded that the problem-solving competency of students who received the discovery learning model reached the 80% level, falling in the medium category, which shows that the discovery learning model is effective for improving mathematical problem solving.
Havârneanu, Grigore M; Burkhardt, Jean-Marie; Silla, Anne
2017-12-01
Suicides and trespassing accidents result in more than 3800 fatalities in Europe, representing 88% of all fatalities occurring within the EU railway system. This paper presents a problem-solving model, which consists of a multistep approach structuring the analysis of a suicide or trespass-related problem on the railways. First, we present the method used to design, evaluate and improve the problem-solving model. Then we describe the model in detail: it comprises six steps with several subsequent actions, and each action is approached through a checklist of prompting questions and possible answers. At the end, we discuss the added value of this model for decision makers and its usability in the selection of optimal prevention measures.
NASA Astrophysics Data System (ADS)
Zirconia, A.; Supriyanti, F. M. T.; Supriatna, A.
2018-04-01
This study aims to determine the enhancement of students' generic science skills through implementation of the IDEAL problem-solving model in a genetic information course. The method of this research was mixed method, with a pretest-posttest nonequivalent control group design. Subjects were chemistry students enrolled in a biochemistry course, consisting of 22 students in the experimental class and 19 students in the control class. The instrument was an essay test covering 6 indicators of generic science skills: indirect observation, causality thinking, logical framing, self-consistent thinking, symbolic language, and concept development. The results showed that the genetic information course using the IDEAL problem-solving model enhanced generic science skills in the low category with…
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
A consistent transported PDF model for treating differential molecular diffusion
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
ERIC Educational Resources Information Center
McGarrity, DeShawn N.
2013-01-01
Society is faced with more complex problems than in the past because of rapid advancements in technology. These complex problems require multi-dimensional problem-solving abilities that are consistent with higher-order thinking skills. Bok (2006) posits that over 90% of U.S. faculty members consider critical thinking skills as essential for…
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
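The resilience condition can be checked mechanically. The helper below merely encodes the stated inequality for a given fault mix; it is a sanity-check utility, not the algorithm or its PVS formalization.

```python
# Check the hybrid-fault resilience condition for Interactive Consistency:
# n channels running m+1 rounds withstand a asymmetric, s symmetric, and
# b benign faults iff n > 2a + 2s + b + m and m >= a.
def withstands(n, a, s, b, m):
    return n > 2 * a + 2 * s + b + m and m >= a

# Example: 6 channels with 2 rounds (m = 1) tolerate one asymmetric plus
# one benign fault, since 6 > 2*1 + 2*0 + 1 + 1 and 1 >= 1.
print(withstands(n=6, a=1, s=0, b=1, m=1))   # True
```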
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-11-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for proton-proton scattering at 158 GeV are discussed.
SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis Smith; James Knudsen
As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for its operation, or in this case, the failure of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
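To make the N-out-of-M gate evaluation concrete, the sketch below computes the probability that at least k of the 5 thruster assemblies survive a phase, assuming independent assemblies with a placeholder failure probability; the paper's parametric common-cause treatment would modify this.

```python
# k-out-of-n success probability for independent components (binomial model).
# p_fail is a placeholder, not a SAPHIRE input; common-cause failures are ignored.
from math import comb

def at_least_k_of_n(k, n, p_fail):
    q = 1.0 - p_fail
    return sum(comb(n, j) * q**j * p_fail**(n - j) for j in range(k, n + 1))

# e.g. a phase whose success criterion is 3-out-of-5 thruster assemblies:
print(at_least_k_of_n(3, 5, p_fail=0.05))
```

Phase-dependent success criteria would swap in a different k per phase, mirroring the separate fault tree the authors build for each mission phase.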
Trait Affectivity and Nonreferred Adolescent Conduct Problems
ERIC Educational Resources Information Center
Loney, Bryan R.; Lima, Elizabeth N.; Butler, Melanie A.
2006-01-01
This study examined for profiles of positive trait affectivity (PA) and negative trait affectivity (NA) associated with adolescent conduct problems. Prior trait affectivity research has been relatively biased toward the assessment of adults and internalizing symptomatology. Consistent with recent developmental modeling of antisocial behavior, this…
Solution to the sign problem in a frustrated quantum impurity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hann, Connor T., E-mail: connor.hann@yale.edu; Huffman, Emilie; Chandrasekharan, Shailesh
2017-01-15
In this work we solve the sign problem of a frustrated quantum impurity model consisting of three quantum spin-half chains interacting through an anti-ferromagnetic Heisenberg interaction at one end. We first map the model into a repulsive Hubbard model of spin-half fermions hopping on three independent one-dimensional chains that interact through a triangular hopping at one end. We then convert the fermion model into an inhomogeneous one-dimensional model and express the partition function as a weighted sum over fermion worldline configurations. By imposing a pairing of fermion worldlines in half the space, we show that all negative-weight configurations can be eliminated. This pairing naturally leads to the original frustrated quantum spin model at half filling and thus solves its sign problem.
Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara
2009-01-01
The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.
Comment on self-consistent model of black hole formation and evaporation
NASA Astrophysics Data System (ADS)
Ho, Pei-Ming
2015-08-01
In an earlier work, Kawai et al. proposed a model of black-hole formation and evaporation, in which the geometry of a collapsing shell of null dust is studied, including consistently the back reaction of its Hawking radiation. In this note, we illuminate the implications of their work, focusing on the resolution of the information loss paradox and the problem of the firewall.
NASA Astrophysics Data System (ADS)
Mushlihuddin, R.; Nurafifah; Irvan
2018-01-01
Students' low ability in mathematical problem solving points to a less effective learning process in the classroom. Effective learning is learning that affects students' math skills, one of which is problem-solving ability. Problem-solving capability consists of several stages: understanding the problem, planning the solution, solving the problem as planned, and re-examining the procedure and the outcome. The purpose of this research was to know: (1) is there any influence of the PBL model in improving students' mathematical problem-solving ability in a vector analysis course? (2) is the PBL model effective in improving students' mathematical problem-solving skills in vector analysis courses? This research was a quasi-experiment. The data analysis proceeded from data description through a prerequisite normality test to hypothesis testing using the ANCOVA test and a gain test. The results showed that: (1) there is an influence of the PBL model in improving students' mathematical problem-solving abilities in vector analysis courses; (2) the PBL model is effective in improving students' problem-solving skills in vector analysis courses, with a medium category.
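Assuming the gain test mentioned above is the standard normalized (Hake) gain g = (post − pre)/(max − pre), which is our assumption rather than something the abstract states, a minimal computation looks as follows; the scores are invented.

```python
# Normalized (Hake) gain for pre/post test scores; assumed interpretation
# of the abstract's "Gain test", with made-up scores.
import numpy as np

def normalized_gain(pre, post, max_score=100.0):
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (max_score - pre)

pre = [40, 55, 60, 35]
post = [70, 80, 85, 65]
print(normalized_gain(pre, post).mean())   # ~0.54: medium (0.3 <= g < 0.7)
```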
ERIC Educational Resources Information Center
Erturk Kara, H. Gozde
2017-01-01
Aim of this study is to examine the factors that affect children's behavioral problems and present the relationships between children's behavioral problems, social skills and teacher child relationship. Relational screening model was preferred for this study. Study group consisted of 53, 36-48 months of age children who studied at early childhood…
Information-Processing Models and Curriculum Design
ERIC Educational Resources Information Center
Calfee, Robert C.
1970-01-01
"This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)
Students' Models of Curve Fitting: A Models and Modeling Perspective
ERIC Educational Resources Information Center
Gupta, Shweta
2010-01-01
The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
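A schematic of the constrained-calibration pattern described above, assuming a toy three-reaction cycle with a Wegscheider-type detailed-balance constraint; this shows the optimization shape of TCMC-like methods, not the paper's EGF/ERK calibration.

```python
# Fit rate constants to noisy estimates subject to a detailed-balance
# constraint around a closed cycle: k1f*k2f*k3f == k1r*k2r*k3r.
# Working in log-space keeps rates positive and makes the constraint linear.
import numpy as np
from scipy.optimize import minimize

k_obs = np.array([1.2, 0.8, 2.0, 1.1, 0.5, 3.1])   # noisy [k1f,k1r,k2f,k2r,k3f,k3r] (toy)

def misfit(logk):
    return np.sum((np.exp(logk) - k_obs) ** 2)

def cycle_constraint(logk):                          # == 0 enforces the cycle condition
    return (logk[0] + logk[2] + logk[4]) - (logk[1] + logk[3] + logk[5])

res = minimize(misfit, x0=np.log(k_obs),
               constraints=[{"type": "eq", "fun": cycle_constraint}])
print(np.exp(res.x))                                 # thermodynamically feasible rates
```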
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
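For orientation, the form of Vohradský's rate law commonly used in this line of work is dx_i/dt = k1_i·σ(Σ_j w_ij x_j + b_i) − k2_i x_i with a sigmoid σ. The sketch below integrates a two-gene toy instance with invented parameters; an inference method would instead fit W, b, k1, and k2 to expression time-series.

```python
# Two-gene toy instance of a Vohradsky-type network model; all parameters
# are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

k1 = np.array([1.0, 0.8])         # maximal expression rates (assumed)
k2 = np.array([0.5, 0.4])         # degradation rates (assumed)
W = np.array([[0.0, -2.0],        # gene 2 represses gene 1 (assumed)
              [3.0,  0.0]])       # gene 1 activates gene 2 (assumed)
b = np.array([1.0, -1.0])

def rhs(t, x):
    return k1 / (1.0 + np.exp(-(W @ x + b))) - k2 * x

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 0.1])
print(sol.y[:, -1])               # expression levels approaching steady state
```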
Smith, Douglas C; Hall, James A; Jang, Mijin; Arndt, Stephan
2009-01-01
This study evaluated whether adherence to the Strengths-Oriented Referral for Teens (SORT) model, a motivational interviewing (MI)-consistent intervention addressing ambivalence about attending treatment, positively predicted adolescents' initial-session attendance. Therapist adherence was rated in 54 audiotaped SORT sessions by coders who were blind to treatment-entry status. Higher adherence scores reflected greater use of MI and solution focused language, discussion of client strengths, and dialogue with families on treatment need and options. Therapist adherence during adolescent segments interacted with adolescent problem perception. Predicted probabilities of attending initial sessions increased for low-problem-perception adolescents at increasingly higher therapist adherence. Although replication studies are needed, the SORT model of providing MI-consistent debriefing following initial assessments appears to be a promising approach for increasing treatment entry. Initial support for the treatment-matching hypothesis was found for substance-misusing adolescents contemplating treatment entry.
Attractor learning in synchronized chaotic systems in the presence of unresolved scales
NASA Astrophysics Data System (ADS)
Wiegerinck, W.; Selten, F. M.
2017-12-01
Recently, supermodels consisting of an ensemble of interacting models, synchronizing on a common solution, have been proposed as an alternative to the common non-interactive multi-model ensembles in order to improve climate predictions. The connection terms in the interacting ensemble are to be optimized based on the data. The supermodel approach has been successfully demonstrated in a number of simulation experiments with an assumed ground truth and a set of good, but imperfect models. The supermodels were optimized with respect to their short-term prediction error. Nevertheless, they produced long-term climatological behavior that was close to the long-term behavior of the assumed ground truth, even in cases where the long-term behavior of the imperfect models was very different. In these supermodel experiments, however, a perfect model class scenario was assumed, in which the ground truth and imperfect models belong to the same model class and only differ in parameter setting. In this paper, we consider the imperfect model class scenario, in which the ground truth model class is more complex than the model class of imperfect models due to unresolved scales. We perform two supermodel experiments in two toy problems. The first one consists of a chaotically driven Lorenz 63 oscillator ground truth and two Lorenz 63 oscillators with constant forcings as imperfect models. The second one is more realistic and consists of a global atmosphere model as ground truth and imperfect models that have perturbed parameters and reduced spatial resolution. In both problems, we find that supermodel optimization with respect to short-term prediction error can lead to a long-term climatological behavior that is worse than that of the imperfect models. However, we also show that attractor learning can remedy this problem, leading to supermodels with long-term behavior superior to the imperfect models.
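A toy supermodel in the spirit described above: two imperfect Lorenz-63 models, differing only in the ρ parameter, are nudged toward each other through a fixed connection coefficient so that the ensemble synchronizes on a common solution. The coupling value is illustrative rather than trained, and no unresolved-scale ground truth is included.

```python
# Two coupled imperfect Lorenz-63 models synchronizing on a common solution.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(x, sigma, rho, beta):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

C = 5.0                                   # connection coefficient (assumed, untrained)

def supermodel(t, z):
    x, y = z[:3], z[3:]
    dx = lorenz(x, 10.0, 26.0, 8/3) + C * (y - x)   # imperfect model 1 (rho = 26)
    dy = lorenz(y, 10.0, 30.0, 8/3) + C * (x - y)   # imperfect model 2 (rho = 30)
    return np.concatenate([dx, dy])

z0 = np.array([1.0, 1.0, 1.0, 1.1, 0.9, 1.0])
sol = solve_ivp(supermodel, (0.0, 50.0), z0, max_step=0.01)
print(np.abs(sol.y[:3, -1] - sol.y[3:, -1]).max())  # small => synchronized
```

Attractor learning, as studied in the paper, would additionally adjust the connection terms against long-term statistics rather than only short-term prediction error.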
Hastings, Paul D; Helm, Jonathan; Mills, Rosemary S L; Serbin, Lisa A; Stack, Dale M; Schwartzman, Alex E
2015-07-01
This investigation evaluated a multilevel model of dispositional and environmental factors contributing to the development of internalizing problems from preschool-age to school-age. In a sample of 375 families (185 daughters, 190 sons) drawn from three independent samples, preschoolers' behavioral inhibition, cortisol and gender were examined as moderators of the links between mothers' negative parenting behavior, negative emotional characteristics, and socioeconomic status when children were 3.95 years, and their internalizing problems when they were 8.34 years. Children's dispositional characteristics moderated all associations between these environmental factors and mother-reported internalizing problems in patterns that were consistent with either diathesis-stress or differential-susceptibility models of individual-environment interaction, and with gender models of developmental psychopathology. Greater inhibition and lower socioeconomic status were directly predictive of more teacher reported internalizing problems. These findings highlight the importance of using multilevel models within a bioecological framework to understand the complex pathways through which internalizing difficulties develop.
Adolescents' Emotion Regulation Strategies, Self-Concept, and Internalizing Problems
ERIC Educational Resources Information Center
Hsieh, Manying; Stright, Anne Dopkins
2012-01-01
This study examined the relationships among adolescents' emotion regulation strategies (suppression and cognitive reappraisal), self-concept, and internalizing problems using structural equation modeling. The sample consisted of 438 early adolescents (13 to 15 years old) in Taiwan, including 215 boys and 223 girls. For both boys and girls,…
Social Skills, Problem Behaviors and Classroom Management in Inclusive Preschool Settings
ERIC Educational Resources Information Center
Karakaya, Esra G.; Tufan, Mumin
2018-01-01
This study aimed to determine preschool teachers' classroom management skills and investigate the relationships between teachers' classroom management skills and inclusion students' social skills and problem behaviors. Relational screening model was used as the research method. Study group consisted of 42 pre-school teachers working in Kocaeli…
Contributions to Statistical Problems Related to Microarray Data
ERIC Educational Resources Information Center
Hong, Feng
2009-01-01
Microarray is a high-throughput technology to measure gene expression. Analysis of microarray data brings many interesting and challenging problems. This thesis consists of three studies related to microarray data. First, we propose a Bayesian model for microarray data and use Bayes factors to identify differentially expressed genes. Second, we…
Raykos, Bronwyn C; McEvoy, Peter M; Fursland, Anthea
2017-09-01
The present study evaluated the relative clinical validity of two interpersonal models of the maintenance of eating disorders, IPT-ED (Rieger et al., ) and the interpersonal model of binge eating (Wilfley, MacKenzie, Welch, Ayres, & Weissman, ; Wilfley, Pike, & Striegel-Moore, ). While both models propose an indirect relationship between interpersonal problems and eating disorder symptoms via negative affect, IPT-ED specifies negative social evaluation as the key interpersonal problem, and places greater emphasis on the role of low self-esteem as an intermediate variable between negative social evaluation and eating pathology. Treatment-seeking individuals (N = 306) with a diagnosed eating disorder completed measures of socializing problems, generic interpersonal problems, self-esteem, eating disorder symptoms, and negative affect (depression and anxiety). Structural equation models were run for both models. Consistent with IPT-ED, a significant indirect pathway was found from socializing problems to eating disorder symptoms via low self-esteem and anxiety symptoms. There was also a direct pathway from low self-esteem to eating disorder symptoms. Using a socializing problems factor in the model resulted in a significantly better fit than a generic interpersonal problems factor. Inconsistent with both interpersonal models, the direct pathway from socializing problems to eating disorder symptoms was not supported. Interpersonal models that included self-esteem and focused on socializing problems (rather than generic interpersonal problems) explained more variance in eating disorder symptoms. Future experimental, prospective, and treatment studies are required to strengthen the case that these pathways are causal. © 2017 Wiley Periodicals, Inc.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
NASA Astrophysics Data System (ADS)
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
Pansharpening via coupled triple factorization dictionary learning
Skau, Erik; Wohlberg, Brendt; Krim, Hamid; ...
2016-03-01
Data fusion is the operation of integrating data from different modalities to construct a single consistent representation. This paper proposes variations of coupled dictionary learning that introduce an additional factorization. One variation of this model is applicable to the pansharpening data fusion problem. Real-world pansharpening data were used to train and test the proposed formulation. The results demonstrate that the data fusion model can successfully be applied to the pansharpening problem.
Invariant characteristics of self-organization modes in Belousov reaction modeling
NASA Astrophysics Data System (ADS)
Glyzin, S. D.; Goryunov, V. E.; Kolesov, A. Yu
2018-01-01
We consider the problem of mathematical modeling of oxidation-reduction oscillatory chemical reactions based on the mechanism of the Belousov reaction. The interaction of the main components in such a reaction can be interpreted by a phenomenologically similar “predator-prey” model. We therefore consider a parabolic boundary value problem consisting of three Volterra-type equations, which serves as a mathematical model of this reaction. We carry out a local study of the neighborhood of the system’s non-trivial equilibrium state and construct the normal form of the system under consideration. Finally, we perform a numerical analysis of the coexisting chaotic oscillatory modes of the boundary value problem in a flat domain, which have different natures and occur as the diffusion coefficient decreases.
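As a rough numerical illustration of such a parabolic Volterra-type system, the sketch below advances a two-component diffusive predator-prey model with an explicit scheme and zero-flux boundaries. The kinetics, coefficients, and grid are placeholders, not the three-equation Belousov model studied in the paper.

```python
# Explicit finite-difference step for a diffusive Volterra ("predator-prey")
# system on an interval; all numbers are illustrative.
import numpy as np

n, dx, dt, D = 100, 0.01, 1e-3, 1e-3
u = 1.0 + 0.1 * np.random.default_rng(1).random(n)   # prey-like component
v = 1.0 + 0.1 * np.random.default_rng(2).random(n)   # predator-like component

def laplacian(f):
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0], g[-1] = g[1], g[-2]                        # zero-flux boundaries
    return g

for _ in range(20000):                               # integrate to t = 20
    u, v = (u + dt * (D * laplacian(u) + u * (1.0 - v)),
            v + dt * (D * laplacian(v) + v * (u - 1.0)))
print(u.mean(), v.mean())
```

Decreasing D in systems of this kind is what drives the transition to the spatially structured oscillatory modes the authors analyze.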
Mathematical form models of tree trunks
Rudolfs Ozolins
2000-01-01
Assortment structure analysis of tree trunks is a characteristic problem that can be solved by using mathematical modeling and standard computer programs. A mathematical form model of tree trunks consists of tapering curve equations and their parameters. Parameters for nine species were obtained by processing measurements of 2,794 model trees and studying the…
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.
2011-01-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. © 2011.
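To give a flavor of the depth-averaged equations involved, here is a minimal 1-D shallow-water interface flux using a simple Rusanov (local Lax-Friedrichs) approximation. GeoClaw itself uses wave-propagation Riemann solvers and a wet-dry algorithm; this sketch assumes wet cells (h > 0) throughout.

```python
# Rusanov flux for the 1-D shallow water equations with state q = (h, hu).
import numpy as np

g = 9.81                                             # gravitational acceleration

def physical_flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov_flux(qL, qR):
    hL, huL = qL
    hR, huR = qR
    smax = max(abs(huL / hL) + np.sqrt(g * hL),      # fastest wave speed |u| + sqrt(g h)
               abs(huR / hR) + np.sqrt(g * hR))
    return (0.5 * (physical_flux(hL, huL) + physical_flux(hR, huR))
            - 0.5 * smax * (np.array(qR) - np.array(qL)))

print(rusanov_flux((1.0, 0.0), (0.5, 0.0)))          # dam-break interface flux
```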
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part is related to the balance problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
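To make the balance objectives concrete, here is a sketch of evaluating one candidate solution: given task times and an assignment of tasks to stations, compute the cycle time, the number of stations, and the line efficiency. The data and encoding are invented, and precedence constraints, U-line crossover stations, and the human-cost component are all omitted.

```python
# Evaluate balance objectives for one candidate assignment (toy data).
task_time = {1: 4.0, 2: 3.0, 3: 5.0, 4: 2.0, 5: 6.0}
assignment = {1: 0, 2: 0, 3: 1, 4: 2, 5: 2}          # task -> station (assumed encoding)

def balance_objectives(task_time, assignment):
    loads = {}
    for task, station in assignment.items():
        loads[station] = loads.get(station, 0.0) + task_time[task]
    cycle_time = max(loads.values())                 # minimize
    n_stations = len(loads)                          # minimize
    efficiency = sum(loads.values()) / (n_stations * cycle_time)   # maximize
    return cycle_time, n_stations, efficiency

print(balance_objectives(task_time, assignment))     # (8.0, 3, ~0.83)
```

An evolutionary algorithm such as NSGA-II or MOPSO would search over feasible assignments, trading these objectives off against the hiring/firing/training/salary costs.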
Olsen, Seth
2015-01-28
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler's hydrol blue. The diabatic CASVB representation is shown to vary weakly for "temperatures" corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.
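A minimal numerical illustration of the canonical-ensemble idea, assuming toy CAS-CI state energies: the state-averaging weights become Boltzmann factors at an "inverse temperature" β that acts as a Lagrange multiplier, not a physical temperature. A microcanonical SA-CASSCF would instead weight the states equally.

```python
# Boltzmann state-averaging weights over toy CAS-CI energies.
import numpy as np

E = np.array([0.0, 0.05, 0.12])     # CAS-CI state energies in hartree (invented)
beta = 1.0 / 0.0835                 # 1 / (visible-photon-scale energy ~ 2.3 eV), assumed

w = np.exp(-beta * (E - E.min()))
w /= w.sum()                        # canonical-ensemble weights
print(w)                            # contrast with microcanonical: [1/3, 1/3, 1/3]
```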
Numerical methods in Markov chain modeling
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef; Stewart, William J.
1989-01-01
Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems are compared.
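A compact example of the core computation: power iteration for the left eigenvector π satisfying πP = π on a small dense chain. The paper targets large sparse, nonsymmetric matrices with Krylov subspace methods, which this toy does not attempt.

```python
# Stationary distribution of a small Markov chain by power iteration.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],      # row-stochastic transition matrix (toy)
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

pi = np.full(3, 1.0 / 3.0)          # start from the uniform distribution
for _ in range(1000):
    pi = pi @ P                     # left multiplication: pi P = pi at the fixed point
pi /= pi.sum()
print(pi)
```

Equivalently, π solves the homogeneous singular system π(P − I) = 0 with the normalization Σπ_i = 1, the formulation mentioned above.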
Bicycles, Birds, Bats and Balloons: New Applications for Algebra Classes.
ERIC Educational Resources Information Center
Yoshiwara, Bruce; Yoshiwara, Kathy
This collection of activities is intended to enhance the teaching of college algebra through the use of modeling. The problems use real data and involve the representation and interpretation of the data. The concepts addressed include rates of change, linear and quadratic regression, and functions. The collection consists of eight problems, four…
An ultra-weak sector, the strong CP problem and the pseudo-Goldstone dilaton
Allison, Kyle; Hill, Christopher T.; Ross, Graham G.
2014-12-29
In the context of a Coleman–Weinberg mechanism for the Higgs boson mass, we address the strong CP problem. We show that a DFSZ-like invisible axion model with a gauge-singlet complex scalar field S, whose couplings to the Standard Model are naturally ultra-weak, can solve the strong CP problem and simultaneously generate acceptable electroweak symmetry breaking. The ultra-weak couplings of the singlet S are associated with underlying approximate shift symmetries that act as custodial symmetries and maintain technical naturalness. The model also contains a very light pseudo-Goldstone dilaton that is consistent with cosmological Polonyi bounds, and the axion can be the dark matter of the universe. We further outline how a SUSY version of this model, which may be required in the context of Grand Unification, can avoid introducing a hierarchy problem.
Piehler, Timothy F; Bloomquist, Michael L; August, Gerald J; Gewirtz, Abigail H; Lee, Susanne S; Lee, Wendy S C
2014-01-01
A culturally diverse sample of formerly homeless youth (ages 6-12) and their families (n = 223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided 4 annual behaviorally-based ratings of executive functioning (EF) and conduct problems, including at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. In order to examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems along with intervention status as a covariate. A significant mediational process emerged, with participation in the intervention promoting growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lends further support to its importance as a target in preventive interventions with populations at risk for conduct problems.
On a problem of reconstruction of a discontinuous function by its Radon transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derevtsov, Evgeny Yu.; Maltseva, Svetlana V.; Svetov, Ivan E.
A problem of reconstructing a discontinuous function from its Radon transform is considered. One approach to the numerical solution of the problem consists of the following sequential steps: visualization of the set of break points; identification of this set; determination of jump values; and elimination of discontinuities. We consider three of the listed subproblems, omitting the determination of jump values. The subproblems are investigated by mathematical modeling using numerical experiments. The simulation results are satisfactory and encourage further development of the approach.
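As a rough illustration of the first step described above (visualizing candidate break points), the following minimal Python sketch reconstructs a piecewise-constant test function from its Radon transform with scikit-image and thresholds the reconstructed gradient magnitude; the phantom, the angle grid, and the 0.5 threshold are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: reconstruct a discontinuous function (piecewise-constant
# phantom) from its Radon transform and flag candidate break points.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                  # piecewise-constant test function
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)           # forward Radon transform
recon = iradon(sinogram, theta=theta)          # filtered back-projection

# Candidate set of break points: pixels with large reconstructed gradient.
gy, gx = np.gradient(recon)
edge_strength = np.hypot(gx, gy)
break_points = edge_strength > 0.5 * edge_strength.max()
print("candidate break-point pixels:", int(break_points.sum()))
```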
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Fast, P.; Kraus, M.
2006-01-01
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak, where the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
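A hedged sketch of this style of outbreak characterization, not the paper's implementation: a grid posterior over the number infected N and the infection time t0, using a toy lognormal incubation model and a Poisson approximation to the daily case counts. The observed counts and incubation parameters below are invented for illustration, and the average dose is omitted for brevity.

```python
# Grid evaluation of a Bayesian posterior for (N, t0) from a short case series.
import numpy as np
from scipy.stats import lognorm

observed = np.array([0, 2, 9, 17, 21])        # hypothetical cases/day, days 0..4
days = np.arange(len(observed))
incubation = lognorm(s=0.5, scale=4.0)        # assumed incubation distribution

N_grid = np.arange(10, 501)                   # candidate outbreak sizes
t0_grid = np.linspace(-3.0, 0.0, 31)          # candidate infection times (days)
logpost = np.zeros((N_grid.size, t0_grid.size))
for j, t0 in enumerate(t0_grid):
    # Probability that a given infected person is diagnosed on each day.
    p = incubation.cdf(days + 1 - t0) - incubation.cdf(days - t0)
    lam = np.outer(N_grid, p)                 # Poisson approximation to binomial
    logpost[:, j] = (observed * np.log(lam + 1e-300) - lam).sum(axis=1)

i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print(f"MAP estimate: N = {N_grid[i]}, t0 = {t0_grid[j]:.2f} days")
```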
Ghanbari, J; Naghdabadi, R
2009-07-22
We have used a hierarchical multiscale modeling scheme for the analysis of cortical bone, considering it as a nanocomposite. This scheme consists of the definition of two boundary value problems, one for the macroscale and another for the microscale. The coupling between these scales is done using the homogenization technique. At every material point at which the constitutive model is needed, a microscale boundary value problem is defined using a macroscopic kinematical quantity and solved. Using the described scheme, we have studied the elastic properties of cortical bone, considering its nanoscale microstructural constituents with various mineral volume fractions. Since the microstructure of bone consists of mineral platelets of nanometer size embedded in a protein matrix, it is similar to the microstructure of soft-matrix nanocomposites reinforced with hard nanostructures. Considering a representative volume element (RVE) of the microstructure of bone as the microscale problem in our hierarchical multiscale modeling scheme, the global behavior of bone is obtained under various macroscopic loading conditions. This scheme may be suitable for modeling arbitrary bone geometries subjected to a variety of loading conditions. Using the presented method, mechanical properties of cortical bone, including the elastic moduli and Poisson's ratios in two major directions and the shear modulus, are obtained for different mineral volume fractions.
Joh, Ju Youn; Kim, Sun; Park, Jun Li; Kim, Yeon Pyo
2013-05-01
The Family Adaptability and Cohesion Evaluation Scale (FACES) III, based on the circumplex model, has been widely used in investigating family function. However, criticism of the curvilinear hypothesis of the circumplex model has always come from an empirical point of view. This study examined the relationship between family adaptability, cohesion, and adolescent problem behaviors, and in particular tested the consistency of the curvilinear hypotheses with FACES III. We used data from 398 adolescent participants who were in middle school. A self-reported questionnaire was used to evaluate the FACES III and the Youth Self Report. According to the level of family adaptability, significant differences were evident in internalizing problems (P = 0.014), but the results for externalizing problems were not significant (P = 0.305). According to the level of family cohesion, significant differences were found in both internalizing problems (P = 0.002) and externalizing problems (P = 0.004). The relationship between the dimensions of adaptability, cohesion, and adolescent problem behaviors was not curvilinear. In other words, adolescents with high adaptability and high cohesion showed low problem behaviors.
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community, as it can be easily translated to several real-world problems arising in areas such as resource allocation, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions, whether used on its own or partially included in algorithmic schemes, is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best known results for some of the most common benchmark instances.
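For intuition about the problem definition (not the proposed set-partitioning reformulation), here is a tiny exhaustive-search sketch of an MMKP instance: exactly one item is chosen from each group to maximize profit subject to two capacity dimensions. The instance data are hypothetical.

```python
# Brute-force MMKP on a toy instance: one item per group, 2D capacity limits.
from itertools import product

# Per group, items given as (profit, (weight_dim1, weight_dim2)).
groups = [
    [(10, (3, 2)), (7, (2, 1)), (4, (1, 1))],
    [(6,  (2, 3)), (9, (4, 2)), (5, (2, 2))],
    [(8,  (3, 3)), (3, (1, 1)), (6, (2, 3))],
]
capacity = (8, 7)

best_profit, best_choice = -1, None
for choice in product(*groups):               # one item from each group
    weights = [sum(w[d] for _, w in choice) for d in range(len(capacity))]
    if all(wd <= cd for wd, cd in zip(weights, capacity)):
        profit = sum(p for p, _ in choice)
        if profit > best_profit:
            best_profit, best_choice = profit, choice
print(best_profit, best_choice)
```

Exact models such as the set-partitioning reformulation discussed in the paper are aimed at instances far beyond the reach of this kind of enumeration.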
Consistency functional map propagation for repetitive patterns
NASA Astrophysics Data System (ADS)
Wang, Hao
2017-09-01
Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding a segmentation cue within a functional map model. However, this model cannot tackle repetitive patterns directly because of their large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. The propagation model operates in two steps. The first aligns the patterns from a local region, transferring segmentation functions among patterns; it can be cast as an L-norm optimization problem. The second step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses demonstrate the encouraging performance of the proposed algorithm in detecting and segmenting repetitive patterns.
Analysis and Reduction of Complex Networks Under Uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Roger G
2014-07-31
This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University) and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations; 2) methodology and algorithms to characterize probability measures on graph structures with random flows, an important problem in characterizing random demand (encountered in smart grids) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov chains (with ubiquitous relevance); and 3) methodology and algorithms for treating inequalities in uncertain systems, an important problem in the context of models for material failure and network flows under uncertainty, where conditions of failure or flow are described in the form of inequalities between the state variables.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. Aeroservoelastic data analysis has to date paid insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge of this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
NASA Astrophysics Data System (ADS)
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Nathan; Menikoff, Ralph
2017-02-03
Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The Particle Swarm Optimization (PSO) algorithm resolves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e., the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the choice of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT), and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
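For readers unfamiliar with the optimizer itself, the following is a generic, minimal PSO sketch (not the authors' implementation); the sphere function stands in for the data-misfit objective that a layered-resistivity forward model would provide, and the inertia/acceleration constants are common textbook values, not the paper's settings.

```python
# Minimal particle swarm optimization of a stand-in misfit function.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                              # stand-in misfit: sphere function
    return np.sum(x**2, axis=-1)

n_particles, n_dims, n_iters = 30, 4, 200
w, c1, c2 = 0.72, 1.49, 1.49                   # inertia and acceleration weights

pos = rng.uniform(-5, 5, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)            # simple box constraint
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best misfit:", pbest_val.min(), "at", gbest)
```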
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
Revitalizing Adversary Evaluation: Deep Dark Deficits or Muddled Mistaken Musings
ERIC Educational Resources Information Center
Thurston, Paul
1978-01-01
The adversary evaluation model consists of utilizing the judicial process as a metaphor for educational evaluation. In this article, previous criticism of the model is addressed and its fundamental problems are detailed. It is speculated that the model could be improved by borrowing ideas from other legal forms of inquiry. (Author/GC)
Inverse problem for multispecies ferromagneticlike mean-field models in phase space with many states
NASA Astrophysics Data System (ADS)
Fedele, Micaela; Vernia, Cecilia
2017-10-01
In this paper we solve the inverse problem for the Curie-Weiss model and its multispecies version when multiple thermodynamic states are present, as in the low-temperature phase where the phase space is clustered. The inverse problem consists of reconstructing the model parameters starting from configuration data generated according to the distribution of the model. We demonstrate that, without taking into account the presence of many states, the application of the inversion procedure produces very poor inference results. To overcome this problem, we use a clustering algorithm. When the system has two symmetric states of positive and negative magnetization, the parameter reconstruction can also be obtained with smaller computational effort simply by flipping the sign of the magnetizations from positive to negative (or vice versa). The parameter reconstruction fails when the system undergoes a phase transition; in that case we give the correct inversion formulas for the Curie-Weiss model and show that they can be used to measure how close the system gets to being critical.
NASA Astrophysics Data System (ADS)
Lufri, L.; Fitri, R.; Yogica, R.
2018-04-01
The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for the course Animal Development. This is development research producing a product in the form of a learning model, which consists of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of the sub-products, obtained using questionnaires completed by validators from various fields of expertise (subject matter, learning strategy, Bahasa). Data were analysed using descriptive statistics. The results show that the problem-solving and meaningful-learning model has been produced; the sub-products declared appropriate by the experts include the learning model syntax and the student worksheets.
REVIEWS OF TOPICAL PROBLEMS: Physics of pulsar magnetospheres
NASA Astrophysics Data System (ADS)
Beskin, Vasilii S.; Gurevich, Aleksandr V.; Istomin, Yakov N.
1986-10-01
A self-consistent model of the magnetosphere of a pulsar is constructed. This model is based on a successive solution of the equations describing global properties of the magnetosphere and on a comparison of the basic predictions of the developed theory and observational data.
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper presents a heuristic approach to lot streaming based on critical machine consideration for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine, which is considered critical for valid reasons; such problems are known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot streaming strategy was also suggested.
ERIC Educational Resources Information Center
Natsuaki, Misaki N.; Ge, Xiaojia; Reiss, David; Neiderhiser, Jenae M.
2009-01-01
This study investigated the prospective links between sibling aggression and the development of externalizing problems using a multilevel modeling approach with a genetically sensitive design. The sample consisted of 780 adolescents (390 sibling pairs) who participated in 2 waves of the Nonshared Environment in Adolescent Development project.…
ERIC Educational Resources Information Center
Soenens, Bart; Vansteenkiste, Maarten; Luyckx, Koen; Goossens, Luc
2006-01-01
Parental monitoring, assessed as (perceived) parental knowledge of the child's behavior, has been established as a consistent predictor of problem behavior. However, recent research indicates that parental knowledge has more to do with adolescents' self-disclosure than with parents' active monitoring. Although these findings may suggest that…
ERIC Educational Resources Information Center
Syahputra, Edi; Surya, Edy
2017-01-01
This paper summarizes a postgraduate team study at the 11th grade. The objective of this study is to develop a learning model based on problem solving which can build higher-order thinking in mathematics learning in SMA/MA. The dissemination subjects consist of 11th-grade students in SMA/MA in 3 kabupaten/kota in North Sumatera, namely:…
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.
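The ridge idea underlying regularized PLSc can be illustrated with a minimal sketch (ordinary ridge regression, not the PLSc algorithm itself): shrinkage stabilizes coefficient estimates when predictors, here standing in for latent-variable scores, are nearly collinear. All data below are simulated for illustration.

```python
# Ridge shrinkage as a remedy for multicollinearity.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)            # severe multicollinearity
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def ridge(X, y, lam):
    # beta = (X'X + lam * I)^(-1) X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS:  ", ridge(X, y, 0.0))              # unstable, inflated variance
print("ridge:", ridge(X, y, 10.0))             # shrunken, stable estimates
```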
NASA Astrophysics Data System (ADS)
Willemse, Tim A. C.
We introduce the concept of consistent correlations for parameterised Boolean equation systems (PBESs), motivated largely by the laborious proofs of correctness required for most manipulations in this setting. Consistent correlations focus on relating the equations that occur in PBESs, rather than their solutions. For a fragment of PBESs, consistent correlations are shown to coincide with a recently introduced form of bisimulation. Finally, we show that bisimilarity on processes induces consistent correlations on PBESs encoding model checking problems. We apply our theory to two example manipulations from the literature.
Benchmark problems for numerical implementations of phase field models
Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...
2016-10-01
Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
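To convey the flavor of the spinodal-decomposition physics in the first benchmark, here is a minimal explicit finite-difference Cahn-Hilliard solver on a periodic grid. This is an illustrative sketch, not the CHiMaD/NIST benchmark specification, and the grid size, time step, and free-energy parameters are not the benchmark's values.

```python
# Explicit Cahn-Hilliard on a periodic grid: spinodal decomposition demo.
import numpy as np

rng = np.random.default_rng(2)
N, dx, dt, steps = 64, 1.0, 0.01, 2000
kappa, M = 1.0, 1.0                            # gradient energy coeff., mobility
c = 0.5 + 0.05 * rng.standard_normal((N, N))   # near-critical initial composition

def lap(f):                                    # periodic 5-point Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(steps):
    # Chemical potential for double-well free energy f(c) = c^2 (1 - c)^2.
    mu = 2 * c * (1 - c) * (1 - 2 * c) - kappa * lap(c)
    c = c + dt * M * lap(mu)                   # Cahn-Hilliard update

print("composition range after coarsening:", c.min(), c.max())
```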
Linking Family Characteristics with Poor Peer Relations: The Mediating Role of Conduct Problems
Bierman, Karen Linn; Smoot, David L.
2012-01-01
Parent, teacher, and peer ratings were collected for 75 grade school boys to test the hypothesis that certain family interaction patterns would be associated with poor peer relations. Path analyses provided support for a mediational model, in which punitive and ineffective discipline was related to child conduct problems in home and school settings which, in turn, predicted poor peer relations. Further analyses suggested that distinct subgroups of boys could be identified who exhibited conduct problems at home only, at school only, in both settings, or in neither setting. Boys who exhibited cross-situational conduct problems were more likely to experience multiple concurrent problems (e.g., in both home and school settings) and were more likely than any other group to experience poor peer relations. However, only about one-third of the boys with poor peer relations in this sample exhibited problem profiles consistent with the proposed model (e.g., experienced high rates of punitive/ineffective home discipline and exhibited conduct problems in home and school settings), suggesting that the proposed model reflects one common (but not exclusive) pathway to poor peer relations. PMID:1865049
A variational technique for smoothing flight-test and accident data
NASA Technical Reports Server (NTRS)
Bach, R. E., Jr.
1980-01-01
The problem of determining aircraft motions along a trajectory is solved using a variational algorithm that generates unmeasured states and forcing functions, and estimates instrument bias and scale-factor errors. The problem is formulated as a nonlinear fixed-interval smoothing problem, and is solved as a sequence of linear two-point boundary value problems, using a sweep method. The algorithm has been implemented for use in flight-test and accident analysis. Aircraft motions are assumed to be governed by a six-degree-of-freedom kinematic model; forcing functions consist of body accelerations and winds, and the measurement model includes aerodynamic and radar data. Examples of the determination of aircraft motions from typical flight-test and accident data are presented.
Finite Element Modeling of the World Federation's Second MFL Benchmark Problem
NASA Astrophysics Data System (ADS)
Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita
2004-02-01
This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.
Interfacial Micromechanics in Fibrous Composites: Design, Evaluation, and Models
Lei, Zhenkun; Li, Xuan; Qin, Fuyong; Qiu, Wei
2014-01-01
Recent advances in interfacial micromechanics of fiber-reinforced composites studied using micro-Raman spectroscopy are reviewed. The mechanical problems faced in interface design for fibrous composites are elaborated from three optimization perspectives: material, interface, and computation. Reasons are given why existing interfacial evaluation methods find it difficult to guarantee integrity, repeatability, and consistency. Micro-Raman studies of fiber interface failure behavior and the main interface mechanical problems in fibrous composites are summarized, including interfacial stress transfer, strength criteria for interface debonding and failure, fiber bridging, frictional slip, slip transition, and friction reloading. Theoretical models of the above interface mechanical problems are given. PMID:24977189
A Chaotic Ordered Hierarchies Consistency Analysis Performance Evaluation Model
NASA Astrophysics Data System (ADS)
Yeh, Wei-Chang
2013-02-01
The Hierarchies Consistency Analysis (HCA) was proposed by Guh, together with a case study on a resort, to address a weakness of the Analytic Hierarchy Process (AHP). Although the results obtained helped the decision maker reach more reasonable and rational verdicts, the HCA itself is flawed. In this paper, our objective is to point out the problems of HCA and then propose a revised method, called chaotic ordered HCA (COH for short), which avoids these problems. Since COH is based on Guh's method, the decision maker establishes decisions in a way similar to that of the original method.
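For context, the consistency check at the heart of AHP-style methods can be sketched as follows (the generic Saaty consistency ratio, not Guh's HCA or the proposed COH); the pairwise judgment matrix is hypothetical.

```python
# Consistency ratio of an AHP pairwise comparison matrix.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])              # hypothetical pairwise judgments

n = A.shape[0]
lam_max = np.linalg.eigvals(A).real.max()      # principal eigenvalue
CI = (lam_max - n) / (n - 1)                   # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's tabulated random index
CR = CI / RI                                   # consistency ratio
print(f"lambda_max={lam_max:.3f}, CI={CI:.3f}, CR={CR:.3f} (CR<0.1 acceptable)")
```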
ERIC Educational Resources Information Center
Chaiyadejkamjorn, Natsuchawirang; Soonthonrojana, Wimonrat; Sangkhaphanthanon, Thanya
2017-01-01
The research aimed to construct an instructional model for creative writing for Mattayomsueksa Three students (Grade 9), to develop the model according to a criterion of 80/80, and to examine the results of the model in use. The research methodology consisted of three phases: phase one studied the current states, problems and needs for teaching…
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
Communications network design and costing model technical manual
NASA Technical Reports Server (NTRS)
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
This computer model provides the capability for analyzing long-haul trunking networks comprising a set of user-defined cities, traffic conditions, and tariff rates. Networks may consist of all terrestrial connectivity, all satellite connectivity, or a combination of terrestrial and satellite connectivity. Network solutions provide the least-cost routes between all cities, the least-cost network routing configuration, and terrestrial and satellite service cost totals. The CNDC model allows analyses involving three specific FCC-approved tariffs, which are uniquely structured and representative of most existing service connectivity and pricing philosophies. User-defined tariffs that can be variations of these three tariffs are accepted as input to the model and allow considerable flexibility in network problem specification. The resulting model extends the domain of network analysis from traditional fixed link cost (distance-sensitive) problems to more complex problems involving combinations of distance and traffic-sensitive tariffs.
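A toy sketch of the least-cost routing component described above (not the CNDC model itself): cities as graph nodes, tariffed link costs as edge weights, and least-cost routes between all city pairs obtained with shortest-path computations. The cities and costs are invented for illustration.

```python
# Least-cost routes between cities via Dijkstra shortest paths.
import networkx as nx

G = nx.Graph()
# Hypothetical links: (city_a, city_b, cost per unit of traffic).
G.add_weighted_edges_from([
    ("NYC", "CHI", 10.0), ("CHI", "DEN", 9.0), ("DEN", "LA", 8.0),
    ("NYC", "ATL", 7.0),  ("ATL", "DAL", 6.0), ("DAL", "LA", 9.5),
    ("CHI", "DAL", 8.5),
])

cost = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
route = nx.dijkstra_path(G, "NYC", "LA", weight="weight")
print("least-cost NYC->LA route:", route, "cost:", cost["NYC"]["LA"])
```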
A Solution to ``Too Big to Fail''
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-10-01
It's a tricky business to reconcile simulations of our galaxy's formation with our current observations of the Milky Way and its satellites. In a recent study, scientists have addressed one discrepancy between simulations and observations: the so-called "too big to fail" problem.
From Missing Satellites to Too Big to Fail
The favored model of the universe is the lambda cold dark matter (ΛCDM) cosmological model. This model does a great job of correctly predicting the large-scale structure of the universe, but there are still a few problems with it on smaller scales.
[Figure: Hubble image of UGC 5497, a dwarf galaxy associated with Messier 81. In the missing satellite problem, simulations of galaxy formation predict that there should be more such satellite galaxies than we observe. ESA/NASA]
The first is the missing satellites problem: ΛCDM cosmology predicts that galaxies like the Milky Way should have significantly more satellite galaxies than we observe. A proposed solution to this problem is the argument that there may exist many more satellites than we've observed, but these dwarf galaxies have had their stars stripped from them during tidal interactions, which prevents us from being able to see them.
This solution creates a new problem, though: the "too big to fail" problem. This problem states that many of the satellites predicted by ΛCDM cosmology are simply so massive that there's no way they couldn't have visible stars. Another way of looking at it: the observed satellites of the Milky Way are not massive enough to be consistent with predictions from ΛCDM.
[Figure: Artist's illustration of a supernova, a type of stellar feedback that can modify the dark-matter distribution of a satellite galaxy. NASA/CXC/M. Weiss]
Density Profiles and Tidal Stirring
Led by Mihai Tomozeiu (University of Zurich), a team of scientists has published a study in which they propose a solution to the "too big to fail" problem. By running detailed cosmological zoom simulations of our galaxy's formation, Tomozeiu and collaborators modeled the dark matter and the stellar content of the galaxy, tracking the formation and evolution of dark-matter subhalos.
Based on the results of their simulations, the team argues that the "too big to fail" problem can be resolved by combining two effects:
1. Stellar feedback in a satellite galaxy can modify its dark-matter distribution, lowering the dark-matter density in the galaxy's center and creating a shallower density profile. Satellites with such shallow density profiles evolve differently than those typically modeled, which have a high concentration of dark matter in their centers.
2. After these satellites fall into the Milky Way's potential, tidal effects such as shocks and stripping modify the mass distribution of both the dark matter and the baryons even further.
[Figure: Each curve represents a simulated satellite's circular velocity (which corresponds to its total mass) at z=0. Left: results using typical dark-matter density profiles. Right: results using the shallower profiles expected when stellar feedback is included. Results from the shallower profiles are consistent with observed Milky Way satellites (black crosses). Adapted from Tomozeiu et al. 2016]
A Match to Observations
Tomozeiu and collaborators found that when they used traditional density profiles to model the satellites, the satellites at z=0 in the simulation were much larger than those we observe around the Milky Way, consistent with the "too big to fail" problem. When the team used shallower density profiles and took into account tidal effects, however, the simulations produced a distribution of satellites at z=0 that is consistent with what we observe.
This study provides a tidy potential solution to the "too big to fail" problem, further strengthening the support for ΛCDM cosmology.
Citation: Mihai Tomozeiu et al 2016 ApJ 827 L15. doi:10.3847/2041-8205/827/1/L15
Retrofitting and the mu Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Daniel; Weigand, Timo; /SLAC /Stanford U., Phys. Dept.
2010-08-26
One of the challenges of supersymmetry (SUSY) breaking and mediation is generating a μ term consistent with the requirements of electroweak symmetry breaking. The most common approach to the problem is to generate the μ term through a SUSY-breaking F-term. Often these models produce unacceptably large Bμ terms as a result. We present an alternate approach, where the μ term is generated directly by non-perturbative effects. The same non-perturbative effect will also retrofit the model of SUSY breaking in such a way that μ is at the same scale as the masses of the Standard Model superpartners. Because the μ term is not directly generated by SUSY-breaking effects, there is no associated Bμ problem. These results are demonstrated in a toy model where a stringy instanton generates μ.
Study of stability of the difference scheme for the model problem of the gaslift process
NASA Astrophysics Data System (ADS)
Temirbekov, Nurlan; Turarov, Amankeldy
2017-09-01
The paper studies a model of the gaslift process in which the motion in a gas-lift well is described by partial differential equations. The system describing the studied process consists of equations of motion, continuity, the thermodynamic state, and hydraulic resistance. A two-layer finite-difference Lax-Wendroff scheme is constructed for the numerical solution of the problem. The stability of the difference scheme for the model problem is investigated using the method of a priori estimates, the order of approximation is investigated, the algorithm for the numerical implementation of the gaslift process model is given, and graphs are presented. The development and investigation of difference schemes for the numerical solution of systems of gas dynamics equations makes it possible to obtain solutions that are simultaneously accurate and monotonic.
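For reference, a one-step Lax-Wendroff update for the simplest model equation, linear advection u_t + a u_x = 0, is sketched below. This is a generic illustration of the scheme, not the paper's gas-lift system, and all parameters are illustrative.

```python
# Lax-Wendroff for u_t + a u_x = 0 on a periodic domain.
import numpy as np

a, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.4 * dx / abs(a)                         # CFL number 0.4 < 1 for stability
nu = a * dt / dx
x = np.linspace(0, L, N, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)              # smooth initial pulse

for _ in range(int(0.5 / dt)):                 # advect until t = 0.5
    up = np.roll(u, -1)                        # u_{j+1}
    um = np.roll(u, 1)                         # u_{j-1}
    # u^{n+1} = u - (nu/2)(u_{j+1} - u_{j-1}) + (nu^2/2)(u_{j+1} - 2u + u_{j-1})
    u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

print("peak position ~", x[np.argmax(u)])      # pulse has moved to about x = 0.8
```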
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta; Hristov, Jordan; Słota, Damian; Zielonka, Adam
2017-05-01
The paper presents a procedure for solving the inverse problem of binary alloy solidification in a two-dimensional domain. This is a continuation of previous works of the authors investigating a similar problem in a one-dimensional domain. The goal of the problem is to identify the heat transfer coefficient on the boundary of the region and to reconstruct the temperature distribution inside the region when temperature measurements at selected points of the alloy are known. The mathematical model of the problem is based on the heat conduction equation with a substitute thermal capacity and with liquidus and solidus temperatures varying in dependence on the concentration of the alloy component. The Scheil model is used to describe this concentration. The investigated procedure also involves a parallelized Ant Colony Optimization algorithm applied to minimize a functional expressing the error of the approximate solution.
Constrained Versions of DEDICOM for Use in Unsupervised Part-Of-Speech Tagging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel; Chew, Peter A.
This report describes extensions of DEDICOM (DEcomposition into DIrectional COMponents) data models [3] that incorporate bound and linear constraints. The main purpose of these extensions is to investigate the use of improved data models for unsupervised part-of-speech tagging, as described by Chew et al. [2]. In that work, a single-domain, two-way DEDICOM model was computed on a matrix of bigram frequencies of tokens in a corpus and used to identify parts of speech as an unsupervised approach to that problem. An open problem identified in that work was the computation of a DEDICOM model that more closely resembled the matrices used in a Hidden Markov Model (HMM), specifically through post-processing of the DEDICOM factor matrices. The work reported here consists of the description of several models that aim to provide a direct solution to that problem and a way to fit those models. The approach taken here is to incorporate the model requirements as bound and linear constraints into the DEDICOM model directly and solve the data fitting problem as a constrained optimization problem. This is in contrast to the typical approaches in the literature, where the DEDICOM model is fit using unconstrained optimization approaches, and model requirements are satisfied as a post-processing step.
The Effect of Time on Difficulty of Learning (The Case of Problem Solving with Natural Numbers)
ERIC Educational Resources Information Center
Kaya, Deniz; Kesan, Cenk
2017-01-01
The main purpose of this study is to determine the time-dependent learning difficulty of "solving problems that require making four operations with natural numbers" of the sixth grade students. The study, adopting the scanning model, consisted of a total of 140 students, including 69 female and 71 male students at the sixth grade. Data…
A multi-product green supply chain under government supervision with price and demand uncertainty
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Zamani, Soma
2018-05-01
In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on a green supply chain. The problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels, and accounts for uncertainties in market demand and in the sale prices of raw materials and products. The model is transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions. A genetic algorithm is applied as the solution methodology for the nonlinear programming model. Finally, to investigate the validity of the proposed method, the computational results obtained through the genetic algorithm are compared with the global optimal solutions attained by an enumerative method. Analytical results indicate that the proposed GA offers better solutions in large-size problems. We also conclude that financial intervention by the government, consisting of green taxation and subsidization, is an effective method to stabilize the performance of green supply chain members.
Equilibria of perceptrons for simple contingency problems.
Dawson, Michael R W; Dupuis, Brian
2012-08-01
The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
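A minimal delta-rule sketch of the kind of contingency problem discussed above (a single linear unit with a context input, not the paper's formal equilibrium analysis): with the context always present, the cue weight converges to the contingency ΔP = P(O|C) - P(O|~C). The probabilities and learning rate are illustrative.

```python
# Delta-rule (Rescorla-Wagner-style) learning of a simple contingency.
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.05                                   # learning rate
V = np.zeros(2)                                # weights: [cue, context]

# Hypothetical contingency: P(outcome|cue) = 0.8, P(outcome|no cue) = 0.2.
for _ in range(20000):
    cue = rng.random() < 0.5
    x = np.array([1.0 if cue else 0.0, 1.0])   # context unit always present
    outcome = rng.random() < (0.8 if cue else 0.2)
    V += alpha * (float(outcome) - V @ x) * x  # delta-rule update

print("equilibrium weights [cue, context]:", V.round(3))
# Cue weight approaches 0.6 = delta-P; context weight approaches 0.2.
```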
Agatha: Disentangling period signals from correlated noise in a periodogram framework
NASA Astrophysics Data System (ADS)
Feng, F.; Tuomi, M.; Jones, H. R. A.
2018-04-01
Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.
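As a generic illustration of periodogram-based signal detection (standard Lomb-Scargle via SciPy, not the Agatha framework or its noise-model selection), the sketch below recovers a known period from irregularly sampled, noisy data; the sampling, period, and noise level are invented.

```python
# Lomb-Scargle periodogram of an irregularly sampled sinusoid.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 100, 300))          # irregular sampling times
true_period = 7.3
y = np.sin(2 * np.pi * t / true_period) + 0.5 * rng.standard_normal(t.size)
y -= y.mean()                                  # lombscargle expects zero-mean data

periods = np.linspace(2, 20, 5000)
omega = 2 * np.pi / periods                    # angular frequencies
power = lombscargle(t, y, omega)
print("recovered period:", periods[np.argmax(power)])  # close to 7.3
```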
Tobacco exposure and maternal psychopathology: Impact on toddler problem behavior.
Godleski, Stephanie A; Eiden, Rina D; Schuetze, Pamela; Colder, Craig R; Huestis, Marilyn A
Prenatal exposure to tobacco has consistently predicted later problem behavior for children. However, little is known about the developmental mechanisms underlying this association. We examined a conceptual model for the association between prenatal tobacco exposure and child problem behavior in toddlerhood via indirect paths through fetal growth, maternal depression, and maternal aggressive disposition in early infancy, and via maternal warmth and sensitivity and infant negative affect in later infancy. The sample consisted of 258 mother-child dyads recruited during pregnancy and assessed periodically at 2, 9, and 16 months of child age. Pathways via maternal depression and infant negative affect to toddler problem behavior were significant. Further, combined tobacco and marijuana exposure during pregnancy and reduced fetal growth also demonstrated important associations with infant negative affect and subsequent problem behavior. These results highlight the importance of considering the role of maternal negative affect and poor fetal growth as risk factors in the context of prenatal exposure.
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Interval-valued distributed preference relation and its application to group decision making
Liu, Yin; Xue, Min; Chang, Wenjun; Yang, Shanlin
2018-01-01
As an important way to help express the preference relation between alternatives, a distributed preference relation (DPR) can represent the preferred, non-preferred, indifferent, and uncertain degrees of one alternative over another simultaneously. A DPR, however, is unavailable in some situations where a decision maker cannot provide the precise degrees of one alternative over another due to lack of knowledge, experience, and data. In this paper, to address this issue, we propose the interval-valued DPR (IDPR) and present its properties of validity and normalization. Through constructing two optimization models, an IDPR matrix is transformed into a score matrix to facilitate the comparison between any two alternatives. The properties of the score matrix are analyzed. To guarantee the rationality of the comparisons between alternatives derived from the score matrix, the additive consistency of the score matrix is developed. On this basis, IDPR is applied to model and solve multiple criteria group decision making (MCGDM) problems. In particular, the relationship between the parameters for the consistency of the score matrix associated with each decision maker and those for the consistency of the score matrix associated with the group of decision makers is analyzed. A manager selection problem is investigated to demonstrate the application of IDPRs to MCGDM problems. PMID:29889871
Stevanović, Dejan; Lakić, Aneta; Damnjanović, Maja
2011-08-01
The aim of this study was to evaluate the general measurement properties of the Serbian version of the Pediatric Quality of Life Inventory™ Version 4.0 Generic Core Scales (PedsQL™) self-report versions for children and adolescents (8-18 years). The PedsQL™ was completed by 238 children and adolescents. The version was first analyzed descriptively. Afterward, internal consistency and construct and convergent validity were analyzed using classical test theory psychometric procedures. The PedsQL™ scale score means ranged from 70.65 to 88.34, and the total score was 80.74. Scale internal consistency reliability determined by Cronbach's coefficient was above 0.7 for all scales except School Functioning (0.65) and Emotional Functioning (0.69). The statistics assessing the adequacy of the model in confirmatory factor analysis revealed poor model fit for the current structure of the PedsQL™. Finally, the PedsQL™ total and psychosocial health scores showed convincing negative correlations with emotional and conduct problems, hyperactivity/inattention, and peer relationship problems. The Serbian PedsQL™ scales have appropriate internal consistency reliability, sufficient for group evaluations, and good convergent validity against psychological constructs. However, there are problems regarding its current construct validity (factorial validity).
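For reference, the internal-consistency statistic reported above can be computed with a short sketch (the generic Cronbach's alpha formula; the item data below are simulated, not the PedsQL™ data):

```python
# Cronbach's alpha for a set of scale items.
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of item scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(238, 1))                # common factor, 238 respondents
items = latent + 0.8 * rng.normal(size=(238, 5))  # 5 correlated items
print("alpha:", round(cronbach_alpha(items), 2))  # about 0.89 for these loadings
```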
NASA Astrophysics Data System (ADS)
Parker, Robert L.; Booker, John R.
1996-12-01
The properties of the log of the admittance in the complex frequency plane lead to an integral representation for one-dimensional magnetotelluric (MT) apparent resistivity and impedance phase similar to that found previously for complex admittance. The inverse problem of finding a one-dimensional model for MT data can then be solved using the same techniques as for complex admittance, with similar results. For instance, the one-dimensional conductivity model that minimizes the χ2 misfit statistic for noisy apparent resistivity and phase is a series of delta functions. One of the most important applications of the delta function solution to the inverse problem for complex admittance has been answering the question of whether or not a given set of measurements is consistent with the modeling assumption of one-dimensionality. The new solution allows this test to be performed directly on standard MT data. Recently, it has been shown that induction data must pass the same one-dimensional consistency test if they correspond to the polarization in which the electric field is perpendicular to the strike of two-dimensional structure. This greatly magnifies the utility of the consistency test. The new solution also allows one to compute the upper and lower bounds permitted on phase or apparent resistivity at any frequency given a collection of MT data. Applications include testing the mutual consistency of apparent resistivity and phase data and placing bounds on missing phase or resistivity data. Examples presented demonstrate detection and correction of equipment and processing problems and verification of compatibility with two-dimensional B-polarization for MT data after impedance tensor decomposition and for continuous electromagnetic profiling data.
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1987-01-01
Specific topics briefly addressed include: the consistent comparison problem in N-version systems; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.
Hong-Seng, Gan; Sayuti, Khairil Amir; Karim, Ahmad Helmy Abdul
2017-01-01
Existing knee cartilage segmentation methods have several reported technical drawbacks. In essence, graph cuts remains highly susceptible to image noise despite extended research interest; the active shape model is often constrained by the selection of training data; and shortest-path methods have demonstrated a shortcut problem in the presence of weak boundaries, which are a common problem in medical images. The aim of this study is to investigate the capability of random walks as a knee cartilage segmentation method. Experts scribble on the knee cartilage image to initialize random walks segmentation. The reproducibility of the method is then assessed against manual segmentation using the Dice Similarity Index. The evaluation covers normal cartilage and diseased cartilage sections, divided into whole and single cartilage categories. A total of 15 normal images and 10 osteoarthritic images were included. The results showed that the random walks method demonstrated high reproducibility in both normal cartilage (observer 1: 0.83±0.028 and observer 2: 0.82±0.026) and osteoarthritic cartilage (observer 1: 0.80±0.069 and observer 2: 0.83±0.029). Besides, results from both experts were found to be consistent with each other, suggesting that the inter-observer variation is insignificant (normal: P=0.21; diseased: P=0.15). The proposed segmentation model overcomes technical problems reported for existing semi-automated techniques and demonstrated highly reproducible and consistent results against the manual segmentation method.
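The overlap metric used for the reproducibility assessment can be sketched directly (the generic Dice Similarity Index; the masks below are hypothetical):

```python
# Dice Similarity Index between a candidate segmentation and a reference mask.
import numpy as np

def dice(seg, ref):
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

# Hypothetical 2D masks: reference square vs. slightly shifted prediction.
ref = np.zeros((64, 64), bool); ref[20:40, 20:40] = True
seg = np.zeros((64, 64), bool); seg[22:42, 20:40] = True
print("Dice:", round(dice(seg, ref), 3))       # 0.9 for this 2-pixel shift
```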
Assimilation approach to measuring organizational change from pre- to post-intervention
Moore, Scott C; Osatuke, Katerine; Howe, Steven R
2014-01-01
AIM: To present a conceptual and measurement strategy that allows objective, sensitive evaluation of intervention progress based on data about participants' perceptions of presenting problems. METHODS: We used as an example an organization development intervention at a United States Veterans Affairs medical center. Within a year, the intervention addressed the hospital's initially serious problems, and multiple stakeholders (employees, management, union representatives) reported satisfaction with the progress made. Traditional quantitative outcome measures, however, failed to capture the strong positive impact consistently reported by several types of stakeholders in qualitative interviews. To address this paradox, full interview data describing the medical center pre- and post-intervention were examined by applying a validated theoretical framework from another discipline: psychotherapy research. The Assimilation model is a clinical-developmental theory that describes empirically grounded change levels in problematic experiences, e.g., problems reported by participants. The model, its measure (the Assimilation of Problematic Experiences Scale, APES), and the rating procedure have been previously applied across various populations and problem types, mainly in clinical but also in non-clinical settings. We applied the APES to the transcribed qualitative data of intervention participants' interviews, using a method closely replicating prior assimilation research (the process whereby trained clinicians familiar with the Assimilation model work with full, transcribed interview data to assign the APES ratings). The APES ratings summarized levels of progress, defined as participants' assimilation level of problematic experiences, and were compared from pre- to post-intervention. RESULTS: The results were consistent with participants' own reported perceptions of the intervention impact. The increase in APES levels from pre- to post-intervention indicated improvement that was missed by the previous quantitative measures (the Maslach Burnout Inventory and the Work Environment Scale). The progress specifically consisted of participants' moving from the APES stages where the problematic experience was avoided to the APES stages where awareness of and attention to the problems were steadily sustained, although the problems were not yet fully processed or resolved. These results explain why the conventional outcome measures failed to reflect the intervention progress: they narrowly defined progress as resolution of the presenting problems and alleviation of symptomatic distress. In the Assimilation model, this definition only applies to a sub-segment of the change continuum, specifically the latest APES stages. The model defines progress as change in the psychological processes used in response to the problem, i.e., a growing ability to deal with problematic issues non-defensively, manifested differently depending on APES stage. At early stages, progress is an increased ability to face the problem rather than turning away. At later APES stages, progress involves naming, understanding, and successfully addressing the problem. The assimilation approach provides a broader developmental context compared to exclusively symptom-, problem-, or behavior-focused approaches that typically inform outcome measurement in interpersonally based interventions.
In our data, this made the difference between reflecting (APES) vs missing (Maslach Burnout Inventory, Work Environment Scale) the pre-post change that was strongly perceived by the intervention recipients. CONCLUSION: The results illustrated a working solution to the challenge of objectively evaluating progress in subjectively experienced problems. This approach informs measuring change in psychologically based interventions. PMID:24660141
Ladd, Gary W
2006-01-01
Findings yielded a comprehensive portrait of the predictive relations among children's aggressive or withdrawn behaviors, peer rejection, and psychological maladjustment across the 5-12 age period. Examination of peer rejection in different variable contexts and across repeated intervals throughout childhood revealed differences in the timing, strength, and consistency of this risk factor as a distinct (additive) predictor of externalizing versus internalizing problems. In conjunction with aggressive behavior, peer rejection proved to be a stronger additive predictor of externalizing problems during early rather than later childhood. Relative to withdrawn behavior, rejection's efficacy as a distinct predictor of internalizing problems was significant early in childhood and increased progressively thereafter. These additive path models fit the data better than did disorder-driven or transactional models.
Inverse problem of HIV cell dynamics using Genetic Algorithms
NASA Astrophysics Data System (ADS)
González, J. A.; Guzmán, F. S.
2017-01-01
In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of the concentrations of healthy, latently infected, and infected T-cells and of free virus. Different parameters in the equations give different dynamics. Assuming the concentrations of these types of cells are known for a particular patient, the inverse problem consists in estimating the parameters of the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, such as mutation rate and population size, although a detailed analysis of this dependence will be described elsewhere.
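As a rough illustration of the approach described above, the following sketch fits two parameters of a simplified T-cell/virus ODE system with a minimal genetic algorithm, minimizing squared error against synthetic "patient" data. The reduced model omits the latently infected compartment, and all rates, parameter names, and GA settings are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: estimating two parameters of a simplified T-cell/HIV ODE system with
# a minimal genetic algorithm. Model form, rates, and GA settings are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(0)

def model(y, t, beta, delta):
    T, I, V = y                          # healthy T-cells, infected cells, free virus
    dT = 10.0 - 0.1 * T - beta * T * V
    dI = beta * T * V - delta * I
    dV = 100.0 * I - 5.0 * V
    return [dT, dI, dV]

t = np.linspace(0.0, 10.0, 50)
y0 = [1000.0, 0.0, 1e-3]
true = (2e-4, 0.5)                       # "patient" parameters used to fake the data
data = odeint(model, y0, t, args=true) * (1 + 0.02 * rng.standard_normal((50, 3)))

def fitness(p):                          # negative squared error vs. patient data
    sol = odeint(model, y0, t, args=tuple(p))
    return -np.sum((sol - data) ** 2)

pop = rng.uniform([1e-5, 0.05], [1e-3, 2.0], size=(40, 2))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]              # keep the 10 fittest
    parents = elite[rng.integers(0, 10, size=(40, 2))] # pick random parent pairs
    pop = parents.mean(axis=1)                         # blend crossover
    pop *= 1 + 0.05 * rng.standard_normal(pop.shape)   # multiplicative mutation
best = max(pop, key=fitness)
print("estimated (beta, delta):", best, "true:", true)
```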
NASA Astrophysics Data System (ADS)
Bartels, A.; Bartel, T.; Canadija, M.; Mosler, J.
2015-09-01
This paper deals with the thermomechanical coupling in dissipative materials. The focus lies on finite strain plasticity theory and the temperature increase resulting from plastic deformation. For this type of problem, two fundamentally different modeling approaches can be found in the literature: (a) models based on thermodynamical considerations and (b) models based on the so-called Taylor-Quinney factor. While a naive, straightforward implementation of thermodynamically consistent approaches usually leads to an over-prediction of the temperature increase due to plastic deformation, models relying on the Taylor-Quinney factor often violate fundamental physical principles such as the first and the second law of thermodynamics. In this paper, a thermodynamically consistent framework is elaborated which indeed allows a realistic prediction of the temperature evolution. In contrast to previously proposed frameworks, it is based on a fully three-dimensional, finite strain setting, and it naturally covers coupled isotropic and kinematic hardening - also based on non-associative evolution equations. Considering a variationally consistent description based on incremental energy minimization, it is shown that the aforementioned problem (thermodynamic consistency and a realistic temperature prediction) is essentially equivalent to correctly defining the decomposition of the total energy into stored and dissipative parts. Interestingly, this decomposition shows strong analogies to the Taylor-Quinney factor. In this respect, the Taylor-Quinney factor can be well motivated from a physical point of view. Furthermore, certain intervals for this factor can be derived in order to guarantee that fundamental physical principles are fulfilled a priori. Representative examples demonstrate the predictive capabilities of the final constitutive modeling framework.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kogalovskii, M.R.
This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases that consist of statistical data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMSs) satisfy SDB requirements. Some current research directions in SDB systems are considered.
Beyond the Standard Model: The pragmatic approach to the gauge hierarchy problem
NASA Astrophysics Data System (ADS)
Mahbubani, Rakhi
The current favorite solution to the gauge hierarchy problem, the Minimal Supersymmetric Standard Model (MSSM), is looking increasingly fine-tuned, as recent results from LEP-II have pushed it to regions of its parameter space where a light Higgs seems unnatural. Given this fact, it seems sensible to explore other approaches to this problem; we study three alternatives here. The first is a Little Higgs theory, in which the Higgs particle is realized as the pseudo-Goldstone boson of an approximate global chiral symmetry and so is naturally light. We analyze precision electroweak observables in the Minimal Moose model, one example of such a theory, and look for regions in its parameter space that are consistent with current limits on these. It is also possible to find a solution within a supersymmetric framework by adding to the MSSM superpotential a λS H_u H_d term and UV completing with new strong dynamics under which S is a composite before λ becomes non-perturbative. This allows us to increase the MSSM tree-level Higgs mass bound to a value that alleviates the supersymmetric fine-tuning problem with elementary Higgs fields, while maintaining gauge coupling unification in a natural way. Finally, we try an entirely different tack, in which we do not attempt to solve the hierarchy problem, but rather assume that the tuning of the Higgs can be explained in some unnatural way, from environmental considerations for instance. With this philosophy in mind, we study in detail the low-energy phenomenology of the minimal extension of the Standard Model with a dark matter candidate and gauge coupling unification, consisting of additional fermions with the quantum numbers of SUSY higgsinos, and a singlet.
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation, that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
The CHIC Model: A Global Model for Coupled Binary Data
ERIC Educational Resources Information Center
Wilderjans, Tom; Ceulemans, Eva; Van Mechelen, Iven
2008-01-01
Often problems result in the collection of coupled data, which consist of different N-way N-mode data blocks that have one or more modes in common. To reveal the structure underlying such data, an integrated modeling strategy, with a single set of parameters for the common mode(s), that is estimated based on the information in all data blocks, may…
Flattening the inflaton potential beyond minimal gravity
NASA Astrophysics Data System (ADS)
Lee, Hyun Min
2018-01-01
We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.
A Comparison of a Relational and Nested-Relational IDEF0 Data Model
1990-03-01
develop, some of the problems inherent in the hierarchical model were circumvented by the more sophisticated network model. Like the hierarchical model... a network database consists of a collection of records connected via links. Unlike the hierarchical model, the network model allows arbitrary graphs as... opposed to trees. Thus, each node may have several owners and may, in turn, own any number of other records. The network model provides a mechanism by...
A computer model of solar panel-plasma interactions
NASA Technical Reports Server (NTRS)
Cooke, D. L.; Freeman, J. W.
1980-01-01
High power solar arrays for satellite power systems are presently being planned with dimensions of kilometers, and with tens of kilovolts distributed over their surface. Such systems face many plasma interaction problems, such as power leakage to the plasma, particle focusing, and anomalous arcing. These effects cannot be adequately modeled without detailed knowledge of the plasma sheath structure and space charge effects. Laboratory studies of a 1 by 10 meter solar array in a simulated low Earth orbit plasma are discussed. The plasma screening process is discussed, the theory behind the program is outlined, and a series of calibration models is presented. These models are designed to demonstrate that PANEL is capable of accurate self-consistent space charge calculations. Such models include PANEL predictions for the Child-Langmuir diode problem.
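For reference, the Child-Langmuir law used in such calibration cases gives the space-charge-limited current density of a planar diode, J = (4ε0/9)·sqrt(2e/m_e)·V^(3/2)/d². A minimal numerical check is sketched below; the voltage and gap values are arbitrary illustrative inputs, not PANEL's.

```python
# Child-Langmuir space-charge-limited current density for a planar diode:
#   J = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2
# Illustrative inputs; useful as a calibration target for sheath/space-charge codes.
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg

def child_langmuir(V, d):
    """Current density (A/m^2) for gap voltage V (volts) and gap width d (m)."""
    return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m_e) * V ** 1.5 / d ** 2

print(child_langmuir(100.0, 0.01))   # ~23 A/m^2 for 100 V across a 1 cm gap
```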
Solving multi-leader-common-follower games.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leyffer, S.; Munson, T.; Mathematics and Computer Science
Multi-leader-common-follower games arise when modelling two or more competitive firms, the leaders, that commit to their decisions prior to another group of competitive firms, the followers, that react to the decisions made by the leaders. These problems lead in a natural way to equilibrium problems with equilibrium constraints (EPECs). We develop a characterization of the solution sets for these problems and examine a variety of nonlinear optimization and nonlinear complementarity formulations of EPECs. We distinguish two broad cases: problems where the leaders can cost-differentiate and problems with price-consistent followers. We demonstrate the practical viability of our approach by solving a range of medium-sized test problems.
Bogg, Tim; Finn, Peter R.
2011-01-01
Two samples with heterogeneous prevalence of externalizing psychopathology were used to investigate the structure of self-regulatory models of behavioral disinhibition and cognitive capacity. Consistent with expectations, structural equation modeling in the first sample (N = 541) showed that a hierarchical model, with three lower-order factors of impulsive sensation-seeking, anti-sociality/unconventionality, and lifetime externalizing problem counts and a behavioral disinhibition superfactor, best accounted for the pattern of covariation among six disinhibited personality trait indicators and four externalizing problem indicators. The structure was replicated in a second sample (N = 463), which showed that the behavioral disinhibition superfactor, and not the lower-order impulsive sensation-seeking, anti-sociality/unconventionality, and externalizing problem factors, was associated with lower IQ, reduced short-term memory capacity, and reduced working memory capacity. The results provide a systemic and meaningful integration of major self-regulatory influences during a developmentally important stage of life. PMID:20433626
Neilson, Peter D; Neilson, Megan D
2005-09-01
Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
A solution to the static frame validation challenge problem using Bayesian model selection
Grigoriu, M. D.; Field, R. V.
2007-12-23
Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance on the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties, and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.
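As a toy illustration of selecting among competing probabilistic models for a material property, the sketch below scores a normal and a lognormal model on the same data using BIC, a common large-sample proxy for Bayesian model comparison. The data and candidate models are invented for illustration; the paper's actual Bayesian formulation may differ.

```python
# Sketch: choosing between candidate probabilistic models for a material
# property using BIC as a proxy for Bayesian model selection. Data are fake.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
E = rng.lognormal(mean=np.log(200.0), sigma=0.1, size=30)   # fake stiffness data

def bic(logpdf, data, k):
    # BIC = k*ln(n) - 2*log-likelihood; lower is better.
    return k * np.log(len(data)) - 2.0 * np.sum(logpdf(data))

models = {
    "normal": bic(stats.norm(E.mean(), E.std(ddof=1)).logpdf, E, 2),
    "lognormal": bic(stats.lognorm(np.log(E).std(ddof=1),
                                   scale=np.exp(np.log(E).mean())).logpdf, E, 2),
}
print(min(models, key=models.get), models)   # the lower-BIC model wins
```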
An interactive graphics system to facilitate finite element structural analysis
NASA Technical Reports Server (NTRS)
Burk, R. C.; Held, F. H.
1973-01-01
The characteristics of an interactive graphics system to facilitate the finite element method of structural analysis are described. The finite element model analysis consists of three phases: (1) preprocessing (model generation), (2) problem solution, and (3) postprocessing (interpretation of results). The advantages of interactive graphics for finite element structural analysis are defined.
How the Accountability Model and Teacher-Student Relationships Impact Drop Out
ERIC Educational Resources Information Center
Harris, Kristi
2017-01-01
Limited research results have been provided on the influence of schools on dropout prevention, encompassing the accountability system and how teachers' attitudes affect students' decisions to drop out. The specific problem of interest was how the accountability model and teacher-student relationships play a vital role in student dropout. The…
Evaluating the Pedagogical Potential of Hybrid Models
ERIC Educational Resources Information Center
Levin, Tzur; Levin, Ilya
2013-01-01
The paper examines how the use of hybrid models--which consist of interacting continuous and discrete processes--may assist in teaching systems thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
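The central mechanic, reusing the posterior from one unit problem as the prior for the next tier, can be illustrated with a minimal grid-based Bayes update. The parameter, likelihoods, and observations below are placeholders, not the MFIX calibration itself.

```python
# Sketch: chained Bayesian calibration on a 1-D parameter grid. The posterior
# from unit problem 1 becomes the prior for unit problem 2. All data are fake.
import numpy as np
from scipy import stats

theta = np.linspace(0.0, 2.0, 401)          # grid over a kinetic parameter
prior = np.full_like(theta, 1.0 / 2.0)      # flat prior density on [0, 2]

def update(prior, observed, predict, noise_sd):
    like = stats.norm(predict(theta), noise_sd).pdf(observed)
    post = prior * like
    return post / np.trapz(post, theta)     # renormalize on the grid

post1 = update(prior, observed=1.3, predict=lambda t: t, noise_sd=0.2)        # tier 1
post2 = update(post1, observed=1.8, predict=lambda t: t + 0.5, noise_sd=0.1)  # tier 2
print("posterior mean after both tiers:", np.trapz(theta * post2, theta))
```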
Geometric Hitting Set for Segments of Few Orientations
Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S. B.; ...
2016-01-13
Here we study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the "hitting points"). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. Lastly, we give hardness and hardness-of-approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
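To make the problem concrete, the hedged sketch below finds a minimum hitting set for a tiny axis-parallel instance by brute force; the paper's polynomial-time and approximation algorithms are of course far more sophisticated.

```python
# Sketch: brute-force minimum hitting set for a tiny instance. Each segment is
# the set of integer grid points it covers; we seek the fewest points touching
# every segment. Exponential search -- for illustration only.
from itertools import combinations

segments = [
    {(0, 0), (1, 0), (2, 0)},      # horizontal segment
    {(2, 0), (2, 1), (2, 2)},      # vertical segment
    {(0, 2), (1, 2), (2, 2)},      # horizontal segment
]
universe = set().union(*segments)

def min_hitting_set(segs, pts):
    for k in range(1, len(pts) + 1):
        for cand in combinations(pts, k):
            if all(s & set(cand) for s in segs):
                return set(cand)

print(min_hitting_set(segments, sorted(universe)))  # a 2-point solution, e.g. {(0,0), (2,2)}
```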
NASA Astrophysics Data System (ADS)
Divakov, D.; Sevastianov, L.; Nikolaev, N.
2017-01-01
The paper deals with a numerical solution of the problem of waveguide propagation of polarized light in a smoothly irregular transition between closed regular waveguides using the incomplete Galerkin method. The method consists in reducing the Helmholtz equation to a system of ordinary differential equations by a change of variables following the Kantorovich method, and in formulating the boundary conditions for the resulting system. The boundary problem for the ODE system is formulated in the computer algebra system Maple, and it is solved using Maple's libraries of numerical methods.
Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed
2010-07-01
Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.
Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A
2015-12-08
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences between higher-resolution solutions of approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
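The core remedy, adding a ridge penalty when correlations among predictors are near-singular, can be sketched independently of PLSc itself. The toy example below contrasts ordinary least squares with ridge estimates on nearly collinear simulated scores; variable names and the penalty value are illustrative, not the paper's procedure.

```python
# Sketch: ridge regularization stabilizing coefficients under multicollinearity.
# A stand-in for the ridge step inside regularized PLSc; data are simulated.
import numpy as np

rng = np.random.default_rng(2)
z1 = rng.standard_normal(200)
z2 = z1 + 0.01 * rng.standard_normal(200)      # nearly collinear with z1
X = np.column_stack([z1, z2])
y = z1 + 0.5 * z2 + 0.1 * rng.standard_normal(200)

ols = np.linalg.solve(X.T @ X, X.T @ y)                      # unstable estimates
lam = 0.1
ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)  # stabilized estimates
print("OLS:", ols, "ridge:", ridge)
```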
Dynamics and Self-consistent Chaos in a Mean Field Hamiltonian Model
NASA Astrophysics Data System (ADS)
del-Castillo-Negrete, Diego
We study a mean field Hamiltonian model that describes the collective dynamics of marginally stable fluids and plasmas in the finite-N and N → ∞ kinetic limits (where N is the number of particles). The linear stability of equilibria in the kinetic model is studied, as well as the initial value problem including Landau damping. Numerical simulations show the existence of coherent, rotating dipole states. We approximate the dipole as two macroparticles and show that the N = 2 limit has a family of rotating integrable solutions that provide an accurate description of the dynamics. We discuss the role of self-consistent Hamiltonian chaos in the formation of coherent structures, and discuss a mechanism of "violent" mixing caused by a self-consistent elliptic-hyperbolic bifurcation in phase space.
Cai, Yong; Li, Rui; Zhu, Jingfen; Na, Li; He, Yaping; Redmon, Pam; Qiao, Yun; Ma, Jin
2015-01-01
Smoking among youths is a worldwide problem, particularly in China. Many endogenous and environmental factors influence smokers' intentions to smoke; therefore, a comprehensive model is needed to understand the significance of and relationships among predictors. This study aimed to develop a prediction model based on problem-behavior theory (PBT) to interpret intentions to smoke among Chinese youths. We conducted a cross-sectional study of 26,675 adolescents from junior, senior, and vocational high schools in Shanghai, China. Data on smoking status, smoking knowledge, attitude toward smoking, parents' and peers' smoking, and media exposure to smoking were collected from students. A structural equation model was used to assess the developed prediction model. The experimental smoking rate and current smoking rate among the students were 11.0% and 3%, respectively. Our constructed model showed an acceptable fit to the data (comparative fit index = 0.987, root-mean-square error of approximation = 0.034). Intention to smoke was predicted by the perceived environment system (β = 0.455, P < 0.001), consisting of peer smoking (β = 0.599, P < 0.001), parent smoking (β = 0.152, P < 0.001), and media exposure to smoking (β = 0.226, P < 0.001), and by the behavior system (β = 0.487, P < 0.001), consisting of tobacco experimentation (β = 0.663, P < 0.001) and current smoking (β = 0.755, P < 0.001). Smoking intention was unrelated to the personality system (β = -0.113, P > 0.05), which consisted of acceptance of tobacco use (β = 0.668, P < 0.001) and academic performance (β = 0.171, P < 0.001). The PBT-based model we developed provides a good understanding of the predictors of intentions to smoke, and it suggests that future interventions among youths should focus on components of the perceived environment and behavior systems and take into account the moderating effects of the personality system.
Adaptive Greedy Dictionary Selection for Web Media Summarization.
Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo
2017-01-01
Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis vectors from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the l2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with the state-of-the-art dictionary selection models, our model is not only more effective and efficient, but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model; and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both of these two tasks.
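A minimal skeleton of forward greedy dictionary selection: at each step, add the training column that most reduces the least-squares reconstruction error of the data. This sketch ignores the paper's l2,0 formulation and gradient-based speedups and is only meant to convey the selection loop.

```python
# Sketch: forward greedy selection of k dictionary atoms from the training
# columns themselves, scoring candidates by least-squares reconstruction error.
import numpy as np

def residual(Y, D):
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    return np.linalg.norm(Y - D @ coef)

def greedy_select(Y, k):
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(Y.shape[1]) if j not in chosen]
        best = min(remaining, key=lambda j: residual(Y, Y[:, chosen + [j]]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(3)
Y = rng.standard_normal((20, 12))          # fake training data, 12 candidate atoms
print("selected atom indices:", greedy_select(Y, 3))
```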
NASA Astrophysics Data System (ADS)
Huang, Maosong; Qu, Xie; Lü, Xilin
2017-11-01
By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress-return iterative algorithm for a generalized over-nonlocal strain-softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables the nonlocal regularization of ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
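In generic notation (a paraphrase, not the paper's exact symbols), the problem described takes the min-max form

```latex
\min_{z \in Z} \; \max_{\mu \in \mathcal{M}} \; \int_{\Xi} J(z; \xi) \, d\mu(\xi),
```

where z is the control or design, \mathcal{M} is the set of probability measures matching the desired characteristics of the data, and J is the PDE-constrained objective evaluated at the uncertain input ξ.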
A Longitudinal Empirical Investigation of the Pathways Model of Problem Gambling.
Allami, Youssef; Vitaro, Frank; Brendgen, Mara; Carbonneau, René; Lacourse, Éric; Tremblay, Richard E
2017-12-01
The pathways model of problem gambling suggests the existence of three developmental pathways to problem gambling, each differentiated by a set of predisposing biopsychosocial characteristics: behaviorally conditioned (BC), emotionally vulnerable (EV), and biologically vulnerable (BV) gamblers. This study examined the empirical validity of the Pathways Model among adolescents followed up to early adulthood. A prospective-longitudinal design was used, thus overcoming limitations of past studies that used concurrent or retrospective designs. Two samples were used: (1) a population sample of French-speaking adolescents (N = 1033) living in low socio-economic status (SES) neighborhoods from the Greater Region of Montreal (Quebec, Canada), and (2) a population sample of adolescents (N = 3017), representative of French-speaking students in Quebec. Only participants with at-risk or problem gambling by mid-adolescence or early adulthood were included in the main analysis (n = 180). Latent Profile Analyses were conducted to identify the optimal number of profiles, in accordance with participants' scores on a set of variables prescribed by the Pathways Model and measured during early adolescence: depression, anxiety, impulsivity, hyperactivity, antisocial/aggressive behavior, and drug problems. A four-profile model fit the data best. Three profiles differed from each other in ways consistent with the Pathways Model (i.e., BC, EV, and BV gamblers). A fourth profile emerged, resembling a combination of EV and BV gamblers. Four profiles of at-risk and problem gamblers were identified. Three of these profiles closely resemble those suggested by the Pathways Model.
An evolutionary morphological approach for software development cost estimation.
Araújo, Ricardo de A; Oliveira, Adriano L I; Soares, Sergio; Meira, Silvio
2012-08-01
In this work we present an evolutionary morphological approach to solve the software development cost estimation (SDCE) problem. The proposed approach consists of a hybrid artificial neuron based on the framework of mathematical morphology (MM), with algebraic foundations in complete lattice theory (CLT), referred to as the dilation-erosion perceptron (DEP). We also present an evolutionary learning process, called DEP(MGA), that uses a modified genetic algorithm (MGA) to design the DEP model, because a drawback arises in the classical learning process of the DEP from the gradient estimation of morphological operators, which are not differentiable in the usual way. Furthermore, an experimental analysis is conducted with the proposed model using five complex SDCE problems and three well-known performance metrics, demonstrating good performance of the DEP model in solving SDCE problems. Copyright © 2012 Elsevier Ltd. All rights reserved.
Diagnosing and dealing with multicollinearity.
Schroeder, M A
1990-04-01
The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem; then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and the sources of error associated with alternative, biased least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
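A standard diagnostic in this setting, though not specific to this article, is the variance inflation factor: regress each predictor on the others and compute VIF_j = 1/(1 - R_j^2). A generic sketch:

```python
# Sketch: variance inflation factors for each column of a predictor matrix.
# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j on the rest.
import numpy as np

def vif(X):
    out = []
    for j in range(X.shape[1]):
        y, Z = X[:, j], np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(len(y)), Z])           # add an intercept
        beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        r2 = 1 - np.sum((y - Z1 @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(4)
a = rng.standard_normal(100)
X = np.column_stack([a, a + 0.05 * rng.standard_normal(100),
                     rng.standard_normal(100)])
print(vif(X))   # first two columns show large VIFs (rule of thumb: > 10 is trouble)
```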
Belief Propagation Algorithm for Portfolio Optimization Problems.
Shinzato, Takashi; Yasuda, Muneki
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis in pioneering work by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, an approximate derivation method for finding the optimal portfolio with respect to a given return set had not yet been developed. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.
Hwang, Yen-Ting; Frierson, Dargan M. W.
2013-01-01
The double-Intertropical Convergence Zone (ITCZ) problem, in which excessive precipitation is produced in the Southern Hemisphere tropics, which resembles a Southern Hemisphere counterpart to the strong Northern Hemisphere ITCZ, is perhaps the most significant and most persistent bias of global climate models. In this study, we look to the extratropics for possible causes of the double-ITCZ problem by performing a global energetic analysis with historical simulations from a suite of global climate models and comparing with satellite observations of the Earth’s energy budget. Our results show that models with more energy flux into the Southern Hemisphere atmosphere (at the top of the atmosphere and at the surface) tend to have a stronger double-ITCZ bias, consistent with recent theoretical studies that suggest that the ITCZ is drawn toward heating even outside the tropics. In particular, we find that cloud biases over the Southern Ocean explain most of the model-to-model differences in the amount of excessive precipitation in Southern Hemisphere tropics, and are suggested to be responsible for this aspect of the double-ITCZ problem in most global climate models. PMID:23493552
Natsuaki, Misaki N.; Ge, Xiaojia; Reiss, David; Neiderhiser, Jenae M.
2011-01-01
This study investigated the prospective links between sibling aggression and the development of externalizing problems using a multilevel modeling approach with a genetically sensitive design. The sample consisted of 780 adolescents (390 sibling pairs) who participated in two waves of the Nonshared Environment for Adolescent Development (NEAD) project. Sibling pairs with varying degree of genetic relatedness, including monozygotic twins, dizygotic twins, full siblings, half siblings, and genetically unrelated siblings, were included. The results showed that sibling aggression at Time 1 was significantly associated with the focal child’s externalizing problems at Time 2 after accounting for the intra-class correlations between siblings. Sibling aggression remained significant in predicting subsequent externalizing problems even after controlling for the levels of pre-existing externalizing problems and mothers’ punitive parenting. This pattern of results was fairly robust across models using different informants. The findings provide converging evidence for the unique contribution of sibling aggression in understanding changes in externalizing problems during adolescence. PMID:19586176
A Variational Assimilation Method for Satellite and Conventional Data: a Revised Basic Model 2B
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Scott, Robert W.; Chen, J.
1991-01-01
A variational objective analysis technique that modifies observations of temperature, height, and wind on the cyclone scale to satisfy the five 'primitive' model forecast equations is presented. This analysis method overcomes all of the problems that hindered previous versions, such as over-determination, time consistency, solution method, and constraint decoupling. A preliminary evaluation of the method shows that it converges rapidly, the divergent part of the wind is strongly coupled in the solution, fields of height and temperature are well-preserved, and derivative quantities such as vorticity and divergence are improved. Problem areas are systematic increases in the horizontal velocity components, and large magnitudes of the local tendencies of the horizontal velocity components. The preliminary evaluation makes note of these problems but detailed evaluations required to determine the origin of these problems await future research.
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
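For orientation, the simplest special case of the primal problem, minimizing portfolio risk w'Σw under only the budget constraint, has the textbook closed form w ∝ Σ⁻¹1. The sketch below checks this numerically; it is standard mean-variance material, not the replica computation.

```python
# Sketch: closed-form minimum-risk portfolio under a budget constraint,
#   w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),  normalized so the weights sum to 1.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 4))
Sigma = A.T @ A / 60 + 0.1 * np.eye(4)   # a well-conditioned covariance matrix
ones = np.ones(4)

w = np.linalg.solve(Sigma, ones)         # solve Sigma w = 1 instead of inverting
w /= ones @ w
print("weights:", w, "risk:", w @ Sigma @ w)
```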
AdaBoost-based on-line signature verifier
NASA Astrophysics Data System (ADS)
Hongo, Yasunori; Muramatsu, Daigo; Matsumoto, Takashi
2005-03-01
Authentication of individuals is rapidly becoming an important issue. The authors previously proposed a pen-input online signature verification algorithm. The algorithm considers a writer's signature as a trajectory of pen position, pen pressure, pen azimuth, and pen altitude that evolves over time, so that it is dynamic and biometric. Many algorithms have been proposed and reported to achieve accuracy for online signature verification, but setting the threshold value for these algorithms is a problem. In this paper, we introduce a user-generic model generated by AdaBoost, which resolves this problem. When user-specific models (one model for each user) are used for signature verification problems, we need to generate the models using only genuine signatures; forged signatures are not available, because impostors do not provide forged signatures for training in advance. However, by introducing a user-generic model, we can make use of other users' forged signatures in addition to the genuine signatures for learning. AdaBoost is a well-known classification algorithm that makes its final decision depending on the sign of the output value, so it is not necessary to set a threshold value. A preliminary experiment was performed on a database consisting of data from 50 individuals. This set consists of Western-alphabet-based signatures provided by a European research group. In this experiment, our algorithm gives an FRR of 1.88% and an FAR of 1.60%. Since no fine-tuning was done, this preliminary result looks very promising.
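The thresholding point generalizes: with a user-generic AdaBoost model trained on both genuine and forged examples, acceptance reduces to the sign of the ensemble output. A hedged sketch with scikit-learn on synthetic stand-in features (not the authors' pen-trajectory features):

```python
# Sketch: user-generic verification via AdaBoost; accept when the ensemble's
# decision function is positive, so no per-user threshold needs tuning.
# Features are synthetic stand-ins for distances between signature trajectories.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(6)
genuine = rng.normal(0.3, 0.1, size=(200, 4))    # small trajectory distances
forged = rng.normal(0.8, 0.2, size=(200, 4))     # larger distances
X = np.vstack([genuine, forged])
y = np.array([1] * 200 + [-1] * 200)             # +1 genuine, -1 forged

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
test = rng.normal(0.35, 0.1, size=(1, 4))        # an unseen "signature"
print("accept" if clf.decision_function(test)[0] > 0 else "reject")
```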
Remarks on non-singular black holes
NASA Astrophysics Data System (ADS)
Frolov, Valeri P.
2018-01-01
We briefly discuss non-singular black hole models, with the main focus on the properties of non-singular evaporating black holes. Such black holes possess an apparent horizon; however, the event horizon may be absent. In such a case, the information from the black hole interior may reach the external observer after the complete evaporation of the black hole. This model might be used for the resolution of the information loss puzzle. However, as we demonstrate, in the general case the quantum radiation emitted from the black hole interior, calculated in the given black hole background, is very large. This outburst of radiation is exponentially large for models with the redshift function α = 1. We show that it can be suppressed by including a non-trivial redshift function. However, even this suppression is not enough to guarantee self-consistency of the model. This problem is a manifestation of a general problem known as "mass inflation". We briefly comment on possible ways to overcome this problem in models of non-singular evaporating black holes.
Modelling uncertainty with generalized credal sets: application to conjunction and decision
NASA Astrophysics Data System (ADS)
Bronevich, Andrey G.; Rozenberg, Igor N.
2018-01-01
To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.
A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu
Today, the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Reverse logistics is therefore gaining importance and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP), minimizing the total cost, which involves the reverse logistics shipping cost and the fixed costs of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we then propose a genetic algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to the manufacturer. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach in comparison with recent research.
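The idea behind priority-based encoding can be illustrated on a single transportation stage: a chromosome assigns a priority to every source and depot, and decoding repeatedly serves the highest-priority node through its cheapest feasible arc. The sketch below follows this general decoding scheme; the costs, capacities, and tie-breaking rules are invented for illustration and do not reproduce the paper's two-stage method or WMX.

```python
# Sketch: decoding a priority vector into shipments for one logistics stage.
# Nodes 0-1 are sources (returned products); nodes 2-3 are disassembly centers.
import numpy as np

cost = np.array([[4.0, 6.0],      # cost[source][depot]
                 [5.0, 3.0]])
supply = [30.0, 20.0]
demand = [25.0, 25.0]
priority = [2, 4, 1, 3]           # chromosome: one priority per node

def decode(priority, supply, demand, cost):
    supply, demand = supply[:], demand[:]
    ship = np.zeros_like(cost)
    for node in sorted(range(len(priority)), key=lambda n: -priority[n]):
        if node < len(supply):    # a source: connect to cheapest open depots
            pairs = [(cost[node][j], node, j) for j in range(len(demand)) if demand[j] > 0]
        else:                     # a depot: connect to cheapest open sources
            j = node - len(supply)
            pairs = [(cost[i][j], i, j) for i in range(len(supply)) if supply[i] > 0]
        while pairs and ((node < len(supply) and supply[node] > 0) or
                         (node >= len(supply) and demand[node - len(supply)] > 0)):
            _, i, j = min(pairs)               # cheapest feasible arc
            q = min(supply[i], demand[j])      # ship as much as possible
            ship[i][j] += q
            supply[i] -= q
            demand[j] -= q
            pairs = [p for p in pairs if supply[p[1]] > 0 and demand[p[2]] > 0]
    return ship

print(decode(priority, supply, demand, cost))  # shipment matrix meeting all demand
```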
The Association between Preschool Children's Social Functioning and Their Emergent Academic Skills.
Arnold, David H; Kupersmidt, Janis B; Voegler-Lee, Mary Ellen; Marshall, Nastassja
2012-01-01
This study examined the relationship between social functioning and emergent academic development in a sample of 467 preschool children (M = 55.9 months old, SD = 3.8). Teachers reported on children's aggression, attention problems, and prosocial skills. Preliteracy, language, and early mathematics skills were assessed with standardized tests. Better social functioning was associated with stronger academic development. Attention problems were related to poorer academic development controlling for aggression and social skills, pointing to the importance of attention in these relations. Children's social skills were related to academic development controlling for attention and aggression problems, consistent with models suggesting that children's social strengths and difficulties are independently related to their academic development. Support was not found for the hypothesis that these relationships would be stronger in boys than in girls. Some relationships were stronger in African American than Caucasian children. Children's self-reported feelings about school moderated several relationships, consistent with the idea that positive feelings about school may be a protective factor against co-occurring academic and social problems.
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2016-12-01
In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these objects is described by corresponding linear or nonlinear discrete-time recurrent vector relations, and the control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different criteria of functioning and are united by information and control connections defined in advance. In this article we study the problem of optimizing the guaranteed result of program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final states of this system under incomplete information, and a general scheme for its solution.
Consistent three-equation model for thin films
NASA Astrophysics Data System (ADS)
Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul
2017-11-01
Numerical simulations of thin films of Newtonian fluids flowing down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either the momentum balance equation or the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent, and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speeds of kinematic waves and of solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
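Schematically (simplified notation; the actual models also carry surface tension and viscous terms), a depth-averaged two-equation model for the film thickness $h$ and flow rate $q = h\bar{u}$ reads

$$\partial_t h + \partial_x q = 0, \qquad \partial_t q + \partial_x\!\left(\Gamma\,\frac{q^2}{h}\right) = S(h, q),$$

where $\Gamma$ closes the averaged momentum flux $h\overline{u^2}$ in terms of $h$ and $q$, and $S$ collects gravity, pressure and wall-friction contributions. The consistency issue discussed in the paper concerns precisely how such closures can be fixed at a given order of the long-wave expansion, and why a third evolution variable is needed to do so coherently.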
Petri net modeling of high-order genetic systems using grammatical evolution.
Moore, Jason H; Hahn, Lance W
2003-11-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
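As background on the formalism (a toy illustration, not one of the authors' evolved biochemical networks), a discrete Petri net advances by checking which transitions are enabled by the current token marking and firing one of them:

```python
# marking maps places to token counts; pre/post give each
# transition's input and output arc weights (hypothetical net)
marking = {"S": 2, "E": 1, "P": 0}
pre = {"bind": {"S": 1, "E": 1}}
post = {"bind": {"P": 1, "E": 1}}

def enabled(marking, pre):
    """Transitions whose input places hold enough tokens."""
    return [t for t, need in pre.items()
            if all(marking[p] >= n for p, n in need.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume input tokens, produce outputs."""
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

print(enabled(marking, pre))             # ['bind']
print(fire(marking, pre, post, "bind"))  # {'S': 1, 'E': 1, 'P': 1}
```

Grammatical evolution then searches over such net structures and parameters for nets whose dynamics match a given genetic model.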
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.
1998-01-01
Response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach of constructing polynomial approximation models, kriging is presented as an alternative statistics-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed, along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
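A minimal sketch of the kriging variant singled out here, with a constant underlying global model and a Gaussian correlation function; the formulas are the standard DACE ones, but the 1D setting, the fixed correlation parameter, and all names are simplifications introduced for illustration.

```python
import numpy as np

def krige_fit(X, y, theta):
    """Kriging with constant trend beta and Gaussian correlation
    R_ij = exp(-theta * (x_i - x_j)**2)."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    Ri = np.linalg.inv(R)
    one = np.ones(len(X))
    beta = (one @ Ri @ y) / (one @ Ri @ one)  # GLS estimate of the trend
    return beta, Ri @ (y - beta * one)        # trend and kriging weights

def krige_predict(x, X, beta, w, theta):
    r = np.exp(-theta * (x - X) ** 2)         # correlation to the samples
    return beta + r @ w

X = np.array([0.0, 0.3, 0.6, 1.0])            # toy computer experiment
y = np.sin(2 * np.pi * X)
beta, w = krige_fit(X, y, theta=10.0)
print(krige_predict(0.45, X, beta, w, theta=10.0))
```

Unlike a second-order polynomial response surface, this predictor interpolates the sample points exactly, which is why it suits deterministic computer analyses.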
Using the Community Readiness Model in Native Communities.
ERIC Educational Resources Information Center
Jumper-Thurman, Pamela; Plested, Barbara A.; Edwards, Ruth W.; Helm, Heather M.; Oetting, Eugene R.
The effects of alcohol and other drug abuse are recognized as a serious problem in U.S. communities. Policy efforts and increased law enforcement have only a minimal impact if prevention strategies are not consistent with the community's level of readiness, are not culturally relevant, and are not community-specific. A model has been developed for…
Visual Literacy and the Integration of Parametric Modeling in the Problem-Based Curriculum
ERIC Educational Resources Information Center
Assenmacher, Matthew Benedict
2013-01-01
This quasi-experimental study investigated the application of visual literacy skills in the form of parametric modeling software in relation to traditional forms of sketching. The study included two groups of high school technical design students. The control and experimental groups involved in the study consisted of two randomly selected groups…
The Role of Human Intelligence in Computer-Based Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Epstein, Kenneth; Hillegeist, Eleanor
An Intelligent Tutoring System (ITS) consists of an expert problem-solving program in a subject domain, a tutoring model capable of remediation or primary instruction, and an assessment model that monitors student understanding. The Geometry Proof Tutor (GPT) is an ITS which was developed at Carnegie Mellon University and field tested in the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossavit, A.
The authors show how to pass from the local Bean's model, assumed to be valid as a behavior law for a homogeneous superconductor, to a model of similar form, valid on a larger space scale. The process, which can be iterated to higher and higher space scales, consists in solving for the fields e and j over a "periodicity cell" with periodic boundary conditions.
NASA Astrophysics Data System (ADS)
Alhusaini, Abdulnasser Alashaal F.
The Real Engagement in Active Problem Solving (REAPS) model was developed in 2004 by C. June Maker and colleagues as an intervention for gifted students to develop creative problem solving ability through the use of real-world problems. The primary purpose of this study was to examine the effects of the REAPS model on developing students' general creativity and creative problem solving in science with two durations as independent variables. The long duration of the REAPS model implementation lasted five academic quarters or approximately 10 months; the short duration lasted two quarters or approximately four months. The dependent variables were students' general creativity and creative problem solving in science. The second purpose of the study was to explore which aspects of creative problem solving (i.e., generating ideas, generating different types of ideas, generating original ideas, adding details to ideas, generating ideas with social impact, finding problems, generating and elaborating on solutions, and classifying elements) were most affected by the long duration of the intervention. The REAPS model in conjunction with Amabile's (1983; 1996) model of creative performance provided the theoretical framework for this study. The study was conducted using data from the Project of Differentiation for Diverse Learners in Regular Classrooms (i.e., the Australian Project) in which one public elementary school in the eastern region of Australia cooperated with the DISCOVER research team at the University of Arizona. All students in the school from first to sixth grade participated in the study. The total sample was 360 students, of which 115 were exposed to a long duration and 245 to a short duration of the REAPS model. The principal investigators used a quasi-experimental research design in which all students in the school received the treatment for different durations. Students in both groups completed pre- and posttests using the Test of Creative Thinking-Drawing Production (TCT-DP) and the Test of Creative Problem Solving in Science (TCPS-S). A one-way analysis of covariance (ANCOVA) was conducted to control for differences between the two groups on pretest results. Statistically significant differences were not found between posttest scores on the TCT-DP for the two durations of REAPS model implementation. However, statistically significant differences were found between posttest scores on the TCPS-S. These findings are consistent with Amabile's (1983; 1996) model of creative performance, particularly her explanation that domain-specific creativity requires knowledge such as specific content and technical skills that must be learned prior to being applied creatively. The findings are also consistent with literature in which researchers have found that longer interventions typically result in expected positive growth in domain-specific creativity, while both longer and shorter interventions have been found effective in improving domain-general creativity. Change scores were also calculated between pre- and posttest scores on the 8 aspects of creativity (Maker, Jo, Alfaiz, & Alhusaini, 2015a), and a binary logistic regression was conducted to assess which were the most affected by the long duration of the intervention. The regression model was statistically significant, with aspects of generating ideas, adding details to ideas, and finding problems being the most affected by the long duration of the intervention. 
Based on these findings, the researcher believes that the REAPS model is a useful intervention to develop students' creativity. Future researchers should implement the model for longer durations if they are interested in developing students' domain-specific creative problem solving ability.
Brief Psychotherapy in Family Practice
MacDonald, Peter J.; Brown, Alan
1986-01-01
A large number of patients with psychosocial or psychiatric disorders present to family physicians, and the family physician needs a model of psychotherapy with which to cope with their problems. A model of brief psychotherapy is presented which is time limited, goal directed and easy to learn. It consists of four facets drawn from established areas of psychotherapy: characteristics of the therapist; characteristics of the patient; Eriksonian developmental stages; and the process of therapy as described by Carkhuff. These facets fit together in a way which is useful to the family physician in managing those patient problems for which brief psychotherapy is indicated. PMID:21267176
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
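In schematic one-dimensional form (illustrative notation), the two-phase local/nonlocal constitutive relation blends a local term with a kernel-weighted nonlocal term:

$$\sigma(x) \;=\; \xi_1\,E\,\varepsilon(x) \;+\; \xi_2\,E \int_0^L K\bigl(|x - x'|, \kappa\bigr)\,\varepsilon(x')\,\mathrm{d}x', \qquad \xi_1 + \xi_2 = 1,$$

where $E$ is Young's modulus, $K$ is the nonlocal kernel with length parameter $\kappa$, and $\xi_1 \to 0$ recovers the purely nonlocal one-phase model from which Eringen's differential form is usually derived.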
Norman, Laura M.
2007-01-01
Ecological considerations need to be interwoven with economic policy and planning along the United States-Mexican border. Non-point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict non-point source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion-prediction model and a spatially derived sediment-delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation-planning problem.
NASA Astrophysics Data System (ADS)
Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai
2017-07-01
Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome consists of two parts, one for the multimodal transportation part of the model and one for the location-routing part. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Different cost scenarios were also designed to better analyze the performance of the model and the algorithm. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS software failed to reach an optimal solution even in much longer times.
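The abstract describes the chromosome only at a high level; a plausible sketch of such a two-part representation with part-wise recombination might look like the following (the gene coding is hypothetical).

```python
import random
from dataclasses import dataclass

@dataclass
class Chromosome:
    # part 1: transport mode chosen for each leg of the
    # supplier-to-distribution-center route (e.g. 0=road,
    # 1=rail, 2=sea; coding is hypothetical)
    modes: list
    # part 2: priority values decoded into opened distribution
    # centers and delivery tours to retailers
    priorities: list

def crossover(a, b):
    """Recombine each part independently with its own cut point,
    so multimodal genes and location-routing genes never mix."""
    cut1 = random.randrange(1, len(a.modes))
    cut2 = random.randrange(1, len(a.priorities))
    return Chromosome(a.modes[:cut1] + b.modes[cut1:],
                      a.priorities[:cut2] + b.priorities[cut2:])
```

Keeping the two parts separate lets each use a decoding and repair scheme suited to its own sub-problem.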
Soenens, Bart; Vansteenkiste, Maarten; Luyckx, Koen; Goossens, Luc
2006-03-01
Parental monitoring, assessed as (perceived) parental knowledge of the child's behavior, has been established as a consistent predictor of problem behavior. However, recent research indicates that parental knowledge has more to do with adolescents' self-disclosure than with parents' active monitoring. Although these findings may suggest that parents exert little influence on adolescents' problem behavior, the authors argue that this conclusion is premature, because self-disclosure may in itself be influenced by parents' rearing style. This study (a) examined relations between parenting dimensions and self-disclosure and (b) compared 3 models describing the relations among parenting, self-disclosure, perceived parental knowledge, and problem behavior. Results in a sample of 10th- to 12th-grade students, their parents, and their peers demonstrated that high responsiveness, high behavioral control, and low psychological control are independent predictors of self-disclosure. In addition, structural equation modeling analyses demonstrated that parenting is both indirectly (through self-disclosure) and directly associated with perceived parental knowledge but is not directly related to problem behavior or affiliation with peers engaging in problem behavior. Copyright (c) 2006 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Vittorio, Alan V.; Chini, Louise M.; Bond-Lamberty, Benjamin
2014-11-27
Climate projections depend on scenarios of fossil fuel emissions and land use change, and the IPCC AR5 parallel process assumes consistent climate scenarios across Integrated Assessment and Earth System Models (IAMs and ESMs). To facilitate consistency, CMIP5 used a novel land use harmonization to provide ESMs with seamless, 1500-2100 land use trajectories generated by historical data and four IAMs. However, we have identified and partially addressed a major gap in the CMIP5 land coupling design. The CMIP5 Community ESM (CESM) global afforestation is only 22% of RCP4.5 afforestation from 2005 to 2100. Likewise, only 17% of the Global Change Assessment Model's (GCAM's) 2040 RCP4.5 afforestation signal, and none of the pasture loss, were transmitted to CESM within a newly integrated model. This is a critical problem because afforestation is necessary for achieving the RCP4.5 climate stabilization. We attempted to rectify this problem by modifying only the ESM component of the integrated model, enabling CESM to simulate 66% of GCAM's afforestation in 2040, and 94% of GCAM's pasture loss as grassland and shrubland losses. This additional afforestation increases vegetation carbon gain by 19 PgC and decreases atmospheric CO2 gain by 8 ppmv from 2005 to 2040, implying different climate scenarios between CMIP5 GCAM and CESM. Similar inconsistencies likely exist in other CMIP5 model results, primarily because land cover information is not shared between models, with possible contributions from afforestation exceeding model-specific, potentially viable forest area. Further work to harmonize land cover among models will be required to adequately rectify this problem.
Center for Extended Magnetohydrodynamics Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, Jesus
This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. This project developed advanced simulation tools to study the non-linear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T.; Reynolds, Paul F., Jr.; Emanuel, William R.
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
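As a toy illustration of the general idea (not the authors' canopy models), external consistency can be cast as a least-squares problem: adjust the fine model's parameters until its aggregated output reproduces the coarse model's predictions.

```python
import numpy as np
from scipy.optimize import least_squares

def fine_model(params, t_daily):
    """Hypothetical daily-resolution flux model."""
    a, b = params
    return a * np.exp(-b * t_daily)

def residual(params, t_daily, coarse_monthly):
    daily = fine_model(params, t_daily)
    monthly = daily.reshape(-1, 30).sum(axis=1)  # aggregate to coarse scale
    return monthly - coarse_monthly              # consistency mismatch

t = np.arange(360)                 # 12 "months" of 30 days
coarse = np.full(12, 25.0)         # coarse model's monthly predictions
fit = least_squares(residual, x0=[1.0, 0.01], args=(t, coarse))
print(fit.x)                       # fine-model parameters after adjustment
```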
Hoppmann, Christiane A; Blanchard-Fields, Fredda
2011-09-01
Problem-solving does not take place in isolation and often involves social others such as spouses. Using repeated daily life assessments from 98 older spouses (M age = 72 years; M marriage length = 42 years), the present study examined theoretical notions from social-contextual models of coping regarding (a) the origins of problem-solving variability and (b) associations between problem-solving and specific problem-, person-, and couple-characteristics. Multilevel models indicate that the lion's share of variability in everyday problem-solving is located at the level of the problem situation. Importantly, participants reported more proactive emotion regulation and collaborative problem-solving for social than nonsocial problems. We also found person-specific consistencies in problem-solving. That is, older spouses high in Neuroticism reported more problems across the study period as well as less instrumental problem-solving and more passive emotion regulation than older spouses low in Neuroticism. Contrary to expectations, relationship satisfaction was unrelated to problem-solving in the present sample. Results are in line with the stress and coping literature in demonstrating that everyday problem-solving is a dynamic process that has to be viewed in the broader context in which it occurs. Our findings also complement previous laboratory-based work on everyday problem-solving by underscoring the benefits of examining everyday problem-solving as it unfolds in spouses' own environment.
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching. A novel scheme is presented called double topological relationship consistency (DCTR), which combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, thereby overcoming many problems of traditional methods, which depend on powerful invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras are located in very different orientations. The epipolar geometry can then be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
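The DCTR scheme itself cannot be reconstructed from the abstract, but the RANSAC recovery of epipolar geometry it mentions is standard; a conventional OpenCV baseline (file paths and thresholds are placeholders) looks like:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# robust fundamental matrix estimate; mask flags the inlier matches
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=1.0)
inliers1 = pts1[mask.ravel() == 1]
```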
Some solutions for one of the cosmological constant problems
NASA Astrophysics Data System (ADS)
Nojiri, Shin'Ichi
2016-11-01
We propose several covariant models which may solve one of the problems related to the cosmological constant. One of the models can be regarded as an extension of the sequestering model; the others can be regarded as extensions of the covariant formulation of unimodular gravity. The contributions to the vacuum energy from the quantum corrections of the matter fields are absorbed into a redefinition of a scalar field, and the quantum corrections become irrelevant to the dynamics. In a class of the extended unimodular gravity models, we also consider models which can be regarded as topological field theories. These models can be extended so that not only the vacuum energy but also any quantum corrections to the gravitational action become irrelevant for the dynamics. We find, however, that the BRS symmetry in the topological field theories is broken spontaneously, and therefore the models might not be consistent.
State estimation improves prospects for ocean research
NASA Astrophysics Data System (ADS)
Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.
Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.
ERIC Educational Resources Information Center
Marcovitz, Alan B., Ed.
A particularly difficult area for many engineering students is the approximate nature of the relation between models and physical systems. This is notably true when the models consist of differential equations. An approach applied to this problem has been to use analog computers to assist in portraying the output of a model as it is progressively…
ERIC Educational Resources Information Center
Tay, Su Lynn; Yeo, Jennifer
2018-01-01
Great teaching is characterised by the specific actions a teacher takes in the classroom to bring about learning. In the context of model-based teaching (MBT), teachers' difficulty in working with students' models that are not scientifically consistent is troubling. To address this problem, the aim of this study is to identify the pedagogical…
NASA Astrophysics Data System (ADS)
Sauer, Tim Allen
The purpose of this study was to evaluate the effectiveness of utilizing student constructed theoretical math models when teaching acceleration to high school introductory physics students. The goal of the study was for the students to be able to utilize mathematical modeling strategies to improve their problem solving skills, as well as their standardized scientific and conceptual understanding. This study was based on mathematical modeling research, conceptual change research and constructivist theory of learning, all of which suggest that mathematical modeling is an effective way to influence students' conceptual connectiveness and sense making of formulaic equations and problem solving. A total of 48 students in two sections of high school introductory physics classes received constructivist, inquiry-based, cooperative learning, and conceptual change-oriented instruction. The difference in the instruction for the 24 students in the mathematical modeling treatment group was that they constructed every formula they needed to solve problems from data they collected. In contrast, the instructional design for the control group of 24 students allowed the same instruction with assigned problems solved with formulas given to them without explanation. The results indicated that the mathematical modeling students were able to solve less familiar and more complicated problems with greater confidence and mental flexibility than the control group students. The mathematical modeling group maintained fewer alternative conceptions consistently in the interviews than did the control group. The implications for acceleration instruction from these results were discussed.
Process-informed extreme value statistics- Why and how?
NASA Astrophysics Data System (ADS)
Schumann, Andreas; Fischer, Svenja
2017-04-01
In many parts of the world, annual maximum series (AMS) of runoff consist of flood peaks which differ in their genesis. There are several reasons why these differences should be considered. Often multivariate flood characteristics (volumes, shapes) are of interest, and these characteristics depend on the flood types. For regionalization, the main impacts on the flood regime have to be specified; if this regime depends on different flood types, type-specific hydro-meteorological and/or watershed characteristics are relevant. Moreover, the ratios between event types often change over the range of observations. If a majority of events belonging to a certain flood type dominates the extrapolation of a probability distribution function (pdf), it is a problem if this more frequent type is not typical for the extraordinarily large extremes determining the right tail of the pdf. To consider differences in flood origin, several problems have to be solved. The events have to be separated into groups according to their genesis, which can be difficult for events long past, where e.g. precipitation data are not available. Another problem consists in the flood type-specific statistics: if block maxima are used, the sample of floods belonging to a certain type is often incomplete, as larger events overlay smaller ones. Some practically usable statistical tools to solve these and other problems are presented in a case study. Seasonal models were developed which differentiate between winter and summer floods, but also between events with long and short timescales. The pdfs of the two groups of summer floods are combined via a new mixing model. The application to German watersheds demonstrates the advantages of the new model, giving specific influence to flood types.
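The seasonal building block of such models is standard: if the winter and summer maxima are treated as independent, the annual-maximum distribution factorizes (the new mixing model for the two summer flood types goes beyond this, and its exact form is not given in the abstract):

$$F_{\mathrm{AMS}}(x) \;=\; P\bigl(\max(W, S) \le x\bigr) \;=\; F_W(x)\,F_S(x),$$

where $F_W$ and $F_S$ are the distribution functions of the winter and summer maxima. The factorization makes explicit how a frequent flood type can dominate the body of the distribution while a rarer type controls the right tail.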
One-dimensional hybrid model of plasma-solid interaction in argon plasma at higher pressures
NASA Astrophysics Data System (ADS)
Jelínek, P.; Hrach, R.
2007-04-01
One of the important problems in present-day plasma science is the surface treatment of materials at higher pressures, including atmospheric-pressure plasma. The theoretical analysis of processes in such plasmas is difficult because theories derived for collisionless or slightly collisional plasmas lose their validity at medium and high pressures; therefore, the methods of computational physics are widely used. There are two basic ways to model the physical processes taking place during the interaction of plasma with immersed solids. The first technique is the particle approach; the second is fluid modelling. Both approaches have their limitations: the low efficiency of particle modelling and the limited accuracy of fluid models. Computer modelling therefore endeavours to combine the advantages of the two approaches; this combination is named hybrid modelling. In our work, a one-dimensional hybrid model of plasma-solid interaction has been developed for an electropositive plasma at higher pressures. We use the hybrid model for this problem as a test for subsequent applications, e.g. pulsed discharges, RF discharges, etc. The hybrid model consists of a combined molecular dynamics-Monte Carlo model for fast electrons and a fluid model for slow electrons and positive argon ions. The latter model also contains Poisson's equation, to obtain a self-consistent electric field distribution. The derived results include the spatial distributions of electric potential, concentrations and fluxes of the individual charged species near the substrate for various pressures and for various probe voltage biases.
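As an illustration of the field solve in the fluid part (a minimal sketch, not the authors' code), Poisson's equation on a uniform 1D grid with Dirichlet boundaries reduces to a tridiagonal linear system:

```python
import numpy as np

def poisson_1d(rho, dx, phi0=0.0, phiL=0.0, eps0=8.854e-12):
    """Solve d^2(phi)/dx^2 = -rho/eps0 for the interior nodes of a
    uniform grid, with potentials phi0 and phiL at the walls."""
    n = len(rho)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))
    b = -rho * dx**2 / eps0
    b[0] -= phi0                 # fold boundary values into the RHS
    b[-1] -= phiL
    return np.linalg.solve(A, b)
```

In a hybrid cycle, the charge density accumulated from the particle and fluid species is fed to such a solver, and the resulting field is fed back to the movers, which is what makes the field distribution self-consistent.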
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
NASA Astrophysics Data System (ADS)
Hazan, Aurélien
2017-05-01
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume, both exact and approximate, are compared; they are inspired by operations research methods and by the analysis of metabolic networks. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
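One of the simplest approximate volume methods in this spirit is plain Monte Carlo rejection sampling inside a bounding box (a crude stand-in for the exact and approximate algorithms compared in the paper; its accuracy degrades quickly with dimension):

```python
import numpy as np

def polytope_volume_mc(A, b, lo, hi, n=200_000, seed=0):
    """Estimate the volume of {x : A x <= b} inside the box [lo, hi]^d
    by uniform sampling and counting the accepted points."""
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    pts = rng.uniform(lo, hi, size=(n, d))
    inside = np.all(pts @ A.T <= b, axis=1)
    return (hi - lo) ** d * inside.mean()

# toy check: the 2D simplex {x, y >= 0, x + y <= 1} has volume 0.5
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
print(polytope_volume_mc(A, b, 0.0, 1.0))
```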
Veliz-Cuba, Alan; Aguilar, Boris; Hinkelmann, Franziska; Laubenbacher, Reinhard
2014-06-26
A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem.
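For very small networks, the exhaustive baseline that this algorithm improves upon is easy to state (the update rules below are a toy illustration):

```python
from itertools import product

# toy 3-node Boolean network; rules[i] maps the full state to
# node i's next value (hypothetical update functions)
rules = [
    lambda x: x[1] and not x[2],
    lambda x: x[0],
    lambda x: x[0] or x[1],
]

def steady_states(rules):
    """Exhaustive fixed-point search, feasible only for small n;
    the paper replaces this with network reduction plus solving a
    polynomial system over GF(2)."""
    n = len(rules)
    return [s for s in product([0, 1], repeat=n)
            if all(int(r(s)) == s[i] for i, r in enumerate(rules))]

print(steady_states(rules))   # [(0, 0, 0)] for this toy net
```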
NASA Technical Reports Server (NTRS)
Acikmese, Behcet A.; Carson, John M., III
2005-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Carson, John M., III
2006-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
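As a generic illustration of the receding-horizon mechanics only (nominal linear MPC; the robust feedback design and uncertainty bounds of these papers are not reproduced), a sketch using the cvxpy modeling library:

```python
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy nominal dynamics
B = np.array([[0.0], [0.1]])
N = 10                                    # prediction horizon

def mpc_step(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= 1.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]                  # apply only the first move

x = np.array([1.0, 0.0])
for _ in range(5):                        # receding-horizon loop
    x = A @ x + B @ mpc_step(x)
```

Feasibility of each re-solve is exactly what the resolvability guarantee addresses: in the robust scheme, feasibility at the initial state implies feasibility at every later step despite the uncertainty.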
NASA Astrophysics Data System (ADS)
Handayani, I.; Januar, R. L.; Purwanto, S. E.
2018-01-01
This research aims to determine the influence of the Missouri Mathematics Project learning model on the mathematical problem-solving ability of students at junior high school. This is a quantitative study using the Quasi Experimental Design method. The research population includes all students of grade VII of the junior high school who were enrolled in the even semester of the academic year 2016/2017. The sample consists of 76 students from the experimental and control groups. The sampling technique used is the cluster sampling method. The instrument consists of 7 essay questions whose validity, reliability, difficulty level and discriminating power have been tested. Before the t-test analysis, the data were checked for and fulfilled the requirements of normality and homogeneity. The results show that the Missouri Mathematics Project learning model influences the mathematical problem-solving ability of students at junior high school, with a medium effect.
Recent Advances in the Edge-Function Method 1979-1980
1980-07-30
If the residuals are within the limits within which an engineer can specify the boundary conditions of the problem, then the corresponding mathematical model is acceptable at the given truncation level. The consistent preference shown by the solver routine for vertex functions as opposed to polar functions reinforces the expectations of… Accordingly, each solution provides a mathematical model for the given physical problem; R.M.S. values provide a practical criterion for the engineer to…
Lee, Matthew R.; Chassin, Laurie; MacKinnon, David P.
2015-01-01
Background: Research has shown a developmental process of "maturing out" of problem drinking beginning in young adulthood. Perhaps surprisingly, past studies suggest that young adult drinking reductions may be particularly pronounced among those exhibiting relatively severe forms of problem drinking earlier in emerging adulthood. This may occur because more severe problem drinkers experience stronger ameliorative effects of normative young adult role transitions like marriage. Methods: The hypothesis of stronger marriage effects among more severe problem drinkers was tested using three waves of data from a large ongoing study of familial alcohol disorder (Chassin et al., 1992; N=844; 51% children of alcoholics). Results: Longitudinal growth models characterized (1) the curvilinear trajectory of drinking quantity from ages 17-40, (2) effects of marriage on altering this age-related trajectory, and (3) moderation of this effect by pre-marriage problem drinking levels (alcohol consequences and dependence symptoms). Results confirmed the hypothesis that protective marriage effects on drinking quantity trajectories would be stronger among more severe pre-marriage problem drinkers. Supplemental analyses showed that results were robust to alternative construct operationalizations and modeling approaches. Conclusions: Consistent with role incompatibility theory, findings support the view of role conflict as a key mechanism of role-driven behavior change, as greater problem drinking likely conflicts more with demands of roles like marriage. This is also consistent with the developmental psychopathology view of transitions and turning points. Role transitions among already low-severity drinkers may merely represent developmental continuity of a low-risk trajectory, whereas role transitions among higher-severity problem drinkers may represent developmentally discontinuous "turning points" that divert individuals from a higher- to a lower-risk trajectory. Practically, findings support the clinical relevance of role-related "maturing out processes" by suggesting that they often reflect natural recovery from clinically significant problem drinking. Thus, understanding these processes could help clarify the nature of pathological drinking and inform interventions. PMID:26009967
Reiner, A; Høye, J S
2005-12-01
The hierarchical reference theory and the self-consistent Ornstein-Zernike approximation are two liquid state theories that both furnish a largely satisfactory description of the critical region as well as phase coexistence and the equation of state in general. Furthermore, there are a number of similarities that suggest the possibility of a unification of both theories. As a first step towards this goal, we consider the problem of combining the lowest order gamma expansion result for the incorporation of a Fourier component of the interaction with the requirement of consistency between internal and free energies, leaving aside the compressibility relation. For simplicity, we restrict ourselves to a simplified lattice gas that is expected to display the same qualitative behavior as more elaborate models. It turns out that the analytically tractable mean spherical approximation is a solution to this problem, as are several of its generalizations. Analysis of the characteristic equations shows the potential for a practical scheme and yields necessary conditions that any closure to the Ornstein-Zernike relation must fulfill for the consistency problem to be well posed and to have a unique differentiable solution. These criteria are expected to remain valid for more general discrete and continuous systems, even if consistency with the compressibility route is also enforced, in which case possible explicit solutions will require numerical evaluations.
Solution of the Eshelby problem in gradient elasticity for multilayer spherical inclusions
NASA Astrophysics Data System (ADS)
Volkov-Bogorodskii, D. B.; Lurie, S. A.
2016-03-01
We consider gradient models of elasticity which permit taking into account the characteristic scale parameters of the material. We prove the Papkovich-Neuber theorems, which determine the general form of the gradient solution and the structure of scale effects. We derive the Eshelby integral formula for the gradient moduli of elasticity, which plays the role of the closing equation in the self-consistent three-phase method. In the gradient theory of deformations, we consider the fundamental Eshelby-Christensen problem of determining the effective elastic properties of dispersed composites with spherical inclusions; the exact solution of this problem for classical models was obtained in 1976. This paper is the first to present the exact analytical solution of the Eshelby-Christensen problem for the gradient theory, which permits estimating the influence of scale effects on the stress state and the effective properties of the dispersed composites under study. We also analyze the influence of scale factors.
NASA Astrophysics Data System (ADS)
Aswan, D. M.; Lufri, L.; Sumarmin, R.
2018-04-01
This research intends to determine the effect of Problem Based Learning models on students' critical thinking skills and competences. This study was a quasi-experimental research. The population of the study comprised the grade VIII students of SMPN 1 Subdistrict Gunuang Omeh. Random sample selection was done by randomizing the classes: class VIII3 was chosen as the experimental class, which received problem-based learning, and class VIII1 as the control class, which received conventional instruction. The instruments consisted of a critical thinking test, cognitive tests, and observation sheets for the affective and psychomotor domains. The independent t-test and the Mann-Whitney U test were used for the analysis. Results showed that there was a significant difference (sig < 0.05) between the control and experimental groups. The conclusion of this study is that Problem Based Learning models affected the students' critical thinking skills and competences.
An inverse model for a free-boundary problem with a contact line: Steady case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, Oleg; Protas, Bartosz
2009-07-20
This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.
Fong, Michelle C; Measelle, Jeffrey; Conradt, Elisabeth; Ablow, Jennifer C
2017-02-01
The purpose of the current study was to predict concurrent levels of problem behaviors from young children's baseline cortisol and attachment classification, a proxy for the quality of caregiving experienced. In a sample of 58 children living at or below the federal poverty threshold, children's baseline cortisol levels, attachment classification, and problem behaviors were assessed at 17 months of age. We hypothesized that an interaction between baseline cortisol and attachment classification would predict problem behaviors above and beyond any main effects of baseline cortisol and attachment. However, based on limited prior research, we did not predict whether or not this interaction would be more consistent with diathesis-stress or differential susceptibility models. Consistent with diathesis-stress theory, the results indicated no significant differences in problem behavior levels among children with high baseline cortisol. In contrast, children with low baseline cortisol had the highest level of problem behaviors in the context of a disorganized attachment relationship. However, in the context of a secure attachment relationship, children with low baseline cortisol looked no different, with respect to problem behavior levels, than children with high cortisol levels. These findings have substantive implications for the socioemotional development of children reared in poverty. Copyright © 2017 Elsevier Inc. All rights reserved.
Rasch Measurement of Collaborative Problem Solving in an Online Environment.
Harding, Susan-Marie E; Griffin, Patrick E
2016-01-01
This paper describes an approach to the assessment of human-to-human collaborative problem solving using a set of online interactive tasks completed by student dyads. Within the dyad, roles were nominated as either A or B, and students selected their own roles. The question as to whether role selection affected individual student performance measures is addressed. Process stream data were captured from 3402 students in six countries who explored the problem space by clicking, dragging the mouse, moving the cursor and collaborating with their partner through a chat box window. The process stream data were explored to identify behavioural indicators that represented elements of a conceptual framework. These indicative behaviours were coded into a series of dichotomous items representing actions and chats performed by students. The frequency of occurrence was used as a proxy measure of item difficulty; given a measure of item difficulty, student ability could then be estimated using the difficulty estimates of the range of items demonstrated by the student. The Rasch simple logistic model was used to review the indicators to identify those that were consistent with the assumptions of the model and were invariant across national samples, language, curriculum and age of the student. The data were analysed using one- and two-dimensional, one-parameter models. Rasch separation reliability, fit to the model, distribution of students and items on the underpinning construct, estimates for each country and the effect of role differences are reported. This study provides evidence that collaborative problem solving can be assessed in an online environment involving human-to-human interaction, using behavioural indicators shown to have a consistent relationship between the estimate of student ability and the probability of demonstrating the behaviour.
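The Rasch simple logistic model used here gives the probability that student $n$ with ability $\theta_n$ demonstrates indicator $i$ of difficulty $b_i$ as

$$P(X_{ni} = 1) \;=\; \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)},$$

so ability and difficulty share one scale, which is what allows indicator frequency to serve as a difficulty proxy and invariance to be tested across samples.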
Efficient 3D multi-region prostate MRI segmentation using dual optimization.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2013-01-01
Efficient and accurate extraction of the prostate, and in particular its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and the diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great advantages in numerics and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, assessed for inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.
Asteroseismic Constraints on the Models of Hot B Subdwarfs: Convective Helium-Burning Cores
NASA Astrophysics Data System (ADS)
Schindler, Jan-Torge; Green, Elizabeth M.; Arnett, W. David
2017-10-01
Asteroseismology of non-radial pulsations in Hot B Subdwarfs (sdB stars) offers a unique view into the interior of core-helium-burning stars. Ground-based and space-borne high-precision light curves allow for the analysis of pressure and gravity mode pulsations to probe the structure of sdB stars deep into the convective core. As such, asteroseismological analysis provides an excellent opportunity to test our understanding of stellar evolution. In light of the newest constraints from asteroseismology of sdB and red clump stars, standard approaches to convective mixing in 1D stellar evolution models are called into question. The problem lies in the current treatment of overshooting and the entrainment at the convective boundary. Unfortunately, no consistent algorithm of convective mixing exists to solve the problem, introducing uncertainties into the estimates of stellar ages. Three-dimensional simulations of stellar convection show the natural development of an overshooting region and a boundary layer. In the search for a consistent prescription of convection in one-dimensional stellar evolution models, guidance from three-dimensional simulations and asteroseismological results is indispensable.
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models
2017-01-01
We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, an architecture that has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
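A bare-bones sketch of such an encoder-decoder (a GRU variant without the attention mechanism a full model would typically use); the vocabulary size, dimensions, and random token data are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder over token indices (e.g., SMILES tokens)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))          # encode product tokens
        dec, _ = self.decoder(self.embed(tgt_in), h)  # teacher-forced decoding
        return self.out(dec)                          # (batch, len, vocab) logits

vocab = 50
model = Seq2Seq(vocab)
src = torch.randint(0, vocab, (8, 30))   # product token ids (dummy data)
tgt = torch.randint(0, vocab, (8, 29))   # reactant token ids (dummy data)
logits = model(src, tgt[:, :-1])         # predict each next reactant token
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
loss.backward()
```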
Polar versus Cartesian velocity models for maneuvering target tracking with IMM
NASA Astrophysics Data System (ADS)
Laneuville, Dann
This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for this problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments, augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous-time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that this new approach significantly improves tracking performance in this case.
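For reference, a sketch of the standard IMM interaction (mixing) step that any such filter performs before the model-conditioned Kalman updates; the two-model setup and the numbers are illustrative, not the paper's.

```python
import numpy as np

def imm_mix(mu, P_trans, states, covs):
    """Standard IMM interaction step: mix model-conditioned estimates.
    mu: (r,) model probabilities; P_trans: (r, r) Markov transition matrix
    with P_trans[i, j] = P(model j at k+1 | model i at k);
    states: list of (n,) means; covs: list of (n, n) covariances."""
    r = len(mu)
    c = P_trans.T @ mu                        # predicted model probabilities
    w = (P_trans * mu[:, None]) / c[None, :]  # w[i, j] = mixing weight
    mixed_states, mixed_covs = [], []
    for j in range(r):
        x0 = sum(w[i, j] * states[i] for i in range(r))
        P0 = np.zeros_like(covs[0])
        for i in range(r):
            d = (states[i] - x0)[:, None]
            P0 += w[i, j] * (covs[i] + d @ d.T)  # spread-of-means term
        mixed_states.append(x0)
        mixed_covs.append(P0)
    return c, mixed_states, mixed_covs

mu = np.array([0.9, 0.1])                     # e.g., NCV vs. coordinated turn
P_trans = np.array([[0.95, 0.05], [0.05, 0.95]])
states = [np.array([0.0, 10.0]), np.array([0.5, 9.0])]
covs = [np.eye(2), 2 * np.eye(2)]
print(imm_mix(mu, P_trans, states, covs)[0])
```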
Child Abuse & Neglect in the Mexican American Community. Course Model.
ERIC Educational Resources Information Center
Camacho, Rosie Lee
Consisting of three units, the course model aims to prepare students to address the problem of abuse and/or neglect in the Mexican American community. Unit one focuses on the two major parts of the informal helping system in the Mexican American community, the barrio and the family. Unit two concentrates on the traditional child welfare system and…
An interactive review system for NASTRAN
NASA Technical Reports Server (NTRS)
Durocher, L. L.; Gasper, A. F.
1982-01-01
An interactive review system that addresses the problems of model display, model error checking, and postprocessing is described. The menu-driven system consists of four programs whose advantages and limitations are detailed. The interface between NASTRAN and MOVIE-BYU, the modifications required to make MOVIE usable in a finite element context, and the resulting capabilities of MOVIE as a graphics postprocessor for NASTRAN are illustrated.
Diesel engine torsional vibration control coupling with speed control system
NASA Astrophysics Data System (ADS)
Guo, Yibin; Li, Wanyou; Yu, Shuwen; Han, Xiao; Yuan, Yunbo; Wang, Zhipeng; Ma, Xiuzhen
2017-09-01
Coupling problems between shafting torsional vibration and the speed control system of a diesel engine are very common. Neglecting these coupling problems sometimes leads to serious oscillation and vibration during engine operation. For example, during propulsion shafting operation of a diesel engine, oscillation of the engine speed and severe vibration of the gear box occurred, rendering the engine unable to operate. To find the cause of the malfunctions, a simulation model coupling the speed control system with the torsional vibration of the deformable shafting is proposed and investigated. In the coupled model, the shafting is simplified to a deformable system consisting of several inertias and shaft sections with torsional-vibration characteristics. The instantaneous rotation speed results from this proposed model agree with the test results very well and successfully reflect the real oscillation state of the engine operation. Furthermore, using the proposed model, the speed control parameters can be tuned to ensure stable and safe engine operation. The results from tests on the diesel engine with a set of tuned control parameters agree closely with the simulation results.
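A hedged sketch of the kind of coupled model described above: a two-inertia torsional shafting driven through a PI speed governor. All parameter values are illustrative placeholders, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-inertia torsional model coupled to a PI speed governor (toy values).
J1, J2 = 2.0, 5.0        # engine and load inertias [kg m^2]
k, c = 8.0e3, 5.0        # shaft torsional stiffness [N m/rad] and damping
Kp, Ki = 50.0, 20.0      # governor gains
w_ref = 100.0            # speed setpoint [rad/s]
T_load = 200.0           # constant load torque [N m]

def rhs(t, y):
    th1, w1, th2, w2, e_int = y
    T_shaft = k * (th1 - th2) + c * (w1 - w2)
    T_engine = Kp * (w_ref - w1) + Ki * e_int   # PI governor output torque
    return [w1,
            (T_engine - T_shaft) / J1,
            w2,
            (T_shaft - T_load) / J2,
            w_ref - w1]                          # integral of speed error

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 90.0, 0.0, 90.0, 0.0], max_step=1e-3)
print("final engine speed [rad/s]:", sol.y[1, -1])
```

Tuning Kp and Ki in such a model is the kind of exercise the abstract describes: poor gains excite the torsional mode, while well-chosen gains yield a stable instantaneous rotation speed.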
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit from the model are discussed, and then an enhanced version of this model, incorporating the knowledge needed for the missile detection-destruction problem, is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes, with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically, existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Konstantinos N.; Azuma, Takehiro; Ito, Yuta; Nishimura, Jun; Papadoudis, Stratos Kovalkov
2018-02-01
In recent years the complex Langevin method (CLM) has proven a powerful method in studying statistical systems which suffer from the sign problem. Here we show that it can also be applied to an important problem concerning why we live in four-dimensional spacetime. Our target system is the type IIB matrix model, which is conjectured to be a nonperturbative definition of type IIB superstring theory in ten dimensions. The fermion determinant of the model becomes complex upon Euclideanization, which causes a severe sign problem in its Monte Carlo studies. It is speculated that the phase of the fermion determinant actually induces the spontaneous breaking of the SO(10) rotational symmetry, which has direct consequences on the aforementioned question. In this paper, we apply the CLM to the 6D version of the type IIB matrix model and show clear evidence that the SO(6) symmetry is broken down to SO(3). Our results are consistent with those obtained previously by the Gaussian expansion method.
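A minimal toy example of the CLM (not the matrix-model computation itself): for the Gaussian action S(x) = σx²/2 with complex σ, the variable is complexified to z and evolved by Langevin dynamics with real noise; the long-time average of z² should reproduce the exact ⟨x²⟩ = 1/σ. All values are illustrative.

```python
import numpy as np

# Toy complex Langevin: S(x) = 0.5 * sigma * x**2 with complex sigma.
# Exact result: <x^2> = 1 / sigma. CLM complexifies x -> z.
rng = np.random.default_rng(0)
sigma = 1.0 + 0.5j
dt, n_steps, n_therm = 1e-3, 200_000, 20_000

z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    drift = -sigma * z                     # -dS/dz
    z = z + drift * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if step >= n_therm:                    # discard thermalization
        samples.append(z * z)

print("CLM   <x^2> =", np.mean(samples))
print("exact <x^2> =", 1 / sigma)
```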
On the stability of equilibrium for a reformulated foreign trade model of three countries
NASA Astrophysics Data System (ADS)
Dassios, Ioannis K.; Kalogeropoulos, Grigoris
2014-06-01
In this paper, we study the stability of equilibrium for a foreign trade model consisting of three countries. As the gravity equation has proven an excellent tool of analysis, adequately stable over time and space all over the world, we extend the problem to three masses. We use the basic structure of the Heckscher-Ohlin-Samuelson model: national income equals consumption outlays plus investment plus exports minus imports. The proposed reformulation of the problem focuses on two basic concepts: (1) the delay inherent in our economic variables and (2) the interaction effects among the three economies involved. Stability and stabilizability conditions are investigated, while numerical examples provide further insight and better understanding. Finally, a generalization of the gravity equation is obtained for the model.
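A hedged numerical sketch of the kind of stability check involved: a three-country income system with a one-period delay is put in companion form and its spectral radius examined. The matrices below are invented placeholders, not the paper's model.

```python
import numpy as np

# Illustrative three-country income dynamics with a one-period delay:
#   Y(t+1) = A Y(t) + B Y(t-1) + g
# A captures domestic propensities, B the delayed trade interactions.
A = np.array([[0.5, 0.1, 0.1],
              [0.1, 0.4, 0.2],
              [0.1, 0.1, 0.5]])
B = np.array([[0.0, 0.1, 0.1],
              [0.1, 0.0, 0.1],
              [0.1, 0.1, 0.0]])

# Companion form of the delayed system with state x(t) = [Y(t), Y(t-1)].
n = A.shape[0]
C = np.block([[A, B],
              [np.eye(n), np.zeros((n, n))]])
radius = max(abs(np.linalg.eigvals(C)))
print("spectral radius:", radius,
      "-> stable" if radius < 1 else "-> unstable")
```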
Forward Field Computation with OpenMEEG
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2011-01-01
To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0. PMID:21437231
Development of guidelines for the definition of the relevant information content in data classes
NASA Technical Reports Server (NTRS)
Schmitt, E.
1973-01-01
The problem of experiment design is defined as an information system consisting of an information source, a measurement unit, environmental disturbances, data handling and storage, and the mathematical analysis and usage of data. Based on today's concept of effective computability, general guidelines for the definition of the relevant information content in data classes are derived. The lack of a universally applicable information theory and corresponding mathematical or system structure restricts the solvable problem classes to a small set. It is expected that a new relativity theory of information, generally described by a universal algebra of relations, will lead to new mathematical models and system structures capable of modeling any well-defined practical problem isomorphic to an equivalence relation at any corresponding level of abstractness.
In defense of genuine ignorance: supporting vitality and relevance in graduate curricula.
Goren, S; Peter, L; Fischer, S
1992-01-01
Genuine ignorance, defined by John Dewey as curiosity and open-mindedness in opposition to repetition of catch phrases and familiar propositions, is nurtured in graduate nursing curricula in which the educational process is congruent with course content. Preparation for advanced practice in the mental health environment of the foreseeable future required abandonment of the familiar medical model in favor of conceptual models consistent with current thinking in psychiatric nursing and exposure to current problems (homelessness, family violence, AIDS) and current problem-solving strategies (brief treatment, family preservation). Involvement in practice-based research and operationalizing new perspectives on familiar clinical problems are suggested as strategies for developing the advanced practitioner. Two of the authors, former graduate students, describe the impact of changed perspectives and research activity on their own practice.
Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model
NASA Astrophysics Data System (ADS)
Meneghini, O.
2015-11-01
The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal-structure, and equilibrium problems from first principles, together with its experimental tests, is presented. Validation with DIII-D discharges in high-confinement regimes shows that the workflow is capable of robustly predicting the kinetic profiles from the axis to the separatrix and matching the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height or of any measurement of the temperature or pressure. Self-consistent coupling has proven to be essential to match the experimental results and to capture the non-linear physics that governs the core and pedestal solutions. In particular, clear stabilization of the pedestal peeling-ballooning instabilities by the global Shafranov shift and destabilization by additional edge bootstrap current, and the subsequent effect on the core plasma profiles, have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux) and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such a workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, pedestal, and core-transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.
Lorber, Michael F; Egeland, Byron
2009-07-01
Developmental models and previous findings suggest that early parenting is more strongly associated with externalizing problems in early childhood than it is in adolescence. In this article, the authors address whether the association of poor-quality infancy parenting and externalizing problems "rebounds" in adulthood. Poor-quality infancy parenting was associated with externalizing problems at kindergarten and first grade (mother report) as well as at 23 and 26 years (self report). Infancy parenting was not significantly associated with either mothers' or youths' reports of externalizing problems at 16 years. These findings are consistent with the notion that poor-quality infancy parenting is a risk factor for externalizing problems in developmental periods for which externalizing behavior is most deviant.
Liu, Wei; Huang, Jie
2018-03-01
This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed jointly connected switching networks. Second, by performing a new coordinate and input transformation, we convert our problem into the cooperative global robust stabilization problem of a more complex augmented system via the distributed internal model principle. Third, we solve the stabilization problem by a distributed state feedback control law. Our result is illustrated by the leader-following consensus problem for a group of Van der Pol oscillators.
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
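For reference, the two-phase local/nonlocal constitutive relation typically takes the following form (written here for the bending moment with the standard exponential kernel; the notation is the common one in the literature and may differ from the paper's):

```latex
% Two-phase local/nonlocal bending law with the exponential (Helmholtz) kernel:
M(x) = \xi_1\, EI\, w''(x)
     + \xi_2\, EI \int_0^L \frac{1}{2\kappa}\, e^{-|x-s|/\kappa}\, w''(s)\, \mathrm{d}s,
\qquad \xi_1 + \xi_2 = 1,\quad \xi_1 > 0,
```

where κ is the nonlocal length parameter; the purely nonlocal limit ξ1 → 0 recovers the differential (Eringen) model, which is where the paradoxes discussed above arise.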
The sdg interacting-boson model applied to 168Er
NASA Astrophysics Data System (ADS)
Yoshinaga, N.; Akiyama, Y.; Arima, A.
1986-03-01
The sdg interacting-boson model is applied to 168Er. Energy levels and E2 transitions are calculated. This model is shown to solve the problem of anharmonicity regarding the excitation energy of the first Kπ=4+ band relative to that of the first Kπ=2+ one. The level scheme including the Kπ=3+ band is well reproduced and the calculated B(E2)'s are consistent with the experimental data.
Existence of solutions for a host-parasite model
NASA Astrophysics Data System (ADS)
Milner, Fabio Augusto; Patton, Curtis Allan
2001-12-01
The sea bass Dicentrarchus labrax has several gill ectoparasites. Diplectanum aequans (Plathelminth, Monogenea) is one of these species. Under certain demographic conditions, this flat worm can trigger pathological problems, in particular in fish farms. The life cycle of the parasite is described and a model for the dynamics of its interaction with the fish is described and analyzed. The model consists of a coupled system of ordinary differential equations and one integro-differential equation.
Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor
2010-01-31
…propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to… self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to… determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.
On the consistency of Reynolds stress turbulence closures with hydrodynamic stability theory
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Abid, Ridha; Blaisdell, Gregory A.
1995-01-01
The consistency of second-order closure models with results from hydrodynamic stability theory is analyzed for the simplified case of homogeneous turbulence. In a recent study, Speziale, Gatski, and MacGiolla Mhuiris showed that second-order closures are capable of yielding results that are consistent with hydrodynamic stability theory for the case of homogeneous shear flow in a rotating frame. It is demonstrated in this paper that this success is due to the fact that the stability boundaries for rotating homogeneous shear flow are not dependent on the details of the spatial structure of the disturbances. For those instances where they are -- such as in the case of elliptical flows where the instability mechanism is more subtle -- the results are not so favorable. The origins and extent of this modeling problem are examined in detail along with a possible resolution based on rapid distortion theory (RDT) and its implications for turbulence modeling.
The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.
Muller, A; Pontonnier, C; Dumont, G
2018-02-01
The present paper presents a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
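A loose, hypothetical sketch of the interpolate-then-correct idea on a one-joint, two-muscle toy model; the database rule and the multiplicative correction are simplifications for illustration and do not reproduce the authors' implementation.

```python
import numpy as np

# Illustrative one-joint model: two muscles with moment arms r, max forces fmax.
r = np.array([0.04, -0.03])        # moment arms [m]
fmax = np.array([1500.0, 1200.0])  # maximal muscle forces [N]

# Offline database: "optimal" activations precomputed on a torque grid
# (here faked with a placeholder rule instead of a real optimization).
torque_grid = np.linspace(-30, 30, 61)
db = np.array([[max(t, 0) / (r[0] * fmax[0]), max(-t, 0) / (-r[1] * fmax[1])]
               for t in torque_grid])

def music_like(torque):
    # 1) Interpolate a first estimate from the offline database.
    a = np.array([np.interp(torque, torque_grid, db[:, j]) for j in range(2)])
    # 2) Correct it so the resulting forces exactly balance the joint torque.
    produced = r @ (a * fmax)
    if produced != 0.0:
        a = a * (torque / produced)
    return np.clip(a, 0.0, 1.0)

print(music_like(12.5))   # activations for a 12.5 N m flexion torque
```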
Hart, Trevor A; Noor, Syed W; Adam, Barry D; Vernon, Julia R G; Brennan, David J; Gardner, Sandra; Husbands, Winston; Myers, Ted
2017-10-01
Syndemics research shows the additive effect of psychosocial problems on high-risk sexual behavior among gay and bisexual men (GBM). Psychosocial strengths may predict less engagement in high-risk sexual behavior. In a study of 470 ethnically diverse HIV-negative GBM, regression models were computed using number of syndemic psychosocial problems, number of psychosocial strengths, and serodiscordant condomless anal sex (CAS). The number of syndemic psychosocial problems correlated with serodiscordant CAS (RR = 1.51, 95% CI 1.18-1.92; p = 0.001). When adding the number of psychosocial strengths to the model, the effect of syndemic psychosocial problems became non-significant, but the number of strengths-based factors remained significant (RR = 0.67, 95% CI 0.53-0.86; p = 0.002). Psychosocial strengths may operate additively in the same way as syndemic psychosocial problems, but in the opposite direction. Consistent with theories of resilience, psychosocial strengths may be an important set of variables predicting sexual risk behavior that is largely missing from the current HIV behavioral literature.
Fung, Wenson; Swanson, H Lee
2017-07-01
The purpose of this study was to assess whether the differential effects of working memory (WM) components (the central executive, phonological loop, and visual-spatial sketchpad) on math word problem-solving accuracy in children (N = 413, ages 6-10) are completely mediated by reading, calculation, and fluid intelligence. The results indicated that all three WM components predicted word problem solving in the nonmediated model, but only the storage component of WM yielded a significant direct path to word problem-solving accuracy in the fully mediated model. Fluid intelligence was found to moderate the relationship between WM and word problem solving, whereas reading, calculation, and related skills (naming speed, domain-specific knowledge) completely mediated the influence of the executive system on problem-solving accuracy. Our results are consistent with findings suggesting that storage eliminates the predictive contribution of executive WM to various measures (Colom, Rebollo, Abad, & Shih, Memory & Cognition, 34, 158-171, 2006). The findings suggest that the storage component of WM, rather than the executive component, has a direct path to higher-order processing in children.
NASA Astrophysics Data System (ADS)
Protas, Bartosz
2007-11-01
In this investigation we are concerned with a family of solutions of the 2D steady-state Euler equations, known as the Prandtl-Batchelor flows, which are characterized by the presence of finite-area vortex patches embedded in an irrotational flow. We are interested in flows in the exterior of a circular cylinder with a uniform stream at infinity, since such flows are often employed as models of bluff body wakes in the high-Reynolds-number limit. The "vortex design" problem we consider consists in determining a distribution of the wall-normal velocity on parts of the cylinder boundary such that the vortex patches modelling the wake vortices will have a prescribed shape and location. Such inverse problems have applications in various areas of flow control, such as mitigation of the wake hazard. We show how this problem can be solved computationally by formulating it as a free-boundary optimization problem. In particular, we demonstrate that derivation of the adjoint system, required to compute the cost functional gradient, is facilitated by application of the shape differential calculus. Finally, solutions of the vortex design problem are illustrated with computational examples.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
Data matching for free-surface multiple attenuation by multidimensional deconvolution
NASA Astrophysics Data System (ADS)
van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald
2012-09-01
A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.
Content analysis in information flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grusho, Alexander A.; Grusho, Nick A.
The paper deals with the architecture of a content recognition system. To analyze the problem, a stochastic model of content recognition in information flows was built. We proved that under certain conditions it is possible to correctly solve part of the problem with probability 1 by viewing a finite section of the information flow. This means that a good architecture consists of two steps: the first step correctly determines certain subsets of contents, while the second step may demand much more time to reach a true decision.
NASA Technical Reports Server (NTRS)
Frisch, Harold P.
2007-01-01
Engineers who design systems using text specification documents focus their work upon the completed system to meet performance, time and budget goals. Consistency and integrity are difficult to maintain within text documents for a single complex system, and more difficult to maintain as several systems are combined into higher-level systems, maintained over decades, and evolved technically and in performance through updates. This system design approach frequently results in major changes during the system integration and test phase, and in time and budget overruns. Engineers who build system specification documents within a model-based systems environment go a step further and aggregate all of the data, interrelating it to ensure consistency and integrity. After the model is constructed, the various system specification documents are prepared, all from the same database. Because the consistency and integrity of the model are assured, the consistency and integrity of the various specification documents are ensured as well. This article attempts to define model-based systems relative to such an environment. The intent is to expose the complexity of the enabling problem by outlining what is needed, why it is needed, and how these needs are being addressed by international standards writing teams.
Uncertain programming models for portfolio selection with uncertain returns
NASA Astrophysics Data System (ADS)
Zhang, Bo; Peng, Jin; Li, Shengguo
2015-10-01
In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. At last, two numerical examples are provided to show the performance and applications of the models and algorithm.
Douglas, P; Hayes, E T; Williams, W B; Tyrrel, S F; Kinnersley, R P; Walsh, K; O'Driscoll, M; Longhurst, P J; Pollard, S J T; Drew, G H
2017-12-01
With the increase in composting as a sustainable waste management option, biological air pollution (bioaerosols) from composting facilities has become a cause of increasing concern due to its potential health impacts. Estimating community exposure to bioaerosols is problematic due to limitations in current monitoring methods. Atmospheric dispersion modelling can be used to estimate exposure concentrations; however, several issues arise from the lack of appropriate bioaerosol data to use as inputs into models, and from the complexity of the emission sources at composting facilities. This paper analyses current progress in using dispersion models for bioaerosols, examines the remaining problems and provides recommendations for future prospects in this area. A key finding is the urgent need for guidance for model users to ensure consistent bioaerosol modelling practices. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. The independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if those parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
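A toy numerical illustration (assumed parameter values, not from the study) of the bias described above: regressing runoff on error-laden rainfall attenuates the fitted slope toward zero by the classical errors-in-variables factor var(x) / (var(x) + var(error)).

```python
import numpy as np

# Toy demonstration: input error biases the fitted rainfall-runoff slope.
rng = np.random.default_rng(1)
n = 5000
beta = 0.6                                     # true runoff coefficient
rain_true = rng.gamma(2.0, 10.0, n)            # true storm rainfall
runoff = beta * rain_true + rng.normal(0, 2.0, n)

rain_meas = rain_true + rng.normal(0, 8.0, n)  # rainfall with input error

slope_true = np.polyfit(rain_true, runoff, 1)[0]
slope_meas = np.polyfit(rain_meas, runoff, 1)[0]
print(f"slope with true input:      {slope_true:.3f}")
print(f"slope with erroneous input: {slope_meas:.3f}  (biased toward zero)")
# Expected attenuation factor: var(x) / (var(x) + var(error))
print("expected factor:", (2 * 10.0**2) / (2 * 10.0**2 + 8.0**2))
```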
Approximate optimal tracking control for near-surface AUVs with wave disturbances
NASA Astrophysics Data System (ADS)
Yang, Qing; Su, Hao; Tang, Gongyou
2016-10-01
This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model with its body-fixed coordinate system is decoupled and simplified, and a nonlinear control model of AUVs in the vertical plane is given. Also, an exosystem model of wave disturbances is constructed based on the Hirom approximation formula. Secondly, the time-parameterized desired trajectory which is tracked by the AUV system is represented by the exosystem. Then, the coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback term, an expected output tracking term, a feedforward disturbance rejection term, and a nonlinear compensatory term. Furthermore, a wave disturbance observer model is designed in order to address the physical realizability problem. Simulation is carried out using the Remote Environmental Unit (REMUS) AUV model to demonstrate the effectiveness of the proposed algorithm.
Mathematical model of information process of protection of the social sector
NASA Astrophysics Data System (ADS)
Novikov, D. A.; Tsarkova, E. G.; Dubrovin, A. S.; Soloviev, A. S.
2018-03-01
In this work, a mathematical model of the information protection of society against the spread of extremist moods, through the impact on mass consciousness of information placed in the media, is investigated. The internal and external channels through which information is disseminated are identified. The optimization problem, which consists of finding the optimal strategy for using the media most effectively to disseminate antiterrorist information at minimum financial cost, is solved. A numerical solution algorithm is constructed, and the results of a computational experiment are analyzed.
Coetzee, Lezanie
2017-01-01
The complex problem of drug abuse and drug-related crimes in communities in the Western Cape province cannot be studied in isolation, but only through the system in which it is embedded. In this paper, a theoretical model to evaluate the syndemic of substance abuse and drug-related crimes within the Western Cape province of South Africa is constructed and explored. The dynamics of drug abuse and drug-related crimes within the Western Cape are simulated using STELLA software. The simulation results are consistent with the data from SACENDU and CrimeStats SA, highlighting the usefulness of such a model in designing and planning interventions to combat substance abuse and its related problems. PMID:28555161
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux is freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl
2016-05-25
For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. As a result, the reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.
Academic and social motives and drinking behavior.
Vaughan, Ellen L; Corbin, William R; Fromme, Kim
2009-12-01
This longitudinal study of 1,447 first-time college students tested separate time-varying covariate models of the relations between academic and social motives/behaviors and alcohol use and related problems from senior year of high school through the end of the second year in college. Structural equation models identified small but significant inverse relations between academic motives/behaviors and alcohol use across all time points, with relations of somewhat larger magnitude between academic motives/behaviors and alcohol-related problems across all semesters other than senior year in high school. At all time points, there were much larger positive relations between social motives/behaviors and alcohol use across all semesters, with smaller but significant relations between social motives/behaviors and alcohol-related problems. Multi-group models found considerable consistency in the relations between motives/behaviors and alcohol-related outcomes across gender, race/ethnicity, and family history of alcohol problems, although academic motives/behaviors played a stronger protective role for women, and social motives were a more robust risk factor for Caucasian and Latino students and individuals with a positive family history of alcohol problems. Implications for alcohol prevention efforts among college students are discussed. Copyright 2009 APA
An experiment with interactive planning models
NASA Technical Reports Server (NTRS)
Beville, J.; Wagner, J. H.; Zannetos, Z. S.
1970-01-01
Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.
Wolchik, S A; Wilcox, K L; Tein, J Y; Sandler, I N
2000-02-01
This study examines whether two aspects of mothering--acceptance and consistency of discipline--buffer the effect of divorce stressors on adjustment problems in 678 children, ages 8 to 15, whose families had divorced within the past 2 years. Children reported on divorce stressors; both mothers and children reported on mothering and internalizing and externalizing problems. Multiple regressions indicate that for maternal report of mothering, acceptance interacted with divorce stressors in predicting both dimensions of adjustment problems, with the pattern of findings supporting a stress-buffering effect. For child report of mothering, acceptance, consistency of discipline, and divorce stressors interacted in predicting adjustment problems. The relation between divorce stressors and internalizing and externalizing problems is stronger for children who report low acceptance and low consistency of discipline than for children who report either low acceptance and high consistency of discipline or high acceptance and low consistency of discipline. Children reporting high acceptance and high consistency of discipline have the lowest levels of adjustment problems. Implications of these results for understanding variability in children's postdivorce adjustment and interventions for divorced families are discussed.
Decisionmaking in practice: The dynamics of muddling through.
Flach, John M; Feufel, Markus A; Reynolds, Peter L; Parker, Sarah Henrickson; Kellogg, Kathryn M
2017-09-01
An alternative to conventional models that treat decisions as open-loop independent choices is presented. The alterative model is based on observations of work situations such as healthcare, where decisionmaking is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decisionmaking in practice. It is suggested that the logic of abduction in the form of an adaptive, muddling through process is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently propose the best candidates based on the reasoning results until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain (the heat control unit of a spacecraft), where the proposed methods perform significantly better than best-first and conflict-directed A* search methods. PMID:29596302
NASA Astrophysics Data System (ADS)
Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.
2018-05-01
This study reviews the problem of resource allocation in heterogeneous distributed information processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. Algorithms for solving this problem involve searching for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure for verifying the consistency of a system of linear algebraic inequalities, owing to their reducibility to flow models or to the applicability of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Osawa, Akira; Ida, Kenichi
In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., every worker has the same skill level on each machine. In the real world, however, each worker has a different skill level on each machine. For that reason, we propose a new model of SPWA in which each worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is demonstrated by numerical experiments using benchmark problems for job-shop scheduling.
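A hedged toy sketch of a GA for worker allocation in which processing time depends on a worker-machine skill level, as in the proposed model; the encoding, operators, and instance below are simplified placeholders rather than the authors' algorithm.

```python
import random
random.seed(0)

# Simplified instance: one job per machine; processing time of job j for
# worker w is base[j] / skill[w][j] (higher skill -> faster work).
base = [8.0, 6.0, 9.0, 5.0]
skill = [[1.0, 0.6, 0.9, 0.7],
         [0.5, 1.0, 0.8, 0.9],
         [0.9, 0.7, 1.0, 0.6]]
n_jobs, n_workers = len(base), len(skill)

def makespan(assign):                  # assign[j] = worker allocated to job j
    load = [0.0] * n_workers
    for j, w in enumerate(assign):
        load[w] += base[j] / skill[w][j]
    return max(load)

def tournament(pop):                   # pick the best of 3 random candidates
    return min(random.sample(pop, 3), key=makespan)

pop = [[random.randrange(n_workers) for _ in range(n_jobs)] for _ in range(30)]
for _ in range(300):
    p1, p2 = tournament(pop), tournament(pop)
    cut = random.randrange(1, n_jobs)
    child = p1[:cut] + p2[cut:]        # one-point crossover
    if random.random() < 0.2:          # mutation: reassign a random job
        child[random.randrange(n_jobs)] = random.randrange(n_workers)
    pop.sort(key=makespan)
    pop[-1] = child                    # steady-state replacement of the worst
pop.sort(key=makespan)
print("best assignment:", pop[0], "makespan:", round(makespan(pop[0]), 2))
```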
Metcalfe, Lindsay A; Harvey, Elizabeth A; Laws, Holly B
2013-08-01
Existing research suggests that there is a relation between academic/cognitive deficits and externalizing behavior in young children, but the direction of this relation is unclear. The present study tested competing models of the relation between academic/cognitive functioning and behavior problems during early childhood. Participants were 221 children (120 boys, 101 girls) who participated in a longitudinal study from age 3 to 6. A reciprocal relation (Model 3) was observed only between inattention and academic achievement; this relation remained controlling for SES and family stress. The relation between inattention and cognitive ability was consistent with Model 1 (cognitive skills predicting later inattention) with controls. For hyperactivity and aggression, there was some support for Model 2 (early behavior predicting later academic/cognitive ability), but this model was no longer supported when controlling for family functioning. These results suggest that the relation between academic achievement/cognitive ability and externalizing problems may be driven primarily by inattention. These results also suggest that this relation is evident early in development, highlighting the need for early assessment and intervention.
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty like sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks the occurrence of false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handle this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is used to compute the most probable fault candidates.
Six Concepts to Enhance School Effectiveness.
ERIC Educational Resources Information Center
Gleave, Doug
1984-01-01
An action research method, consisting of data collection, diagnosis, action planning, and evaluation, was used by the Saskatoon Schools (Canada) to facilitate school self-diagnosis and problem solving. The organizational model that helped categorize research findings on school effectiveness and innovation is explored in this article. (DF)
High Assurance Models for Secure Systems
ERIC Educational Resources Information Center
Almohri, Hussain M. J.
2013-01-01
Despite the recent advances in systems and network security, attacks on large enterprise networks consistently impose serious challenges to maintaining data privacy and software service integrity. We identify two main problems that contribute to increasing the security risk in a networked environment: (i) vulnerable servers, workstations, and…
Li, Jun; Tibshirani, Robert
2015-01-01
We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
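A hedged sketch in the spirit of the approach described above: sequencing depths are equalized by binomial down-sampling, and a rank-based (Wilcoxon-style) statistic is averaged over resamples. The function and thinning choice are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

def resampled_rank_stat(counts, labels, n_resample=20):
    """counts: (genes, samples) read counts; labels: 0/1 group per sample.
    Down-sample every column to the smallest depth, then average a
    rank-sum statistic over the resamples."""
    depths = counts.sum(axis=0)
    d_min = depths.min()
    stats = np.zeros(counts.shape[0])
    for _ in range(n_resample):
        # Binomial thinning equalizes sequencing depth across samples.
        thinned = rng.binomial(counts, d_min / depths)
        ranks = np.apply_along_axis(rankdata, 1, thinned)
        stats += ranks[:, labels == 1].sum(axis=1)
    return stats / n_resample

# Two genes, four samples (two per group); gene 0 is differentially expressed.
counts = np.asarray(rng.poisson(lam=[[20, 22, 80, 85], [30, 28, 31, 29]]))
print(resampled_rank_stat(counts, np.array([0, 0, 1, 1])))
```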
Analysis of x-ray hand images for bone age assessment
NASA Astrophysics Data System (ADS)
Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.
1990-09-01
In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and few false edges on the background. Hence the use of all available knowledge about the problem domain is needed to build a rather general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and extracting the features needed in the classification process.
The role of competing knowledge structures in undermining learning: Newton's second and third laws
NASA Astrophysics Data System (ADS)
Low, David J.; Wilson, Kate F.
2017-01-01
We investigate the development of student understanding of Newton's laws using a pre-instruction test (the Force Concept Inventory), followed by a series of post-instruction tests and interviews. While some students' somewhat naive, pre-existing models of Newton's third law are largely eliminated following a semester of teaching, we find that a particular inconsistent model is highly resilient to, and may even be strengthened by, instruction. If test items contain words that cue students to think of Newton's second law, then students are more likely to apply a "net force" approach to solving problems, even if it is inappropriate to do so. Additional instruction, reinforcing physical concepts in multiple settings and from multiple sources, appears to help students develop a more connected and consistent level of understanding. We recommend explicitly encouraging students to check their work for consistency with physical principles, along with the standard checks for dimensionality and order of magnitude, to encourage reflective and rigorous problem solving.
Behavior problems and placement change in a national child welfare sample: a prospective study.
Aarons, Gregory A; James, Sigrid; Monn, Amy R; Raghavan, Ramesh; Wells, Rebecca S; Leslie, Laurel K
2010-01-01
There is ongoing debate regarding the impact of youth behavior problems on placement change in child welfare compared to the impact of placement change on behavior problems. Existing studies provide support for both perspectives. The purpose of this study was to prospectively examine the relations of behavior problems and placement change in a nationally representative sample of youths in the National Survey of Child and Adolescent Well-Being. The sample consisted of 500 youths in the child welfare system with out-of-home placements over the course of the National Survey of Child and Adolescent Well-Being study. We used a prospective cross-lag design and path analysis to examine reciprocal effects of behavior problems and placement change, testing an overall model and models examining effects of age and gender. In the overall model, out of a total of eight path coefficients, behavior problems significantly predicted placement changes for three paths and placement change predicted behavior problems for one path. Internalizing and externalizing behavior problems at baseline predicted placement change between baseline and 18 months. Behavior problems at an older age and externalizing behavior at 18 months appear to confer an increased risk of placement change. Of note, among female subjects, placement changes later in the study predicted subsequent internalizing and externalizing behavior problems. In keeping with recommendations from a number of professional bodies, we suggest that initial and ongoing screening for internalizing and externalizing behavior problems be instituted as part of standard practice for youths entering or transitioning in the child welfare system.
Validation of a finite element method framework for cardiac mechanics applications
NASA Astrophysics Data System (ADS)
Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.
2017-11-01
Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, especially, involves several difficulties, notably due to the potential instability arising from the nonlinearities inherent to the large-deformation framework. Furthermore, the verification of the obtained simulations is a difficult task because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists of deforming a bar, whereas the second concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of those decision variables may also affect the objective functions of other levels. A two-level model is called a bilevel model and can be viewed as a Stackelberg game with a leader and a follower: the leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. Multi-level decision-making models have many real-world applications, such as government decisions, energy policy, market economics, and network design. However, there is a lack of algorithms capable of solving medium- and large-scale problems of this type. The dissertation is devoted to both theoretical research on and applications of multi-level mathematical programming models, and consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass as the renewable energy portfolio grows in the United States, and other interactions between the two policies over the next twenty years, are investigated. This problem has two main levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty in demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model within a rolling-horizon framework to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch-and-bound algorithm for bilevel linear programming problems. The total time is reduced by solving a smaller relaxation problem at each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than existing ones.
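A toy sketch of the leader-follower structure (all coefficients invented; grid search stands in for a real bilevel algorithm such as the dissertation's branch and bound): the follower solves a linear program parameterized by the leader's decision, and the leader optimizes over the follower's response.

```python
# Bilevel (Stackelberg) toy: leader picks x, follower solves an LP whose
# constraints depend on x, leader evaluates its objective on the response.
import numpy as np
from scipy.optimize import linprog

def follower_response(x):
    # follower: max y  s.t.  y <= 4 - x,  y <= 2 + 0.5*x,  y >= 0
    res = linprog(c=[-1.0],                       # linprog minimizes, so -y
                  A_ub=[[1.0], [1.0]],
                  b_ub=[4.0 - x, 2.0 + 0.5 * x],
                  bounds=[(0, None)])
    return res.x[0]

# leader: maximize x + 2*y over a grid of candidate decisions
best = max(((x, follower_response(x)) for x in np.linspace(0, 4, 81)),
           key=lambda xy: xy[0] + 2 * xy[1])
print("leader x, follower y:", best)   # optimum near x = 4/3, y = 8/3
```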
Execution Of Systems Integration Principles During Systems Engineering Design
2016-09-01
This thesis discusses integration failures observed in DOD and non-DOD systems, such as inadequate stakeholder analysis, incomplete problem space and design ... design, development, test and deployment of a system. A lifecycle structure consists of phases within a methodology or process model. There are many ... investigate design decisions without the need to commit to physical forms; "experimental investigation using a model yields design or operational ...
Reasoning, Problem Solving, and Intelligence.
1980-04-01
... designed to test the validity of their model of response choice in analogical reasoning. In the first experiment, they set out to demonstrate that ... second experiment were somewhat consistent with the prediction. The third experiment used a concept-formation design in which subjects were required to ... designed to show interrelationships between various forms of inductive reasoning. Their model fits were highly comparable to those of Rumelhart and ...
Navigating complex decision spaces: Problems and paradigms in sequential choice
Walsh, Matthew M.; Anderson, John R.
2015-01-01
To behave adaptively, we must learn from the consequences of our actions. Doing so is difficult when the consequences of an action follow a delay. This introduces the problem of temporal credit assignment. When feedback follows a sequence of decisions, how should the individual assign credit to the intermediate actions that comprise the sequence? Research in reinforcement learning provides two general solutions to this problem: model-free reinforcement learning and model-based reinforcement learning. In this review, we examine connections between stimulus-response and cognitive learning theories, habitual and goal-directed control, and model-free and model-based reinforcement learning. We then consider a range of problems related to temporal credit assignment. These include second-order conditioning and secondary reinforcers, latent learning and detour behavior, partially observable Markov decision processes, actions with distributed outcomes, and hierarchical learning. We ask whether humans and animals, when faced with these problems, behave in a manner consistent with reinforcement learning techniques. Throughout, we seek to identify neural substrates of model-free and model-based reinforcement learning. The former class of techniques is understood in terms of the neurotransmitter dopamine and its effects in the basal ganglia. The latter is understood in terms of a distributed network of regions including the prefrontal cortex, medial temporal lobes, cerebellum, and basal ganglia. Not only do reinforcement learning techniques have a natural interpretation in terms of human and animal behavior, but they also provide a useful framework for understanding neural reward valuation and action selection. PMID:23834192
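A minimal contrast of the two families on a toy task (all dynamics and rewards invented for illustration): a model-free agent caches action values from sampled experience via temporal-difference updates, while a model-based agent computes the same values by planning over a transition model.

```python
# Model-free Q-learning versus model-based value iteration on a toy
# 3-state, 2-action chain; T and R are hand-specified for illustration.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
T = np.array([[1, 2], [2, 0], [0, 1]])          # T[s, a] -> next state
R = np.array([[0.0, 1.0], [0.0, 0.0], [5.0, 0.0]])  # R[s, a] -> reward

# --- model-free: tabular Q-learning from sampled transitions ---
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
s, alpha = 0, 0.1
for _ in range(20000):
    a = rng.integers(n_actions) if rng.random() < 0.1 else Q[s].argmax()
    s2, r = T[s, a], R[s, a]
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
    s = s2

# --- model-based: value iteration over the known transition model ---
V = np.zeros(n_states)
for _ in range(200):
    V = (R + gamma * V[T]).max(axis=1)          # Bellman optimality backup

print(Q.max(axis=1))  # cached values, learned from experience
print(V)              # planned values, computed from the model
```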
NASA Astrophysics Data System (ADS)
Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.
2016-11-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
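As an illustration of the elitist mechanic only (not the paper's implementation, which couples association with orbit determination), here is a generic elitist GA on a toy observation-to-object assignment problem with an invented cost matrix:

```python
# Elitist GA sketch: evolve permutations matching observations to objects,
# always carrying the best individuals unchanged into the next generation.
import numpy as np

rng = np.random.default_rng(1)
n = 12
cost = rng.random((n, n))                  # toy association costs

def fitness(perm):                         # lower total cost = fitter
    return -cost[np.arange(n), perm].sum()

def mutate(perm):
    child = perm.copy()
    i, j = rng.choice(n, size=2, replace=False)
    child[i], child[j] = child[j], child[i]  # swap two assignments
    return child

pop = [rng.permutation(n) for _ in range(50)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                        # elitism: keep the 5 best
    pop = elite + [mutate(elite[rng.integers(5)]) for _ in range(45)]

pop.sort(key=fitness, reverse=True)
print("best total cost:", -fitness(pop[0]))
```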
Sciberras, Emma; Song, Jie Cheng; Mulraney, Melissa; Schuster, Tibor; Hiscock, Harriet
2017-09-01
We aimed to examine the association between sleep problems and parenting and sleep hygiene in children with attention-deficit/hyperactivity disorder (ADHD). Participants included 5-13-year-old children with DSM 5 defined ADHD and a parent-reported moderate-to-severe sleep problem (N = 361). Sleep was assessed using the parent-reported Children's Sleep Habits Questionnaire. Parents also completed checklists assessing sleep hygiene, parenting consistency, and parenting warmth. Linear regression established prediction models controlling for confounding variables including child age and sex, ADHD symptom severity, comorbidities, medication use, and socio-demographic factors. More consistent parenting was associated with decreased bedtime resistance (β = -0.16) and decreased sleep anxiety (β = -0.14), while greater parental warmth was associated with increased parasomnias (β = +0.18) and sleep anxiety (β = +0.13). Poorer sleep hygiene was associated with increased bedtime resistance (β = +0.20), increased daytime sleepiness (β = +0.12), and increased sleep duration problems (β = +0.13). In conclusion, sleep hygiene and parenting are important modifiable factors independently associated with sleep problems in children with ADHD. These factors should be considered in the management of sleep problems in children with ADHD.
Mantle convection and plate tectonics: toward an integrated physical and chemical theory
Tackley
2000-06-16
Plate tectonics and convection of the solid, rocky mantle are responsible for transporting heat out of Earth. However, the physics of plate tectonics is poorly understood; other planets do not exhibit it. Recent seismic evidence for convection and mixing throughout the mantle seems at odds with the chemical composition of erupted magmas requiring the presence of several chemically distinct reservoirs within the mantle. There has been rapid progress on these two problems, with the emergence of the first self-consistent models of plate tectonics and mantle convection, along with new geochemical models that may be consistent with seismic and dynamical constraints on mantle structure.
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
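One way to caricature this claim in code (a sketch with invented parameters, not the authors' model): a Bayesian learner that wrongly believes reward probabilities drift, implemented here as exponential discounting of Beta counts, fails to settle on the better option even under a max decision rule.

```python
# Max-rule Bayesian learner with an incorrect "the world drifts" belief,
# caricatured as exponential forgetting of Beta counts; all numbers invented.
import numpy as np

rng = np.random.default_rng(0)
p_true, decay, trials = 0.7, 0.8, 20000
a = np.ones(2); b = np.ones(2)           # Beta counts per option
choices = np.zeros(trials, dtype=int)

for t in range(trials):
    choices[t] = int(np.argmax(a / (a + b)))   # max rule, not sampling
    reward = rng.random() < (p_true if choices[t] == 0 else 1 - p_true)
    a *= decay; b *= decay               # forgetting encodes the wrong belief
    a[choices[t]] += reward
    b[choices[t]] += 1 - reward

print("P(choose better option):", (choices == 0).mean())
# With decay = 1 (correct static-world belief) the same learner converges
# to always choosing the better option instead.
```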
Redshift space clustering of galaxies and cold dark matter model
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies are directly compared, over the entire range of scales, with the corresponding redshift-space distributions from large-scale computer simulations of cold dark matter (CDM) models. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving the simultaneous linear equations for the model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. A detailed derivation and numerical examples implementing the newly developed approach for symmetric Toeplitz and quadratic pencil (including the mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved in situations that have either unique or infinitely many solutions.
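A concrete toy instance of the linear-equation formulation for the symmetric Toeplitz case (a sketch, not the paper's code): writing T v = λ v as a linear system in the first-row entries of T and solving it with SVD-based least squares.

```python
# Reconstruct a symmetric Toeplitz matrix from one prescribed eigenpair
# (lam, v): T v = lam*v is linear in the first-row entries t, A t = lam*v.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 6
t_true = rng.normal(size=n)
lam_all, vecs = np.linalg.eigh(toeplitz(t_true))
lam, v = lam_all[0], vecs[:, 0]          # the prescribed eigenpair

# A[i, k] = sum of v[j] over all j with |i - j| == k, so (A t)_i = (T v)_i.
A = np.zeros((n, n))
for i in range(n):
    A[i, 0] = v[i]
    for k in range(1, n):
        if i - k >= 0: A[i, k] += v[i - k]
        if i + k < n:  A[i, k] += v[i + k]

# Least squares via SVD (numpy's lstsq); extra parameter constraints could
# be appended to this system in the same way.
t_hat, *_ = np.linalg.lstsq(A, lam * v, rcond=None)
print(np.allclose(toeplitz(t_hat) @ v, lam * v))  # eigenpair reproduced
```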
Sosu, Edward M; Schmidt, Peter
2017-01-01
This study investigated the mechanisms by which experiences of poverty influence the trajectory of conduct problems among preschool children. Drawing on two theoretical perspectives, we focused on family stress (stress and harsh discipline) and investment variables (educational investment, nutrition, and cognitive ability) as key mediators. Structural equation modeling techniques with prospective longitudinal data from the Growing Up in Scotland survey ( N = 3,375) were used. Economic deprivation measured around the first birthday of the sample children had both direct and indirect effects on conduct problems across time (ages 4, 5, and 6). In line with the family stress hypothesis, higher levels of childhood poverty predicted conduct problems across time through increased parental stress and punitive discipline. Consistent with the investment model, childhood deprivation was associated with higher levels of conduct problems via educational investment and cognitive ability. The study extends previous knowledge on the mechanisms of this effect by demonstrating that cognitive ability is a key mediator between poverty and the trajectory of childhood conduct problems. This suggests that interventions aimed at reducing child conduct problems should be expanded to include factors that compromise parenting as well as improve child cognitive ability.
Cheng, Yen-Pi; Birditt, Kira; Zarit, Steven
2012-01-01
Objectives. Middle-aged parents’ well-being may be tied to successes and failures of grown children. Moreover, most parents have more than one child, but studies have not considered how different children's successes and failures may be associated with parental well-being. Methods. Middle-aged adults (aged 40–60; N = 633) reported on each of their grown children (n = 1,384) and rated their own well-being. Participants indicated problems each child had experienced in the past two years, rated their children's successes, as well as positive and negative relationship qualities. Results. Analyses compared an exposure model (i.e., having one grown child with a problem or deemed successful) and a cumulative model (i.e., total problems or successes in the family). Consistent with the exposure and cumulative models, having one child with problems predicted poorer parental well-being and the more problems in the family, the worse parental well-being. Having one successful child did not predict well-being, but multiple grown children with higher total success in the family predicted enhanced parental well-being. Relationship qualities partially explained associations between children's successes and parental well-being. Discussion. Discussion focuses on benefits and detriments parents derive from how grown progeny turn out and particularly the implications of grown children's problems. PMID:21856677
Criteria for assessing problem solving and decision making in complex environments
NASA Technical Reports Server (NTRS)
Orasanu, Judith
1993-01-01
Training crews to cope with unanticipated problems in high-risk, high-stress environments requires models of effective problem solving and decision making. Existing decision theories use the criteria of logical consistency and mathematical optimality to evaluate decision quality. While these approaches are useful under some circumstances, the assumptions underlying these models frequently are not met in dynamic time-pressured operational environments. Also, applying formal decision models is both labor and time intensive, a luxury often lacking in operational environments. Alternate approaches and criteria are needed. Given that operational problem solving and decision making are embedded in ongoing tasks, evaluation criteria must address the relation between those activities and satisfaction of broader task goals. Effectiveness and efficiency become relevant for judging reasoning performance in operational environments. New questions must be addressed: What is the relation between the quality of decisions and overall performance by crews engaged in critical high risk tasks? Are different strategies most effective for different types of decisions? How can various decision types be characterized? A preliminary model of decision types found in air transport environments will be described along with a preliminary performance model based on an analysis of 30 flight crews. The performance analysis examined behaviors that distinguish more and less effective crews (based on performance errors). Implications for training and system design will be discussed.
Burt, S. A.; Klump, K. L.
2018-01-01
Background Prior research has suggested that, consistent with the diathesis–stress model of gene–environment interaction (G × E), parent–child conflict activates genetic influences on antisocial/externalizing behaviors during adolescence. It remains unclear, however, whether this model is also important during childhood, or whether the moderation of child conduct problems by negative/conflictive parenting is better characterized as a bioecological interaction, in which environmental influences are enhanced in the presence of environmental risk whereas genetic influences are expressed most strongly in their absence. The current study sought to distinguish between these possibilities, evaluating how the parent–child relationship moderates the etiology of childhood-onset conduct problems. Method We conducted a series of ‘latent G by measured E’ interaction analyses, in which a measured environmental variable was allowed to moderate both genetic and environmental influences on child conduct problems. Participants included 500 child twin pairs from the Michigan State University Twin Registry (MSUTR). Results Shared environmental influences on conduct problems were found to be several-fold larger in those with high levels of parent–child conflict as compared with those with low levels. Genetic influences, by contrast, were proportionally more influential at lower levels of conflict than at higher levels. Conclusions Our findings suggest that, although the diathesis–stress form of G × E appears to underlie the relationship between parenting and conduct problems during adolescence, this pattern of moderation does not extend to childhood. Instead, results were more consistent with the bioecological form of G × E which postulates that, in some cases, genetic influences may be most fully manifested in the absence of environmental risk. PMID:23746066
ERIC Educational Resources Information Center
Krahenbuhl, Kevin S.
2017-01-01
The flipped classroom is growing significantly as a model of learning in higher education. However, there are ample problems with the research on flipped classrooms, including where success is often defined by student perceptions and a lack of consistent, empirical research supporting improved academic learning. This quasi-experimental study…
Contemporary Inventional Theory: An Aristotelian Model.
ERIC Educational Resources Information Center
Skopec, Eric W.
Contemporary rhetoricians are concerned with the re-examination of classical doctrines in the hope of finding solutions to current problems. In this study, the author presents a methodological perspective consistent with current interests, by re-examining the assumptions that underlie each classical precept. He outlines an inventional system based…
Simulating the evolution of glyphosate resistance in grains farming in northern Australia.
Thornby, David F; Walker, Steve R
2009-09-01
The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of using practices to control the resistant population once it evolves, and/or instituting preventative measures before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approx. 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies.
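A drastically simplified sketch of the underlying population-genetics dynamic (single locus, dominant resistance, seed-bank carryover; all parameter values are invented and the timescale is not calibrated to the paper's 15-year result):

```python
# Toy resistance-evolution loop: each season a fraction of the seed bank
# germinates, glyphosate kills most susceptible plants, survivors reseed.
q_bank = 1e-6        # resistance allele frequency in the seed bank
bank = 1.0           # seed-bank size (arbitrary units)
g = 0.4              # fraction of the bank germinating each season
s_s, s_r = 0.02, 0.95   # survival under glyphosate: susceptible, resistant
fecundity = 10
soil_decay = 0.5     # survival of non-germinated seeds in the soil

for year in range(1, 31):
    q = q_bank
    rr, rs, ss = q * q, 2 * q * (1 - q), (1 - q) ** 2  # Hardy-Weinberg
    cohort = g * bank
    survivors = cohort * (rr * s_r + rs * s_r + ss * s_s)  # dominance
    q_new = cohort * (rr * s_r + 0.5 * rs * s_r) / survivors
    new_seeds = survivors * fecundity
    old = (1 - g) * bank * soil_decay
    q_bank = (old * q_bank + new_seeds * q_new) / (old + new_seeds)
    bank = old + new_seeds
    if q_bank > 0.1:
        print(f"resistance widespread after ~{year} seasons")
        break
```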
NASA Astrophysics Data System (ADS)
Pandey, S.; Vesselinov, V. V.; O'Malley, D.; Karra, S.; Hansen, S. K.
2016-12-01
Models and data are used to characterize the extent of contamination and remediation, both of which are dependent upon the complex interplay of processes ranging from geochemical reactions, microbial metabolism, and pore-scale mixing to heterogeneous flow and external forcings. Characterization is fraught with important uncertainties related to the model itself (e.g. conceptualization, model implementation, parameter values) and the data used for model calibration (e.g. sparsity, measurement errors). This research consists of two primary components: (1) developing numerical models that incorporate the complex hydrogeology and biogeochemistry that drive groundwater contamination and remediation; (2) utilizing novel techniques for data/model-based analyses (such as parameter calibration and uncertainty quantification) to aid in decision support for optimal uncertainty reduction related to characterization and remediation of contaminated sites. The reactive transport models are developed using PFLOTRAN and are capable of simulating a wide range of biogeochemical and hydrologic conditions that affect the migration and remediation of groundwater contaminants under diverse field conditions. Data/model-based analyses are achieved using MADS, which utilizes Bayesian methods and Information Gap theory to address the data/model uncertainties discussed above. We also use these tools to evaluate different models, which vary in complexity, in order to weigh and rank models based on model accuracy (in representation of existing observations), model parsimony (everything else being equal, models with a smaller number of parameters are preferred), and model robustness (related to model predictions of unknown future states). These analyses are carried out on synthetic problems, but are directly related to real-world problems; for example, the modeled processes and data inputs are consistent with the conditions at the Los Alamos National Laboratory contamination sites (RDX and chromium).
Observation of quantum criticality with ultracold atoms in optical lattices
NASA Astrophysics Data System (ADS)
Zhang, Xibo
As biological problems become more complex and data grow at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two fields, comparative genomics and epigenomics, and employs a variety of computational techniques to address them. One fundamental question in the study of chromosome evolution is whether rearrangement breakpoints occur at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and present analyses that support the more recently proposed fragile breakage model over the conventional random breakage model of chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architecture, comparative genomics, and evolutionary genomics. Previous synteny block reconstruction algorithms could not be scaled to the large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and the evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on the A-Bruijn graph framework that overcomes these shortcomings. In epigenome sequencing, a sample may contain a mixture of epigenomes, and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes and metagenomic sequencing, share a similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both combinatorial and statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach, and apply the model to the detection of allele-specific methylations.
Stabilizing l1-norm prediction models by supervised feature grouping.
Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha
2016-02-01
Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But in the presence of correlated features, these methods select features that change considerably with small changes in data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, assisting clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
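The instability being addressed is easy to reproduce (a sketch using scikit-learn; the paper's own method additionally learns the feature grouping): with two nearly identical features, the Lasso support flips across bootstrap resamples.

```python
# Demonstrate Lasso selection instability under feature correlation by
# counting which supports are selected across bootstrap resamples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
z = rng.normal(size=n)
X = rng.normal(size=(n, p))
X[:, 0] = z + 0.05 * rng.normal(size=n)   # features 0 and 1 nearly identical
X[:, 1] = z + 0.05 * rng.normal(size=n)
y = 2.0 * z + rng.normal(size=n)

supports = []
for _ in range(100):                       # bootstrap resamples
    idx = rng.integers(n, size=n)
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    supports.append(tuple(np.flatnonzero(np.abs(coef) > 1e-8)))

freq = {s: supports.count(s) / len(supports) for s in set(supports)}
print(freq)   # selection flips between {0}, {1}, and {0, 1}
```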
A Categorical Framework for Model Classification in the Geosciences
NASA Astrophysics Data System (ADS)
Hauhs, Michael; Trancón y Widemann, Baltasar; Lange, Holger
2016-04-01
Models have a mixed record of success in the geosciences. In meteorology, model development and implementation have been among the first and most successful examples of triggering computer technology in science. On the other hand, notorious problems such as the 'equifinality issue' in hydrology lead to a rather mixed reputation of models in other areas. The most successful models in geosciences are applications of dynamic systems theory to non-living systems or phenomena. Thus, we start from the hypothesis that the success of model applications relates to the influence of life on the phenomenon under study. We thus focus on the (formal) representation of life in models. The aim is to investigate whether disappointment in model performance is due to system properties such as heterogeneity and historicity of ecosystems, or rather reflects an abstraction and formalisation problem at a fundamental level. As a formal framework for this investigation, we use category theory as applied in computer science to specify behaviour at an interface. Its methods have been developed for translating and comparing formal structures among different application areas and seem highly suited for a classification of the current "model zoo" in the geosciences. The approach is rather abstract, with a high degree of generality but a low level of expressibility. Here, category theory will be employed to check the consistency of assumptions about life in different models. It will be shown that it is sufficient to distinguish just four logical cases to check for consistency of model content. All four cases can be formalised as variants of coalgebra-algebra homomorphisms. It can be demonstrated that transitions between the four variants affect the relevant observations (time series or spatial maps), the formalisms used (equations, decision trees) and the test criteria of success (prediction, classification) of the resulting model types. We will present examples from hydrology and ecology in which a transport problem is combined with the strategic behaviour of living agents. The living and the non-living aspects of the model belong to two different model types. If a model is built to combine strategic behaviour with the constraint of mass conservation, some critical assumptions appear as inevitable, or models may become logically inconsistent. The categorical assessment and the examples demonstrate that many models at ecosystem level, where both living and non-living aspects inevitably meet, pose so far unsolved, fundamental problems. Today, these are often pragmatically resolved at the level of software engineering. Some suggestions will be given as to how model documentation and benchmarking may help clarify and resolve some of these issues.
Gavriel-Fried, Belle; Rabayov, Tal
2017-01-01
Aims: People with gambling as well as substance use problems who are exposed to public stigmatization may internalize and apply it to themselves through a mechanism known as self-stigma. This study applied the Progressive Model for Self-Stigma, which consists of four sequential, interrelated stages (awareness, agreement, application and harm), to three groups of individuals with gambling, alcohol and other substance use problems. It explored whether the two guiding assumptions of this model (each stage is a precondition for the following stage, in a trickle-down fashion, and correlations between proximal stages should be larger than correlations between more distant stages) would differentiate people with gambling problems from those with alcohol and other substance use problems in terms of their patterns of self-stigma and the stages of the model. Method: 37 individuals with gambling problems, 60 with alcohol problems and 51 with drug problems who applied for treatment in rehabilitation centers in Israel in 2015–2016 were recruited. They completed the Self-Stigma of Mental Illness Scale-Short Form, adapted by changing the term "mental health" to gambling, alcohol or drugs, and the DSM-5 diagnostic criteria for gambling, alcohol or drug disorder. Results: The assumptions of the model were broadly confirmed: a repeated-measures ANCOVA revealed that in all three groups there was a difference between the first two stages (aware and agree) and the latter stages (apply and harm). In addition, the gambling group differed from the drug use and alcohol groups at the awareness stage: individuals with gambling problems were less likely to be aware of stigma than people with substance use or alcohol problems. Conclusion: The internalization of stigma among individuals with gambling problems tends to work in a similar way as for those with alcohol or drug problems. The differences between the gambling group and the alcohol and other substance groups at the awareness stage may suggest that public stigma with regard to any given addictive disorder may be a function of the type of addiction (substance versus behavioral). PMID:28649212
NASA Astrophysics Data System (ADS)
Mercan, Fatih C.
This study examines epistemological beliefs of physics undergraduate and graduate students and faculty in the context of solving a well-structured and an ill-structured problem. The data collection consisted of a think-aloud problem-solving session followed by a semi-structured interview conducted with 50 participants, 10 at each of the freshman, senior, master's, PhD, and faculty levels. The data analysis involved (a) identification of the range of beliefs about knowledge in the context of the well-structured and the ill-structured problem solving, (b) construction of a framework that unites the individual beliefs identified in each problem context under the same conceptual base, and (c) comparisons of the problem contexts and expertise level groups using the framework. The results of the comparison of the contexts of the well-structured and the ill-structured problem showed that (a) authoritative beliefs about knowledge were expressed in the well-structured problem context, (b) relativistic and religious beliefs about knowledge were expressed in the ill-structured problem context, and (c) rational, empirical, and modeling beliefs about knowledge were expressed in both problem contexts. The results of the comparison of the expertise level groups showed that (a) undergraduates expressed authoritative beliefs about knowledge more than graduate students did, and faculty did not express authoritative beliefs; (b) faculty expressed modeling beliefs about knowledge more than graduate students did, and undergraduates did not express modeling beliefs; and (c) there were no differences in rational, empirical, experiential, relativistic, and religious beliefs about knowledge among the expertise level groups. As the expertise level increased, the number of participants who expressed authoritative beliefs about knowledge decreased and the number of participants who expressed modeling-based beliefs about knowledge increased. The results of this study implied that existing developmental and cognitive models of personal epistemology can explain personal epistemology in physics to a limited extent; however, these models cannot adequately account for the variation of epistemological beliefs across problem contexts. Modeling beliefs about knowledge emerged as a part of personal epistemology and an indicator of epistemological sophistication, which do not develop until after extensive experience in the field. Based on these findings, the researcher recommended providing students with opportunities to practice model construction.
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
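The selection mechanism being modeled can be illustrated with a deliberately simple simulation (all numbers invented; the paper's model is a full Bayesian design-by-treatment selection model for networks): if publication probability rises with a study's significance, a naive precision-weighted average of published studies is biased.

```python
# Toy publication-bias simulation: suppressing imprecise/non-significant
# studies shifts the pooled estimate away from the true effect.
import numpy as np

rng = np.random.default_rng(0)
true_effect, n_studies = 0.2, 2000
se = rng.uniform(0.05, 0.5, size=n_studies)   # study standard errors
est = rng.normal(true_effect, se)             # study effect estimates

z = est / se                                   # z-scores
p_publish = 1 / (1 + np.exp(-(2 * z - 1)))     # publication favors large z
published = rng.random(n_studies) < p_publish

w = 1 / se**2                                  # inverse-variance weights
pooled_all = np.sum(w * est) / np.sum(w)
pooled_pub = np.sum(w[published] * est[published]) / np.sum(w[published])
print(f"all studies: {pooled_all:.3f}, published only: {pooled_pub:.3f}")
```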
Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons
Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram
2013-01-01
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity. PMID:23592970
Constraints on Cosmology and Gravity from the Growth of X-ray Luminous Galaxy Clusters
NASA Astrophysics Data System (ADS)
Mantz, Adam; Allen, S. W.; Rapetti, D.; Ebeling, H.; Drlica-Wagner, A.
2010-03-01
I will present simultaneous constraints on galaxy cluster X-ray scaling relations and models of cosmology and gravity obtained from observations of the growth of massive clusters. The data set consists of 238 flux-selected clusters at redshifts z ≤ 0.5 drawn from the ROSAT All-Sky Survey, and incorporates extensive Chandra follow-up observations. Our results on the scaling relations are consistent with excess heating of the intracluster medium, although the evolution of the relations remains consistent with the predictions of simple gravitational collapse models. For spatially flat, constant-w cosmological models, the cluster data yield Ωm = 0.23 ± 0.04, σ8 = 0.82 ± 0.05, and w = -1.01 ± 0.20, including conservative allowances for systematic uncertainties. Our results are consistent and competitive with a variety of independent cosmological data. In evolving-w models, marginalizing over transition redshifts in the range 0.05-1, the combination of the growth-of-structure data with the cosmic microwave background, supernovae, cluster gas mass fractions and baryon acoustic oscillations constrains the dark energy equation of state at late and early times to be respectively w_0 = -0.88 ± 0.21 and w_et = -1.05 (+0.20, -0.36). Applying this combination of data to the problem of determining fundamental neutrino properties, we place an upper limit on the species-summed neutrino mass of 0.33 eV (95% CL) and constrain the effective number of relativistic species to 3.4 ± 0.6. In addition to dark energy and related problems, such data can be used to test the predictions of General Relativity. Introducing the standard Peebles/Linder parametrization of the linear growth rate, we use the cluster data to constrain the growth of structure, independent of the expansion of the Universe. Our analysis provides a tight constraint on the combination γ(σ8/0.8)^6.8 = 0.55 (+0.13, -0.10), and is simultaneously consistent with the predictions of relativity (γ = 0.55) and the cosmological-constant expansion model. This work was funded by NASA, the U.S. Department of Energy, and Stanford University.
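For reference, the Peebles/Linder parametrization mentioned above takes the linear growth rate to be f(a) = Ωm(a)^γ, with γ ≈ 0.55 reproducing general relativity; a minimal sketch, assuming a flat ΛCDM background with the abstract's Ωm:

```python
# Growth-rate parametrization f(a) = Omega_m(a)^gamma, with the linear
# growth factor D(a) obtained by integrating f dln(a).
import numpy as np

Om0, gamma = 0.23, 0.55
a = np.linspace(1e-3, 1.0, 2000)
Om_a = Om0 / (Om0 + (1 - Om0) * a**3)   # matter fraction in flat LCDM
f = Om_a**gamma                          # growth rate f = dlnD/dlna

lna = np.log(a)
lnD = np.concatenate([[0.0],
                      np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(lna))])
D = np.exp(lnD - lnD[-1])                # normalize so D(a=1) = 1
print("D(z=0.5) =", np.interp(1 / 1.5, a, D))
```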
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
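The inference-over-inversion point can be sketched for a linear source-receptor model C = Aq + noise (the matrix, noise level and prior width below are invented): a Gaussian prior on the emission rates yields a posterior mean that stays stable where plain least squares amplifies noise, together with an uncertainty estimate.

```python
# Bayesian emission-rate inference versus plain inversion for C = A q + e.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 4
A = rng.random((n_sensors, n_sources))
A[:, 3] = A[:, 2] + 1e-3 * rng.random(n_sensors)  # nearly collinear sources
q_true = np.array([1.0, 2.0, 0.5, 1.5])
sigma = 0.05
C = A @ q_true + sigma * rng.normal(size=n_sensors)

# Plain inversion (least squares): ill-conditioned, noise-amplifying.
q_ls, *_ = np.linalg.lstsq(A, C, rcond=None)

# Gaussian prior q ~ N(0, tau^2 I) gives a closed-form Gaussian posterior.
tau = 2.0
Sinv = A.T @ A / sigma**2 + np.eye(n_sources) / tau**2
q_post = np.linalg.solve(Sinv, A.T @ C / sigma**2)   # posterior mean
post_sd = np.sqrt(np.diag(np.linalg.inv(Sinv)))      # posterior std. dev.

print("least squares: ", np.round(q_ls, 2))
print("posterior mean:", np.round(q_post, 2), "+/-", np.round(post_sd, 2))
```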
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
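A one-dimensional demonstration of the cyclic-versus-linear distinction (docking codes apply the same idea in three dimensions): multiplying unpadded DFTs yields a circular correlation, while zero-padding both signals to at least N + M - 1 samples recovers the linear correlation.

```python
# Circular versus linear correlation via the FFT, with zero-padding.
import numpy as np

rng = np.random.default_rng(0)
f, g = rng.random(16), rng.random(16)

# Unpadded DFT product: cyclic correlation, with wrap-around error.
circular = np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(g))).real

L = len(f) + len(g) - 1                 # padding that prevents wrap-around
F, G = np.fft.fft(f, n=L), np.fft.fft(g, n=L)
linear_fft = np.fft.ifft(F * np.conj(G)).real

# Reorder so lags run from -(M-1) to N-1, matching np.correlate 'full'.
reordered = np.concatenate([linear_fft[len(f):], linear_fft[:len(f)]])
linear_direct = np.correlate(f, g, mode="full")

print(np.allclose(reordered, linear_direct))        # True: padded = linear
print(np.allclose(circular, linear_fft[:len(f)]))   # False: wrap-around
```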
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merzari, E.; Shemon, E. R.; Yu, Y. Q.
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
The Role of Independent V&V in Upstream Software Development Processes
NASA Technical Reports Server (NTRS)
Easterbrook, Steve
1996-01-01
This paper describes the role of Verification and Validation (V&V) during the requirements and high level design processes, and in particular the role of Independent V&V (IV&V). The job of IV&V during these phases is to ensure that the requirements are complete, consistent and valid, and to ensure that the high level design meets the requirements. This contrasts with the role of Quality Assurance (QA), which ensures that appropriate standards and process models are defined and applied. This paper describes the current state of practice for IV&V, concentrating on the process model used in NASA projects. We describe a case study, showing the processes by which problem reporting and tracking takes place, and how IV&V feeds into decision making by the development team. We then describe the problems faced in implementing IV&V. We conclude that despite a well defined process model, and tools to support it, IV&V is still beset by communication and coordination problems.
Effect of differentiation of self on adolescent risk behavior: test of the theoretical model.
Knauth, Donna G; Skowron, Elizabeth A; Escobar, Melicia
2006-01-01
Innovative theoretical models are needed to explain the occurrence of high-risk sexual behaviors, alcohol and other-drug (AOD) use, and academic engagement among ethnically diverse, inner-city adolescents. The aim of this study was to test the credibility of a theoretical model based on the Bowen family systems theory to explain adolescent risk behavior. Specifically tested was the relationship between the predictor variables of differentiation of self, chronic anxiety, and social problem solving and the dependent variables of high-risk sexual behaviors, AOD use, and academic engagement. An ex post facto cross-sectional design was used to test the usefulness of the theoretical model. Data were collected from 161 racially/ethnically diverse, inner-city high school students, 14 to 19 years of age. Participants completed self-report written questionnaires, including the Differentiation of Self Inventory, State-Trait Anxiety Inventory, Social Problem Solving for Adolescents, Drug Involvement Scale for Adolescents, and the Sexual Behavior Questionnaire. Consistent with the model, higher levels of differentiation of self related to lower levels of chronic anxiety (p < .001) and higher levels of social problem solving (p < .01). Higher chronic anxiety was related to lower social problem solving (p < .001). A test of mediation showed that chronic anxiety mediates the relationship between differentiation of self and social problem solving (p < .001), indicating that differentiation influences social problem solving through chronic anxiety. Higher levels of social problem solving were related to less drug use (p < .05), fewer high-risk sexual behaviors (p < .01), and an increase in academic engagement (p < .01). Findings support the theoretical model's credibility and provide evidence that differentiation of self is an important cognitive factor that enables adolescents to manage chronic anxiety and motivates them to use effective problem solving, resulting in less involvement in health-compromising behaviors and increased academic engagement.
Ivanova, Masha Y.; Achenbach, Thomas M.; Rescorla, Leslie A.; Harder, Valerie S.; Ang, Rebecca P.; Bilenberg, Niels; Bjarnadottir, Gudrun; Capron, Christiane; De Pauw, Sarah S.W.; Dias, Pedro; Dobrean, Anca; Doepfner, Manfred; Duyme, Michele; Eapen, Valsamma; Erol, Nese; Esmaeili, Elaheh Mohammad; Ezpeleta, Lourdes; Frigerio, Alessandra; Gonçalves, Miguel M.; Gudmundsson, Halldor S.; Jeng, Suh-Fang; Jetishi, Pranvera; Jusiene, Roma; Kim, Young-Ah; Kristensen, Solvejg; Lecannelier, Felipe; Leung, Patrick W.L.; Liu, Jianghong; Montirosso, Rosario; Oh, Kyung Ja; Plueck, Julia; Pomalima, Rolando; Shahini, Mimoza; Silva, Jaime R.; Simsek, Zynep; Sourander, Andre; Valverde, Jose; Van Leeuwen, Karla G.; Woo, Bernardine S.C.; Wu, Yen-Tzu; Zubrick, Stephen R.; Verhulst, Frank C.
2014-01-01
Objective To test the fit of a seven-syndrome model to ratings of preschoolers' problems by parents in very diverse societies. Method Parents of 19,106 children 18 to 71 months of age from 23 societies in Asia, Australasia, Europe, the Middle East, and South America completed the Child Behavior Checklist for Ages 1.5–5 (CBCL/1.5–5). Confirmatory factor analyses were used to test the seven-syndrome model separately for each society. Results The primary model fit index, the root mean square error of approximation (RMSEA), indicated acceptable to good fit for each society. Although a six-syndrome model combining the Emotionally Reactive and Anxious/Depressed syndromes also fit the data for nine societies, it fit less well than the seven-syndrome model for seven of the nine societies. Other fit indices yielded less consistent results than the RMSEA. Conclusions The seven-syndrome model provides one way to capture patterns of children's problems that are manifested in ratings by parents from many societies. Clinicians working with preschoolers from these societies can thus assess and describe parents' ratings of behavioral, emotional, and social problems in terms of the seven syndromes. The results illustrate possibilities for culture–general taxonomic constructs of preschool psychopathology. Problems not captured by the CBCL/1.5–5 may form additional syndromes, and other syndrome models may also fit the data. PMID:21093771
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.
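To illustrate the compose-from-local-solutions idea (not the paper's actual pump model or algorithm), here is a toy sketch in which each local submodel projects its own health state to end of life in parallel, and the global end of life is composed from the local answers. All submodel names, health values, and degradation rates are invented:

```python
from concurrent.futures import ProcessPoolExecutor

# Each local submodel: current health estimate and a degradation rate per step.
SUBMODELS = {
    "impeller": {"health": 0.92, "decay": 0.004},
    "bearing":  {"health": 0.85, "decay": 0.002},
}
FAILURE_THRESHOLD = 0.5

def local_prognosis(item):
    """Estimate then predict: project a local health state to end of life."""
    name, m = item
    steps, health = 0, m["health"]
    while health > FAILURE_THRESHOLD:
        health -= m["decay"]   # local prediction step
        steps += 1
    return name, steps

if __name__ == "__main__":
    # Local subproblems are independent, so they can run in parallel.
    with ProcessPoolExecutor() as pool:
        local_eols = dict(pool.map(local_prognosis, SUBMODELS.items()))
    # The global solution composes the local ones: earliest local failure.
    print(local_eols, "system EOL =", min(local_eols.values()))
```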
Engineering tradeoff problems viewed as multiple objective optimizations and the VODCA methodology
NASA Astrophysics Data System (ADS)
Morgan, T. W.; Thurgood, R. L.
1984-05-01
This paper summarizes a rational model for making engineering tradeoff decisions. The model is a hybrid from the fields of social welfare economics, communications, and operations research. A solution methodology (Vector Optimization Decision Convergence Algorithm - VODCA) firmly grounded in the economic model is developed both conceptually and mathematically. The primary objective for developing the VODCA methodology was to improve the process for extracting relative value information about the objectives from the appropriate decision makers. This objective was accomplished by employing data filtering techniques to increase the consistency of the relative value information and decrease the amount of information required. VODCA is applied to a simplified hypothetical tradeoff decision problem. Possible uses of multiple objective analysis concepts and the VODCA methodology in product-line development and market research are discussed.
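VODCA itself is not reproduced here, but the underlying idea of folding elicited relative values into a vector optimization can be sketched with a simple weighted-sum scalarization. The designs, objective values, and weights below are all invented:

```python
import numpy as np

# Candidate designs scored on two objectives (both minimized); values invented.
designs = {"A": (3.0, 9.0), "B": (5.0, 5.0), "C": (9.0, 2.0)}

def best_design(weights):
    """Weighted-sum scalarization: pick the design minimizing w . f(x)."""
    w = np.asarray(weights) / np.sum(weights)
    return min(designs, key=lambda d: float(w @ np.asarray(designs[d])))

# Sweep relative-value weights, mimicking iteratively refined preferences.
for w1 in (0.2, 0.5, 0.8):
    print(f"weights ({w1:.1f}, {1 - w1:.1f}) -> choose {best_design((w1, 1 - w1))}")
```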
NASA Astrophysics Data System (ADS)
Dewi, N. R.; Arini, F. Y.
2018-03-01
The main purpose of this research is developing and produces a Calculus textbook model that supported with GeoGebra. This book was designed to enhancing students’ mathematical problem solving and mathematical representation. There were three stages in this research i.e. define, design, and develop. The textbooks consisted of 6 chapters which each chapter contains introduction, core materials and include examples and exercises. The textbook developed phase begins with the early stages of designed the book (draft 1) which then validated by experts. Revision of draft 1 produced draft 2. The data were analyzed with descriptive statistics. The analysis showed that the Calculus textbook model that supported with GeoGebra, valid and fill up the criteria of practicality.
Continuous-time system identification of a smoking cessation intervention
NASA Astrophysics Data System (ADS)
Timms, Kevin P.; Rivera, Daniel E.; Collins, Linda M.; Piper, Megan E.
2014-07-01
Cigarette smoking is a major global public health issue and the leading cause of preventable death in the United States. Toward a goal of designing better smoking cessation treatments, system identification techniques are applied to intervention data to describe smoking cessation as a process of behaviour change. System identification problems that draw from two modelling paradigms in quantitative psychology (statistical mediation and self-regulation) are considered, consisting of a series of continuous-time estimation problems. A continuous-time dynamic modelling approach is employed to describe the response of craving and smoking rates during a quit attempt, as captured in data from a smoking cessation clinical trial. The use of continuous-time models provides the benefits of parsimony, ease of interpretation, and the opportunity to work with uneven or missing data.
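A minimal sketch of the continuous-time identification idea: fit a first-order differential equation model to synthetic craving data by simulating the ODE and minimizing the residuals. The first-order model structure and all numbers are assumptions for illustration, not the study's mediation or self-regulation models:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 30, 16)   # days since quit date (uneven spacing also works)
u = 1.0                          # step input: the quit attempt

def simulate(params, t):
    """First-order continuous-time model: tau * dy/dt = -y + K*u."""
    K, tau, y0 = params
    sol = solve_ivp(lambda s, y: (-y + K * u) / tau, (t[0], t[-1]), [y0], t_eval=t)
    return sol.y[0]

true = (2.0, 6.0, 8.0)           # invented "true" gain, time constant, initial level
y_obs = simulate(true, t_obs) + rng.normal(scale=0.2, size=t_obs.size)

fit = least_squares(lambda p: simulate(p, t_obs) - y_obs, x0=(1.0, 3.0, 5.0))
print("estimated (K, tau, y0):", np.round(fit.x, 2))
```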
NASA Astrophysics Data System (ADS)
Pieczyńska-Kozłowska, Joanna M.
2015-12-01
The design process in geotechnical engineering requires the most accurate mapping of soil. The difficulty lies in the spatial variability of soil parameters, which has been investigated by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring information on soil variability for modeling from the cone penetration test (CPT). The first method has been used in geotechnical engineering, but the second one has not been associated with geotechnics so far. Both methods are applied to a case study in which the parameters of variability are estimated. In the long term, knowledge of the variability of parameters allows more effective estimation of, for example, the probability of failure of bearing capacity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorikov, A. F., E-mail: afshorikov@mail.ru
This article discusses a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these is described by corresponding nonlinear discrete-time recurrent vector equations, and its control system consists of two levels: a basic, dominant level (control level I) and a subordinate level (control level II). The two levels have different criteria of functioning and are united by informational and control connections defined in advance. In this paper we study the problem of optimizing the guaranteed result of program control of the final state of a regional social and economic system in the presence of risks. For this problem we propose an economic-mathematical model of two-level hierarchical minimax program control of the final state of a regional social and economic system in the presence of risks, and a general scheme for its solution.
Davies, Patrick T.; Hentges, Rochelle F.; Coe, Jesse L.; Martin, Meredith J.; Sturge-Apple, Melissa L.; Cummings, E. Mark
2016-01-01
This multi-study paper examined the relative strength of mediational pathways involving hostile, disengaged, and uncooperative forms of interparental conflict, children’s emotional insecurity, and their externalizing problems across two longitudinal studies. Participants in Study 1 consisted of 243 preschool children (M age = 4.60 years) and their parents, whereas Study 2 consisted of 263 adolescents (M age = 12.62 years) and their parents. Both studies utilized multi-method, multi-informant assessment batteries within a longitudinal design with three measurement occasions. Across both studies, lagged, autoregressive tests of the mediational paths revealed that interparental hostility was a significantly stronger predictor of the prospective cascade of children’s insecurity and externalizing problems than interparental disengagement and low levels of interparental cooperation. Findings further indicated that interparental disengagement was a stronger predictor of the insecurity pathway than was low interparental cooperation for the sample of adolescents in Study 2. Results are discussed in relation to how they inform and advance developmental models of family risk. PMID:27175983
Support interference of wind tunnel models: A selective annotated bibliography
NASA Technical Reports Server (NTRS)
Tuttle, M. H.; Gloss, B. B.
1981-01-01
This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.
Support interference of wind tunnel models: A selective annotated bibliography
NASA Technical Reports Server (NTRS)
Tuttle, M. H.; Lawing, P. L.
1984-01-01
This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.
Learning Disabilities at Twenty-Five: The Early Adulthood of a Maturing Concept.
ERIC Educational Resources Information Center
Levine, Melvin D.
1989-01-01
The keynote speech identifies six categories of problem areas for children with learning disabilities: (1) synchronization, (2) consistency, (3) methodology, (4) cohesion, (5) saliency determination, and (6) tempo. A model of neurodevelopmental functions and performance elements to guide researchers and practitioners is offered. (DB)
Clarifying Parent-Child Reciprocities during Early Childhood: The Early Childhood Coercion Model
ERIC Educational Resources Information Center
Scaramella, Laura V.; Leve, Leslie D.
2004-01-01
Consistent with existing theory, the quality of parent-child interactions during early childhood affects children's social relationships and behavioral adjustment during middle childhood and adolescence. Harsh parenting and a propensity toward emotional overarousal interact very early in life to affect risk for later conduct problems. Less…
A New Consortial Model for Building Digital Libraries.
ERIC Educational Resources Information Center
Neff, Raymond K.
The libraries in U.S. research universities are being systematically depopulated of current subscriptions to scholarly journals. Annual increases in subscription costs are consistently outpacing the growth in library budgets; this has become a chronic problem for academic libraries which collect in the fields of science, engineering, and medicine.…
The Prevailing Construct in Civic Education and Its Problems
ERIC Educational Resources Information Center
Gutierrez, Robert
2010-01-01
This article presents the natural rights construct as the perspective used in civic education, by outlining its moral, theoretical, and curricular elements. Morally, the construct holds a liberal view of individual rights and liberty from subjugation. The theoretical element consists of a description of the political systems model, which…
Can Flipping the Classroom Work? Evidence from Undergraduate Chemistry
ERIC Educational Resources Information Center
Casasola, Timothy; Nguyen, Tutrang; Warschauer, Mark; Schenke, Katerina
2017-01-01
Our study describes student outcomes from an undergraduate chemistry course that implemented a flipped format: a pedagogical model that consists of students watching recorded video lectures outside of the classroom and engaging in problem solving activities during class. We investigated whether (1) interest, study skills, and attendance as…
A study analysis of cable-body systems totally immersed in a fluid stream
NASA Technical Reports Server (NTRS)
Delaurier, J. D.
1972-01-01
A general stability analysis of a cable-body system immersed in a fluid stream is presented. The analytical portion of this analysis treats the system as being essentially a cable problem, with the body dynamics giving the end conditions. The mathematical form of the analysis consists of partial differential wave equations, with the end and auxiliary conditions being determined from the body equations of motion. The equations uncouple to give a lateral problem and a longitudinal problem as in first order airplane dynamics. A series of tests on a tethered wind tunnel model provide a comparison of the theory with experiment.
NASA Astrophysics Data System (ADS)
Hobri; Suharto; Rifqi Naja, Ahmad
2018-04-01
This research aims to determine students' creative thinking level in problem solving based on NCTM in the function subject. The research type is descriptive with a qualitative approach. Data were collected through tests and interviews. The creative thinking level in problem solving based on NCTM indicators consists of (1) making a mathematical model from a contextual problem and solving the problem, (2) solving the problem using various possible alternatives, (3) finding new alternative(s) to solve the problem, (4) determining the most efficient and effective alternative for that problem, and (5) reviewing and correcting mistake(s) in the process of problem solving. The results showed that 10 students were categorized at the very satisfying level, 23 students at the satisfying level, and 1 student at the less satisfying level. Students at the very satisfying level met all indicators; students at the satisfying level met the first, second, fourth, and fifth indicators; while students at the less satisfying level met only the first and fifth indicators.
Routing and Scheduling Optimization Model of Sea Transportation
NASA Astrophysics Data System (ADS)
Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman
2018-01-01
This paper examines a routing and scheduling optimization model of sea transportation. One of the issues discussed is the transportation of crude oil by ships (tankers) that distribute it to many islands. The consideration is the cost of transportation, which consists of travel costs and the cost of layover at the port. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance traveled by the tanker and to minimize the port costs. To make the model more realistic and the calculated cost more appropriate, a parameter is added that acts as a cost multiplier, increasing as the tanker is filled with crude oil.
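The load-dependent cost multiplier can be sketched as a simple route-cost evaluation. The cost structure and all numbers below are invented for illustration, not the paper's actual model:

```python
# Toy route cost: per-mile travel cost plus port layover fees, with a multiplier
# that grows with the fraction of crude-oil capacity on board (numbers invented).
COST_PER_MILE = 12.0

def route_cost(legs, port_fees, loads, capacity):
    """legs: leg distances; port_fees: fee at each visited port; loads: cargo per leg."""
    total = 0.0
    for dist, fee, load in zip(legs, port_fees, loads):
        multiplier = 1.0 + 0.5 * (load / capacity)   # fuller tanker -> costlier leg
        total += multiplier * dist * COST_PER_MILE + fee
    return total

print(route_cost(legs=[120, 80, 200], port_fees=[500, 300, 400],
                 loads=[900, 600, 200], capacity=1000))
```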
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example of a practical problem in vibrations, the estimation of parameters, the delay upper bound, and states from noise-corrupted measurements is investigated in a distributed force model widely used for modeling machine tool vibrations in the turning operation.
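The state-augmentation idea can be sketched on a scalar system: append the unknown parameter to the state vector and run an extended Kalman filter on the augmented system. This is a minimal discrete-time illustration with an invented system, not the paper's continuous-time approximation of the distributed-delay model:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, theta_true = 0.01, 1.5           # unknown decay rate to be estimated
Q = np.diag([1e-4, 1e-6])            # process noise (state, parameter)
R = 0.05 ** 2                        # measurement noise variance

# Simulate x' = -theta * x with noisy measurements of x.
xs, x = [], 1.0
for _ in range(2000):
    x += -theta_true * x * dt
    xs.append(x + rng.normal(scale=0.05))

# Augmented state z = [x, theta]; EKF predict/update cycle.
z = np.array([0.8, 0.5])             # deliberately poor initial guesses
P = np.eye(2)
H = np.array([[1.0, 0.0]])           # we measure only x
for y in xs:
    # Predict: x advances by Euler step, theta is modeled as constant.
    F = np.array([[1.0 - z[1] * dt, -z[0] * dt],
                  [0.0, 1.0]])       # Jacobian of the augmented dynamics
    z = np.array([z[0] - z[1] * z[0] * dt, z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement y.
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated theta (true 1.5):", round(z[1].item(), 3))
```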
REDUCING AMBIGUITY IN THE FUNCTIONAL ASSESSMENT OF PROBLEM BEHAVIOR
Rooker, Griffin W.; DeLeon, Iser G.; Borrero, Carrie S. W.; Frank-Crawford, Michelle A.; Roscoe, Eileen M.
2015-01-01
Severe problem behavior (e.g., self-injury and aggression) remains among the most serious challenges for the habilitation of persons with intellectual disabilities and is a significant obstacle to community integration. The current standard of behavior analytic treatment for problem behavior in this population consists of a functional assessment and treatment model. Within that model, the first step is to assess the behavior–environment relations that give rise to and maintain problem behavior, a functional behavioral assessment. Conventional methods of assessing behavioral function include indirect, descriptive, and experimental assessments of problem behavior. Clinical investigators have produced a rich literature demonstrating the relative effectiveness for each method, but in clinical practice, each can produce ambiguous or difficult-to-interpret outcomes that may impede treatment development. This paper outlines potential sources of variability in assessment outcomes and then reviews the evidence on strategies for avoiding ambiguous outcomes and/or clarifying initially ambiguous results. The end result for each assessment method is a set of best practice guidelines, given the available evidence, for conducting the initial assessment. PMID:26236145
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Aleksey
2014-05-01
A modeling technology based on coupled models of atmospheric dynamics and chemistry is presented [1-3]. It is the result of applying variational methods in combination with methods of decomposition and splitting. The idea of Euler's integrating factors combined with the technique of adjoint problems is also used. In online technologies, a significant part of the algorithmic and computational work consists of solving problems like convection-diffusion-reaction and organizing data assimilation techniques based on them. For equations of convection-diffusion, the methodology gives unconditionally stable and monotone discrete-analytical schemes within the framework of decomposition and splitting methods. These schemes are exact for locally one-dimensional problems with respect to the spatial variables. For stiff systems of equations describing the transformation of gas and aerosol substances, monotone and stable schemes are also obtained. They are implemented by non-iterative algorithms. By construction, all schemes for different components of the state functions are structurally uniform. They are coordinated among themselves in the sense of forward and inverse modeling. Variational principles are constructed taking into account the fact that the behavior of the different dynamic and chemical components of the state function is characterized by high variability and uncertainty. Information on the parameters of models, sources and emission impacts is also not determined precisely. Therefore, to obtain consistent solutions, we construct methods of sensitivity theory that take into account the influence of uncertainty. For this purpose, new methods of data assimilation of hydrodynamic fields and gas-aerosol substances measured by different observing systems are proposed. Optimization criteria for data assimilation problems are defined so that they include a set of functionals evaluating the total measure of uncertainties. The latter are explicitly introduced into the equations of the process model as desired deterministic control functions. This method of data assimilation with control functions is implemented by direct algorithms. The modeling technology presented here focuses on various scientific and applied problems of environmental prediction and design, including risk assessment in relation to existing and potential sources of natural and anthropogenic influences. The work is partially supported by Programs No 4 of the Presidium RAS and No 3 of the Mathematical Department of RAS; by RFBR projects NN 11-01-00187 and 14-01-31482; and by Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and Inverse Problems in a Variational Concept of Environmental Modeling. Pure and Applied Geophysics, 2012, V. 169: 447-465. 2. A. V. Penenko. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods. Numerical Analysis and Applications, 2012, V. 5: 326-341. 3. V. Penenko, E. Tsvetova. Variational methods for constructing the monotone approximations for atmospheric chemistry models. Numerical Analysis and Applications, 2013, V. 6: 210-220.
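The splitting idea for convection-diffusion can be illustrated with a generic 1D operator-splitting step (upwind advection followed by explicit diffusion). Note that this simple explicit scheme is only conditionally stable; the unconditionally stable discrete-analytical schemes of the paper are not reproduced here, and all grid numbers are invented:

```python
import numpy as np

# 1D convection-diffusion via operator splitting: advect first, then diffuse.
nx, dx, dt = 200, 0.01, 0.002
u, D = 0.5, 0.005                                    # advection speed, diffusivity
assert u * dt / dx <= 1 and D * dt / dx**2 <= 0.5    # explicit stability limits

c = np.exp(-((np.arange(nx) * dx - 0.5) ** 2) / 0.002)  # initial concentration blob

def split_step(c):
    # Step 1: upwind advection (valid for u > 0), periodic boundaries via roll.
    c = c - u * dt / dx * (c - np.roll(c, 1))
    # Step 2: explicit central-difference diffusion.
    return c + D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))

for _ in range(500):
    c = split_step(c)
print("total mass on the periodic grid:", round(c.sum() * dx, 4))  # conserved
```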
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of MVDR, APES, and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Astroparticle physics with solar neutrinos.
Nakahata, Masayuki
2011-01-01
Solar neutrino experiments observed fluxes smaller than the expectations from the standard solar model. This discrepancy is known as the "solar neutrino problem". Flux measurements by Super-Kamiokande and SNO have demonstrated that the solar neutrino problem is due to neutrino oscillations. Combining the results of all solar neutrino experiments, parameters for solar neutrino oscillations are obtained. Correcting for the effect of neutrino oscillations, the observed neutrino fluxes are consistent with the prediction from the standard solar model. In this article, results of solar neutrino experiments are reviewed with detailed descriptions of what Kamiokande and Super-Kamiokande have contributed to the history of astroparticle physics with solar neutrino measurements. (Communicated by Toshimitsu Yamazaki, M.J.A.).
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malikopoulos, Andreas; Djouadi, Seddik M; Kuruganti, Teja
We consider the optimal stochastic control problem for home energy systems with solar and energy storage devices when the demand is realized from the grid. The demand is subject to Brownian motions with both drift and variance parameters modulated by a continuous-time Markov chain that represents the regime of electricity price. We model the system as a pure stochastic differential equation model, and then we follow the completing-the-square technique to solve the stochastic home energy management problem. The effectiveness of the proposed approach is validated through a simulation example. For practical situations with constraints consistent with those studied here, our results imply the proposed framework could reduce the electricity cost from short-term purchases in the peak-hour market.
Familism Values as a Protective Factor for Mexican-origin Adolescents Exposed to Deviant Peers
Germán, Miguelina; Gonzales, Nancy A.; Dumka, Larry
2009-01-01
This study examined interactive relations between adolescent, maternal and paternal familism values and deviant peer affiliations in predicting adolescent externalizing problems within low-income, Mexican-origin families (N = 598). Adolescent, maternal and paternal familism values interacted protectively with deviant peer affiliations to predict lower levels of externalizing problems according to two independent teacher reports. These relations were not found with parent reports of adolescent externalizing problems although these models showed a direct, protective effect of maternal familism values. Consistent with the view that traditional cultural values are protective for Latino adolescents, these results suggest that supporting familism values among Mexican-origin groups is a useful avenue for improving adolescent conduct problems, particularly in a school context. PMID:21776180
A model for effective planning of SME support services.
Rakićević, Zoran; Omerbegović-Bijelović, Jasmina; Lečić-Cvetković, Danica
2016-02-01
This paper presents a model for effective planning of support services for small and medium-sized enterprises (SMEs). The idea is to scrutinize and measure the suitability of support services in order to give recommendations for the improvement of the support planning process. We examined the applied support services and matched them with the problems and needs of SMEs, based on a survey conducted in 2013 on a sample of 336 SMEs in Serbia. We defined and analysed five research questions that refer to support services, their consistency with the SMEs' problems and needs, and the relation between the given support and SMEs' success. The survey results have shown a statistically significant connection between them. Based on this result, we proposed an eight-phase model as a method for the improvement of support service planning for SMEs. This model helps SMEs plan their support requirements better; it also helps government and administration bodies at all levels, and organizations that provide support services, better understand SMEs' problems and needs for support. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sukmawati, Zuhairoh, Faihatuz
2017-05-01
The purpose of this research was to develop an authentic assessment model based on a showcase portfolio for learning of mathematical problem solving. This research used the research and development (R&D) method, which consists of the following stages of development: Phase I, conducting a preliminary study; Phase II, determining the purpose of development and preparing the initial model; Phase III, trial testing of the instrument for the initial draft model and the initial product. The respondents of this research were the students of SMAN 8 and SMAN 20 Makassar. The data were collected through observation, interviews, documentation, a student questionnaire, and tests of mathematical problem-solving abilities. The data were analyzed with descriptive and inferential statistics. The results of this research are an authentic assessment model design based on a showcase portfolio, which involves: 1) steps in implementing showcase-based authentic assessment, an assessment rubric for cognitive aspects, an assessment rubric for affective aspects, and an assessment rubric for the skill aspect; 2) the students' average problem-solving ability, scored using authentic assessment based on the showcase portfolio, was in the high category, and the students' response was in the good category.
NASA Astrophysics Data System (ADS)
Prahani, B. K.; Suprapto, N.; Suliyanah; Lestari, N. A.; Jauhariyah, M. N. R.; Admoko, S.; Wahyuni, S.
2018-03-01
In previous research, the Collaborative Problem Based Physics Learning (CPBPL) model was developed to improve students' science process skills, collaborative problem solving, and self-confidence in physics learning. This research aims to analyze the effectiveness of the CPBPL model for improving students' self-confidence in physics learning. This research implemented a quasi-experimental design on 140 senior high school students who were divided into 4 groups. Data collection was conducted through questionnaire, observation, and interview. Self-confidence was measured through the Self-Confidence Evaluation Sheet (SCES). The data were analyzed using the Wilcoxon test, n-gain, and the Kruskal-Wallis test. Results show that: (1) there is a significant improvement in students' self-confidence scores in physics learning (α = 5%), (2) the n-gain value of students' self-confidence in physics learning is high, and (3) the average n-gain of students' self-confidence in physics learning was consistent across all groups. It can be concluded that the CPBPL model is effective in improving students' self-confidence in physics learning.
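A short sketch of the two statistics named above: Hake's normalized gain (n-gain) and the paired Wilcoxon signed-rank test. The pre/post scores are invented:

```python
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([55, 60, 48, 70, 62, 58, 65, 50], dtype=float)   # invented scores
post = np.array([78, 82, 66, 88, 80, 79, 85, 70], dtype=float)

# Hake's normalized gain: actual gain over maximum possible gain (max score 100).
n_gain = (post - pre) / (100.0 - pre)
print("mean n-gain:", n_gain.mean().round(2))   # >=0.7 is conventionally "high"

stat, p = wilcoxon(pre, post)                   # paired, non-parametric test
print(f"Wilcoxon: statistic={stat}, p={p:.4f}")
```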
Local mesh adaptation technique for front tracking problems
NASA Astrophysics Data System (ADS)
Lock, N.; Jaeger, M.; Medale, M.; Occelli, R.
1998-09-01
A numerical model is developed for the simulation of moving interfaces in viscous incompressible flows. The model is based on the finite element method with a pseudo-concentration technique to track the front. Since a Eulerian approach is chosen, the interface is advected by the flow through a fixed mesh. Therefore, material discontinuity across the interface cannot be described accurately. To remedy this problem, the model has been supplemented with a local mesh adaptation technique. The latter consists of updating the mesh at each time step to the interface position, such that element boundaries lie along the front. It has been implemented for unstructured triangular finite element meshes. The outcome of this technique is that it allows an accurate treatment of material discontinuity across the interface and, if necessary, a modelling of interface phenomena such as surface tension by using specific boundary elements. For illustration, two examples are computed and presented in this paper: the broken dam problem and the Rayleigh-Taylor instability. Good agreement has been obtained in the comparison of the numerical results with theory or available experimental data.
Dynamic optimization and its relation to classical and quantum constrained systems
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x,t) = e^{iS(x,t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x,t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
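For reference, the standard form of the Hamilton-Jacobi-Bellman equation for a one-state, one-control problem, in generic notation assumed here rather than taken from the paper:

```latex
% Standard Hamilton-Jacobi-Bellman equation for the optimal value function
% J(x,t) of a one-state, one-control problem (notation generic):
\begin{equation}
  -\frac{\partial J}{\partial t}
  = \min_{u}\left[ L(x,u,t)
  + \frac{\partial J}{\partial x}\, f(x,u,t) \right],
  \qquad J(x,T) = \Phi(x),
\end{equation}
% where dx/dt = f(x,u,t) is the state dynamics, L is the running cost,
% and Phi is the terminal cost.
```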
Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System
NASA Astrophysics Data System (ADS)
Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor
2016-09-01
The steady-state and transient engine performances in control systems are usually evaluated by applying thermodynamic engine models. Most models operate between the idle and maximum power points; only recently have they begun to address the sub-idle operating range. The lack of information about the component maps at sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range. Precise extrapolation is also a challenge. As a rule, researchers address only particular aspects of the problem, such as light-off of the combustion chamber or turbine operation under turned-off conditions of the combustion chamber. However, there are no reports about a model that considers all of these aspects and simulates engine starting. This paper proposes a new method to simulate starting. The method substitutes the non-linear thermodynamic model with a linear dynamic model, which is supplemented with a simplified static model. The latter model is a set of direct relations between parameters that are used in the control algorithms instead of the commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.
NASA Astrophysics Data System (ADS)
Kolkman, M. J.; Kok, M.; van der Veen, A.
The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problem emerges as a promising technique. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretations of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker's frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual's mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity, uncertainty and disagreement) can be positioned in the framework, as can the communities of knowledge construction and valuation involved in the solution of these problems (core science, applied science, professional consultancy, and "post-normal" science). Mental model maps, this research hypothesises, are suitable to analyse the above aspects of the problem. This hypothesis is tested for the case of the Zwolle storm surge barrier. Analysis can aid integration between disciplines, participation of public stakeholders, and can stimulate learning processes. Mental model mapping is recommended to visualise the use of knowledge, to analyse difficulties in the problem-solving process, and to aid information transfer and communication. Mental model mapping helps scientists to shape their new, post-normal responsibilities in a manner that complies with integrity when dealing with unstructured problems in complex, multifunctional systems.
Integrative structure modeling with the Integrative Modeling Platform.
Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej
2018-01-01
Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
Examining the Latent Structure of Anxiety Sensitivity in Adolescents using Factor Mixture Modeling
Allan, Nicholas P.; MacPherson, Laura; Young, Kevin C.; Lejuez, Carl W.; Schmidt, Norman B.
2014-01-01
Anxiety sensitivity has been implicated as an important risk factor, generalizable to most anxiety disorders. In adults, factor mixture modeling has been used to demonstrate that anxiety sensitivity is best conceptualized as categorical between individuals. That is, whereas most adults appear to possess normative levels of anxiety sensitivity, a small subset of the population appears to possess abnormally high levels of anxiety sensitivity. Further, those in the high anxiety sensitivity group are at increased risk of having high levels of anxiety and of having an anxiety disorder. This study was designed to determine whether these findings extend to adolescents. Factor mixture modeling was used to examine the best fitting model of anxiety sensitivity in a sample of 277 adolescents (M age = 11.0, SD = .81). Consistent with research in adults, the best fitting model consisted of two classes, one containing adolescents with high levels of anxiety sensitivity (n = 25), and another containing adolescents with normative levels of anxiety sensitivity (n = 252). Examination of anxiety sensitivity subscales revealed that the social concerns subscale was not important for classification of individuals. Convergent and discriminant validity of anxiety sensitivity classes were found in that membership in the high anxiety sensitivity class was associated with higher mean levels of anxiety symptoms, controlling for depression and externalizing problems, and was not associated with higher mean levels of depression or externalizing symptoms controlling for anxiety problems. PMID:24749756
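Factor mixture models are typically fit in specialized SEM software; as a rough analogy only, one can compare one- versus two-class Gaussian mixtures on a single anxiety-sensitivity score via BIC. The data below are invented, sized to echo the 252/25 split reported above, and a Gaussian mixture on observed scores is a simplification of a true factor mixture model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Invented scores: a large normative class plus a small high-sensitivity class.
scores = np.concatenate([rng.normal(20, 5, size=252),
                         rng.normal(42, 6, size=25)]).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(f"classes={k}  BIC={gm.bic(scores):.1f}")   # lower BIC = better
# A 2-class solution should win, mirroring the high vs. normative split.
```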
Segmentation, modeling and classification of the compact objects in a pile
NASA Technical Reports Server (NTRS)
Gupta, Alok; Funka-Lea, Gareth; Wohn, Kwangyoen
1990-01-01
The problem of interpreting dense range images obtained from the scene of a heap of man-made objects is discussed. A range image interpretation system consisting of segmentation, modeling, verification, and classification procedures is described. First, the range image is segmented into regions and reasoning is done about the physical support of these regions. Second, for each region several possible three-dimensional interpretations are made based on various scenarios of the object's physical support. Finally, each interpretation is tested against the data for its consistency. The superquadric model, augmented with tapering deformations along the major axis, is selected as the three-dimensional shape descriptor. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.
Numerical modelling of instantaneous plate tectonics
NASA Technical Reports Server (NTRS)
Minster, J. B.; Haines, E.; Jordan, T. H.; Molnar, P.
1974-01-01
Assuming lithospheric plates to be rigid, 68 spreading rates, 62 fracture zone trends, and 106 earthquake slip vectors are systematically inverted to obtain a self-consistent model of instantaneous relative motions for eleven major plates. The inverse problem is linearized and solved iteratively by a maximum-likelihood procedure. Because the uncertainties in the data are small, Gaussian statistics are shown to be adequate. The use of a linear theory permits (1) the calculation of the uncertainties in the various angular velocity vectors caused by uncertainties in the data, and (2) quantitative examination of the distribution of information within the data set. The existence of a self-consistent model satisfying all the data is strong justification of the rigid plate assumption. Slow movement between North and South America is shown to be resolvable.
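The core of a linearized maximum-likelihood inversion under Gaussian errors is weighted least squares with a parameter covariance matrix, sketched below on an invented linear problem (not the actual plate-motion data kernel):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearized inverse problem d = A m + noise (A and m invented for illustration).
m_true = np.array([0.3, -0.7, 1.1])       # e.g., an angular-velocity vector
A = rng.normal(size=(50, 3))              # data kernel from the linearization
sigma = 0.1 * np.ones(50)                 # per-datum uncertainties
d = A @ m_true + rng.normal(scale=sigma)

# Gaussian maximum likelihood = weighted least squares.
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(A.T @ W @ A)          # parameter covariance matrix
m_hat = cov @ A.T @ W @ d
print("estimate:", m_hat.round(3))
print("1-sigma uncertainties:", np.sqrt(np.diag(cov)).round(3))
```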
Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel
2010-12-21
How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C(10)E(4), C(12)E(5), C(12)E(6), and C(10)E(8)) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
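A sketch of the optimization formulation: a placeholder surrogate stands in for the MSB adsorption-kinetics model, and a constrained optimizer searches for the mixture composition whose dynamic surface tension curve best matches a target profile. All model details, rates, and plateau values are hypothetical; only the problem structure follows the abstract:

```python
import numpy as np
from scipy.optimize import minimize

t = np.logspace(-2, 2, 40)                        # time grid (s)

def dst_model(x, t):
    """Placeholder for an adsorption-kinetics model (e.g., MSB): maps mole
    fractions x of 4 surfactants to a dynamic surface tension curve."""
    rates = np.array([0.3, 1.0, 1.5, 4.0])        # invented relaxation rates
    gamma_eq = np.array([38.0, 33.0, 31.0, 36.0]) # invented plateaus (mN/m)
    decay = x @ np.exp(-np.outer(rates, t))       # weighted relaxation
    mix_eq = x @ gamma_eq
    return mix_eq + (72.0 - mix_eq) * decay / max(x.sum(), 1e-12)

target = dst_model(np.array([0.1, 0.4, 0.4, 0.1]), t)   # desired DST profile

def objective(x):
    return np.sum((dst_model(x, t) - target) ** 2)

res = minimize(objective, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0, 1)] * 4,
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1})
print("composition minimizing the DST mismatch:", res.x.round(3))
```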
Austin, J K; Perkins, S M; Johnson, C S; Fastenau, P S; Byars, A W; deGrauw, T J; Dunn, D W
2011-08-01
The purposes of this 36-month study of children with first recognized seizures were: (1) to describe baseline differences in behavior problems between children with and without prior unrecognized seizures; (2) to identify differences over time in behavior problems between children with seizures and their healthy siblings; (3) to identify the proportions of children with seizures and healthy siblings who were consistently at risk for behavior problems for 36 months; and (4) to identify risk factors for behavior problems 36 months following the first recognized seizure. Risk factors explored included demographic (child age and gender, caregiver education), neuropsychological (IQ, processing speed), seizure (epileptic syndrome, use of antiepileptic drug, seizure recurrence), and family (family mastery, satisfaction with family relationships, parent response) variables. Participants were 300 children aged 6 through 14 years with a first recognized seizure and 196 healthy siblings. Data were collected from medical records, structured interviews, self-report questionnaires, and neuropsychological testing. Behavior problems were measured using the Child Behavior Checklist and the Teacher's Report Form. Data analyses included descriptive statistics and linear mixed models. Children with prior unrecognized seizures were at higher risk for behavior problems at baseline. As a group, children with seizures showed a steady reduction in behavior problems over time. Children with seizures were found to have significantly more behavior problems than their siblings over time, and significantly more children with seizures (11.3%) than siblings (4.6%) had consistent behavior problems over time. Key risk factors for child behavior problems based on both caregivers and teachers were: less caregiver education, slower initial processing speed, slowing of processing speed over the first 36 months, and a number of family variables including lower levels of family mastery or child satisfaction with family relationships, lower parent support of the child's autonomy, and lower parent confidence in their ability to discipline their child. Children with new-onset seizures who are otherwise developing normally have higher rates of behavior problems than their healthy siblings; however, behavior problems are not consistently in the at-risk range in most children during the first 3 years after seizure onset. When children show behavior problems, family variables that might be targeted include family mastery, parent support of child autonomy, and parents' confidence in their ability to handle their children's behavior.
Mira variables: An informal review
NASA Technical Reports Server (NTRS)
Wing, R. F.
1980-01-01
The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
Jun, Won Hee; Lee, Eun Ju; Park, Han Jong; Chang, Ae Kyung; Kim, Mi Ja
2013-12-01
The 5E learning cycle model has shown a positive effect on student learning in science education, particularly in courses with theory and practice components. Combining problem-based learning (PBL) with the 5E learning cycle was suggested as a better option for students' learning of theory and practice. The purpose of this study was to compare the effects of the traditional learning method with the 5E learning cycle model with PBL. The control group (n = 78) was subjected to a learning method that consisted of lecture and practice. The experimental group (n = 83) learned by using the 5E learning cycle model with PBL. The results showed that the experimental group had significantly improved self-efficacy, critical thinking, learning attitude, and learning satisfaction. Such an approach could be used in other countries to enhance students' learning of fundamental nursing. Copyright 2013, SLACK Incorporated.
A Heuristic Model of Consciousness with Applications to the Development of Science and Society
NASA Technical Reports Server (NTRS)
Curreri, Peter A.
2010-01-01
A working model of consciousness is fundamental to understanding the interactions of the observer in science. This paper examines the contemporary understanding of consciousness. A heuristic model of consciousness is suggested that is consistent with psychophysics measurements of the bandwidth of consciousness relative to unconscious perception. While the self-referential nature of consciousness confers a survival benefit by assuring that all points of view regarding a problem are experienced in a sufficiently large population, conscious bandwidth is constrained by design to avoid chaotic behavior. The multiple hypotheses provided by conscious reflection enable the rapid progression of science and technology. The questions of free will and the problem of attention are discussed in relation to the model. Finally, the combination of rapid technology growth with the assurance of many unpredictable points of view is considered with respect to contemporary constraints on the development of society.
Numerical modeling process of embolization arteriovenous malformation
NASA Astrophysics Data System (ADS)
Cherevko, A. A.; Gologush, T. S.; Petrenko, I. A.; Ostapenko, V. V.
2017-10-01
Cerebral arteriovenous malformation is a difficult, dangerous, and frequently encountered developmental vascular pathology. It consists of vessels of very small diameter, which shunt blood from the artery to the vein, and in this regard it can be adequately modeled as a porous medium. Endovascular embolization is an effective treatment for such pathologies. However, the danger of intraoperative rupture during embolization still exists. The purpose of this work is to model this process and build an optimization algorithm for arteriovenous malformation embolization. To study the different embolization variants, the initial-boundary value problems describing the process of embolization were solved numerically using a new modification of the CABARET scheme. The key stages of the embolization process were modeled in our numerical experiments. This approach reproduces well the essential features of discontinuous two-phase flows arising in embolization problems, and it can be used for further study of the embolization process.
NASA Technical Reports Server (NTRS)
Adams, D. F.; Mahishi, J. M.
1982-01-01
The axisymmetric finite element model and associated computer program developed for the analysis of crack propagation in a composite consisting of a single broken fiber in an annular sheath of matrix material was extended to include a constant displacement boundary condition during an increment of crack propagation. The constant displacement condition permits the growth of a stable crack, as opposed to the catastrophic failure in an earlier version. The finite element model was refined to respond more accurately to the high stresses and steep stress gradients near the broken fiber end. The accuracy and effectiveness of the conventional constant strain axisymmetric element for crack problems was established by solving the classical problem of a penny-shaped crack in a thick cylindrical rod under axial tension. The stress intensity factors predicted by the present finite element model are compared with existing continuum results.
Model reduction in integrated controls-structures design
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
1993-01-01
It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.
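To make the order-reduction idea concrete, here is a minimal modal-truncation sketch in Python. It is not the paper's design-space-robust technique (which must remain valid as structural and control variables change); it only shows the basic reduction step such techniques build on, with toy matrices.

```python
# Minimal modal-truncation sketch (illustrative only; not the paper's
# technique). Reduce M x'' + K x = B u by keeping the first r modes.
import numpy as np
from scipy.linalg import eigh

def modal_reduce(M, K, B, r):
    """Return reduced (Mr, Kr, Br) using the r lowest-frequency modes."""
    # Generalized symmetric eigenproblem K phi = w^2 M phi (ascending)
    w2, Phi = eigh(K, M)
    Phi_r = Phi[:, :r]              # keep the r lowest-frequency modes
    Mr = Phi_r.T @ M @ Phi_r        # ~ identity for M-normalized modes
    Kr = Phi_r.T @ K @ Phi_r        # ~ diagonal of squared frequencies
    Br = Phi_r.T @ B
    return Mr, Kr, Br

# Toy 4-DOF spring-mass chain, reduced to 2 modes
n = 4
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.eye(n)[:, [0]]               # single actuator at the first mass
Mr, Kr, Br = modal_reduce(M, K, B, r=2)
```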
Optimal control strategies using vaccination and fogging in a dengue fever transmission model
NASA Astrophysics Data System (ADS)
Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem for dengue fever transmission. The model divides the population into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes, so the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model: fogging and vaccination. The objective function of the optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
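As a concrete companion to the Pontryagin-based solution, below is a forward-backward sweep on a scalar toy problem. The dengue model itself has six states and two controls; this sketch only illustrates the numerical pattern (forward state sweep, backward adjoint sweep, relaxed control update) under made-up dynamics.

```python
# Forward-backward sweep for a scalar toy optimal-control problem:
#   minimize  int_0^T (x^2 + u^2) dt   s.t.  x' = -x + u,  x(0) = 1.
# Pontryagin: H = x^2 + u^2 + lam*(-x + u)  =>  u* = -lam/2,
#             lam' = -dH/dx = -2x + lam,  lam(T) = 0.
import numpy as np

T, N = 5.0, 1000
h = T / N
x = np.zeros(N + 1); lam = np.zeros(N + 1); u = np.zeros(N + 1)

for sweep in range(50):
    # forward state integration with the current control
    x[0] = 1.0
    for i in range(N):
        x[i + 1] = x[i] + h * (-x[i] + u[i])
    # backward adjoint integration from the terminal condition
    lam[-1] = 0.0
    for i in range(N, 0, -1):
        lam[i - 1] = lam[i] - h * (-2 * x[i] + lam[i])
    # control update with relaxation for stability
    u_new = -lam / 2
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5 * u + 0.5 * u_new
```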
Wang, Frances L; Eisenberg, Nancy; Valiente, Carlos; Spinrad, Tracy L
2016-11-01
We contribute to the literature on the relations of temperament to externalizing and internalizing problems by considering parental emotional expressivity and child gender as moderators of such relations and examining prediction of pure and co-occurring problem behaviors during early to middle adolescence using bifactor models (which provide unique and continuous factors for pure and co-occurring internalizing and externalizing problems). Parents and teachers reported on children's (4.5- to 8-year-olds; N = 214) and early adolescents' (6 years later; N = 168) effortful control, impulsivity, anger, sadness, and problem behaviors. Parental emotional expressivity was measured observationally and with parents' self-reports. Early-adolescents' pure externalizing and co-occurring problems shared childhood and/or early-adolescent risk factors of low effortful control, high impulsivity, and high anger. Lower childhood and early-adolescent impulsivity and higher early-adolescent sadness predicted early-adolescents' pure internalizing. Childhood positive parental emotional expressivity more consistently related to early-adolescents' lower pure externalizing compared to co-occurring problems and pure internalizing. Lower effortful control predicted changes in externalizing (pure and co-occurring) over 6 years, but only when parental positive expressivity was low. Higher impulsivity predicted co-occurring problems only for boys. Findings highlight the probable complex developmental pathways to adolescent pure and co-occurring externalizing and internalizing problems.
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large in size so that they address the same basic problems of design, fabrication, assembly, and test as the full-scale systems, which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests, and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management, and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterization, and manufacturing imperfection assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. The concept was considered for a number of potential applications, including mobile communications, VLBI, and aircraft surveillance; baseline system configurations were in fact developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment
NASA Astrophysics Data System (ADS)
Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas
2010-12-01
In breast elastography, breast tissue usually undergoes large compression, resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications, such as measuring normal tissue hyperelastic parameters in vivo; such parameters are essential in planning and conducting computer-aided interventional procedures. The proposed parameter reconstruction technique uses a constrained iterative inversion and can be viewed as an inverse problem. To solve this problem, we used a nonlinear finite element model corresponding to its forward problem. In this research, we applied the Veronda-Westmann, Yeoh, and polynomial models to model tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while the tissue-mimicking phantom was constructed from polyvinyl alcohol processed with freeze-thaw cycles, which exhibits nonlinear mechanical behavior. Both phantoms consisted of three types of soft tissue, mimicking adipose, fibroglandular tissue, and a tumor. The results of the simulations and experiments show the feasibility of accurately reconstructing tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately: compared to the uniaxial test results, the average errors of the parameter ratios reconstructed for the inclusion relative to the middle and external layers were 13% and 9.6%, respectively. Given that the parameter ratios of abnormal to normal tissues range from three times to more than ten times, this accuracy is sufficient for tumor classification.
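The constrained iterative inversion can be pictured as a damped Gauss-Newton loop around a forward solver. The sketch below uses a cheap analytic stand-in for the nonlinear FEM forward problem (`forward_model` is hypothetical); it is a schematic of the idea, not the authors' implementation.

```python
# Damped Gauss-Newton skeleton for iterative parameter recovery.
import numpy as np

def forward_model(params):
    """Toy forward problem mapping parameters to 'measured' displacements
    (a stand-in for the nonlinear finite-element forward solve)."""
    a, b = params
    x = np.linspace(0.1, 1.0, 20)
    return a * x * np.exp(b * x)

def gauss_newton(u_meas, p0, lb, ub, tol=1e-10, max_iter=50):
    p = np.asarray(p0, float)
    for _ in range(max_iter):
        r = forward_model(p) - u_meas
        # finite-difference Jacobian (one column per parameter)
        J = np.column_stack([
            (forward_model(p + h * e) - forward_model(p - h * e)) / (2 * h)
            for e in np.eye(len(p)) for h in [1e-6 * max(1.0, abs(p @ e))]
        ])
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = np.clip(p + 0.5 * dp, lb, ub)   # damped step, box constraints
        if np.linalg.norm(dp) < tol:
            break
    return p

p_true = np.array([2.0, 0.5])
u_meas = forward_model(p_true)
p_hat = gauss_newton(u_meas, p0=[1.0, 1.0], lb=[0, 0], ub=[10, 10])
```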
A green vehicle routing problem with customer satisfaction criteria
NASA Astrophysics Data System (ADS)
Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.
2016-12-01
This paper develops an MILP model named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In addition to the traditional objective of the VRP, this model takes both pollution and customer satisfaction into account. The model also provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, new factors evaluate the greenness of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval so that customer satisfaction can be incorporated alongside the other linear objectives. The formulation enriches managerial insight by providing trade-offs between customer satisfaction, total costs, and emission levels. Finally, a numerical study shows the applicability of the model.
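The PLF linearization can be reproduced in a few lines with a standard lambda formulation. The sketch below (using PuLP; the breakpoints are made-up numbers) shows how a nonlinear satisfaction curve becomes MILP-compatible; it is not the full S-GVRP model.

```python
# Piecewise-linear satisfaction in a MILP via the lambda formulation.
from pulp import (LpProblem, LpVariable, LpMaximize, lpSum, LpBinary,
                  PULP_CBC_CMD)

# Satisfaction as a function of arrival time: breakpoints (t_k, s_k)
tk = [0, 10, 20, 30]        # arrival times
sk = [1.0, 1.0, 0.4, 0.0]   # satisfaction drops after the soft window

prob = LpProblem("plf_demo", LpMaximize)
lam = [LpVariable(f"lam{k}", lowBound=0) for k in range(len(tk))]
z = [LpVariable(f"z{k}", cat=LpBinary) for k in range(len(tk) - 1)]
t = LpVariable("arrival", lowBound=0)
s = LpVariable("satisfaction", lowBound=0)

prob += lpSum(lam) == 1
prob += lpSum(z) == 1                       # exactly one active segment
prob += t == lpSum(l * b for l, b in zip(lam, tk))
prob += s == lpSum(l * v for l, v in zip(lam, sk))
# adjacency: only the two lambdas of the active segment may be nonzero
for k in range(len(tk)):
    nbrs = [z[j] for j in (k - 1, k) if 0 <= j < len(z)]
    prob += lam[k] <= lpSum(nbrs)

prob += t >= 17            # e.g., route timing forces a late arrival
prob += s                  # objective: maximize satisfaction
prob.solve(PULP_CBC_CMD(msg=0))   # optimum: t = 17, s = 0.58
```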
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
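The discrete-time reformulation rests on one primitive: for an intensity matrix Q, the state-transition matrix over an irregular gap dt is expm(Q*dt). A minimal forward filter built on that primitive is sketched below (toy matrices; not the paper's full EM learner).

```python
# Forward filtering of a CT-HMM over irregularly timed observations.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.2,  0.2,  0.0],     # disease-progression intensities
              [ 0.0, -0.1,  0.1],
              [ 0.0,  0.0,  0.0]])    # absorbing final stage
B = np.array([[0.8, 0.2],             # emission probabilities per state
              [0.4, 0.6],
              [0.1, 0.9]])

times = np.array([0.0, 1.3, 4.0, 4.7])   # irregular visit times
obs = [0, 0, 1, 1]                       # observed symptom categories

alpha = np.array([1.0, 0.0, 0.0]) * B[:, obs[0]]
alpha /= alpha.sum()
for k in range(1, len(times)):
    P = expm(Q * (times[k] - times[k - 1]))   # gap-dependent transitions
    alpha = (alpha @ P) * B[:, obs[k]]
    alpha /= alpha.sum()                      # normalize for stability

print("state posterior at last visit:", alpha)
```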
Butsick, Andrew J; Wood, Jonathan S; Jovanis, Paul P
2017-09-01
The Highway Safety Manual provides multiple methods that can be used to identify sites with promise (SWiPs) for safety improvement. However, most of these methods cannot be used to identify sites with specific problems. Furthermore, given that infrastructure funding is often earmarked for specific problems/programs, a method for identifying SWiPs related to those programs would be very useful. This research establishes a method for identifying SWiPs with specific issues, accomplished using two safety performance functions (SPFs). The method is applied to identifying SWiPs with geometric design consistency issues. Mixed effects negative binomial regression was used to develop two SPFs using 5 years of crash data and over 8754 km of two-lane rural roadway. The first SPF contained typical roadway elements, while the second contained additional geometric design consistency parameters. After empirical Bayes adjustments, sites with promise were identified. The disparity between SWiPs identified by the two SPFs was evident: 40 unique sites were identified by each model out of the top 220 segments. By comparing sites across the two models, candidate road segments can be identified where a lack of design consistency may be contributing to an increase in expected crashes. Practitioners can use this method to more effectively identify roadway segments suffering from reduced safety performance due to geometric design inconsistency, with detailed engineering studies of identified sites required to confirm the initial assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
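The empirical Bayes step the method relies on is compact enough to sketch. Below is the standard HSM-style EB adjustment with a hypothetical SPF (made-up coefficients); the paper's actual SPFs include the design-consistency covariates.

```python
# Empirical Bayes adjustment in the Highway Safety Manual style.
import numpy as np

def spf_prediction(aadt, length_km, beta0=-7.5, beta1=0.9):
    """Hypothetical negative-binomial SPF: predicted crashes per period."""
    return np.exp(beta0 + beta1 * np.log(aadt)) * length_km

def empirical_bayes(mu, observed, k):
    """EB-expected crashes; k is the NB overdispersion parameter."""
    w = 1.0 / (1.0 + k * mu)      # more weight on the SPF when k*mu is small
    return w * mu + (1.0 - w) * observed

aadt = np.array([4000, 9000, 1500])
length = np.array([2.1, 0.8, 3.5])
obs = np.array([6, 11, 2])

mu = spf_prediction(aadt, length)
eb = empirical_bayes(mu, obs, k=0.6)
# Rank sites by excess = EB - SPF to flag sites with promise (SWiPs)
print(np.argsort(eb - mu)[::-1])
```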
Feasibility of an anticipatory noncontact precrash restraint actuation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercel, S.W.; Dress, W.B.
1995-12-31
The problem of providing an electronic warning of an impending crash to a precrash restraint system a fraction of a second before physical contact differs from more widely explored problems, such as providing several seconds of crash warning to a driver. One approach to precrash restraint sensing is to apply anticipatory system theory. This consists of nested simplified models of the system to be controlled and of the system's environment. It requires sensory information to describe the "current state" of the system and the environment. The models use the sensory data to make a faster-than-real-time prediction about the near future. Anticipation theory is well founded but rarely used. A major problem is to extract real-time current-state information from inexpensive sensors. Providing current-state information to the nested models is the weakest element of the system. Therefore, sensors and real-time processing of sensor signals command the most attention in an assessment of system feasibility. This paper describes problem definition, potential "showstoppers," and ways to overcome them. It includes experiments showing that inexpensive radar is a practical sensing element. It considers fast and inexpensive algorithms to extract information from sensor data.
Aggregation of LoD 1 building models as an optimization problem
NASA Astrophysics Data System (ADS)
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousand or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan, and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
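For contrast with the MIP formulation, a bare-bones version of the region growing strategy is sketched below (toy heights and adjacency; real implementations also merge ground plans and enforce application-specific constraints).

```python
# Greedy region growing for LoD 1 aggregation: merge adjacent buildings
# whose heights stay within a tolerance of the seed building's height.
def region_grow(heights, adjacency, tol=2.0):
    """Return a list of aggregated groups (sets of building ids)."""
    unassigned = set(heights)
    groups = []
    while unassigned:
        seed = unassigned.pop()
        group, frontier = {seed}, [seed]
        while frontier:
            b = frontier.pop()
            for nb in adjacency.get(b, ()):
                if nb in unassigned and abs(heights[nb] - heights[seed]) < tol:
                    unassigned.remove(nb)
                    group.add(nb)
                    frontier.append(nb)
        groups.append(group)
    return groups

heights = {1: 9.0, 2: 10.5, 3: 10.0, 4: 21.0}
adjacency = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
print(region_grow(heights, adjacency))   # tall building 4 stays separate
```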
Loads Correlation of a Full-Scale UH-60A Airloads Rotor in a Wind Tunnel
2012-05-01
Modeling in lifting line theory is unsteady, compressible, viscous flow about an infinite wing in a uniform flow consisting of a yawed freestream and ... wake-induced velocity. This problem is modeled within CAMRAD II as two-dimensional, steady, compressible, viscous flow (airfoil tables), plus ... and 21 aerodynamic panels. Detailed rotor control system geometry, stiffness, and lag damper were also incorporated. When not coupling to OVERFLOW, a ...
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
Ordinarily, the Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on applying different methods to improve heuristics for optimizing the JSSP. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study of an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
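The pheromone mechanics at the core of any ACO variant can be shown compactly. The example below runs ACO on a toy single-machine sequencing problem; the paper's tolerance-based constraint handling and full JSSP model are not reproduced.

```python
# Core ACO loop on a toy sequencing problem (pheromone mechanics only).
import random

jobs = {"J1": 4, "J2": 2, "J3": 7, "J4": 3}                 # processing times
tau = {(i, j): 1.0 for i in jobs for j in jobs if i != j}   # pheromone trails

def build_sequence():
    seq = [random.choice(list(jobs))]
    while len(seq) < len(jobs):
        cands = [j for j in jobs if j not in seq]
        w = [tau[(seq[-1], j)] / jobs[j] for j in cands]    # pheromone x heuristic
        seq.append(random.choices(cands, weights=w)[0])
    return seq

def cost(seq):
    """Total flow time (sum of completion times)."""
    t, total = 0, 0
    for j in seq:
        t += jobs[j]
        total += t
    return total

best, best_cost = None, float("inf")
for it in range(200):
    for a in (build_sequence() for _ in range(10)):
        if cost(a) < best_cost:
            best, best_cost = a, cost(a)
    for k in tau:                               # evaporation
        tau[k] *= 0.9
    for u, v in zip(best, best[1:]):            # reinforce best-so-far path
        tau[(u, v)] += 1.0 / best_cost
print(best, best_cost)
```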
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of (1) dendritic growth simulations performed with different time integrators and (2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
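For a sense of the solver class being benchmarked, here is a minimal explicit Allen-Cahn-type phase field step (far simpler than the benchmark's coupled solidification problem, and not the benchmark specification itself).

```python
# Minimal explicit Allen-Cahn phase-field update on a periodic grid.
import numpy as np

n, dx, dt = 128, 1.0, 0.05
M, kappa = 1.0, 2.0                  # mobility, gradient-energy coefficient
phi = np.random.default_rng(0).random((n, n)) * 0.1
phi[n//2-10:n//2+10, n//2-10:n//2+10] = 1.0   # seed a second-phase region

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for step in range(500):
    # bulk driving force f'(phi) for the double well f = phi^2 (1-phi)^2
    dfdphi = 2 * phi * (1 - phi) * (1 - 2 * phi)
    phi += dt * M * (kappa * laplacian(phi) - dfdphi)
```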
Simulating the evolution of glyphosate resistance in grains farming in northern Australia
Thornby, David F.; Walker, Steve R.
2009-01-01
Background and Aims The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of practices to control the resistant population once it evolves and/or preventative measures instituted before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability of the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. Methods The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions added to the crop model to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of resistance evolution under conditions expected to produce high selection pressure. Key Results Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approximately 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. Conclusions This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies. PMID:19567415
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models of the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can easily be extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
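The cooperation mechanism, as distinct from the eSS metaheuristic itself, can be shown in miniature: parallel searchers periodically publish to, and adopt from, a lock-protected best-so-far store. A schematic sketch (simple hill climbers stand in for eSS):

```python
# Cooperative parallel search in miniature: threads share a global best.
import threading, random, math

shared = {"x": None, "f": float("inf")}
lock = threading.Lock()

def rastrigin(x):
    return 10 * len(x) + sum(v*v - 10*math.cos(2*math.pi*v) for v in x)

def searcher(seed, iters=20000, share_every=500):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(5)]
    fx = rastrigin(x)
    for i in range(iters):
        cand = [v + rng.gauss(0, 0.3) for v in x]
        fc = rastrigin(cand)
        if fc < fx:
            x, fx = cand, fc
        if i % share_every == 0:
            with lock:                        # the cooperation step
                if fx < shared["f"]:
                    shared["x"], shared["f"] = x[:], fx
                elif shared["f"] < fx:        # adopt the global best
                    x, fx = shared["x"][:], shared["f"]

threads = [threading.Thread(target=searcher, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["f"])
```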
Differential Associations of UPPS-P Impulsivity Traits With Alcohol Problems.
McCarty, Kayleigh N; Morris, David H; Hatz, Laura E; McCarthy, Denis M
2017-07-01
The UPPS-P model posits that impulsivity comprises five factors: positive urgency, negative urgency, lack of planning, lack of perseverance, and sensation seeking. Negative and positive urgency are the traits most consistently associated with alcohol problems. However, previous work has examined alcohol problems either individually or in the aggregate, rather than examining multiple problem domains simultaneously. Recent work has also questioned the utility of distinguishing between positive and negative urgency, as this distinction did not meaningfully differ in predicting domains of psychopathology. The aims of this study were to address these issues by (a) testing unique associations of UPPS-P traits with specific domains of alcohol problems and (b) determining the utility of distinguishing between positive and negative urgency as risk factors for specific alcohol problems. Associations between UPPS-P traits and alcohol problem domains were examined in two cross-sectional data sets using negative binomial regression models. In both samples, negative urgency was associated with social/interpersonal, self-perception, risky behaviors, and blackout drinking problems. Positive urgency was associated with academic/occupational and physiological dependence problems. Both urgency traits were associated with impaired control and self-care problems. Associations for other UPPS-P traits did not replicate across samples. Results indicate that negative and positive urgency have differential associations with alcohol problem domains. Results also suggest a distinction between the types of alcohol problems associated with these traits: negative urgency was associated with problems experienced during a drinking episode, whereas positive urgency was associated with alcohol problems that result from longer-term drinking trends.
How financial hardship is associated with the onset of mental health problems over time.
Kiely, Kim M; Leach, Liana S; Olesen, Sarah C; Butterworth, Peter
2015-06-01
Poor mental health has been consistently linked with the experience of financial hardship and poverty. However, the temporal association between these factors must be clarified before hardship alleviation can be considered as an effective mental health promotion and prevention strategy. We examined whether the longitudinal associations between financial hardship and mental health problems are best explained by an individual's current or prior experience of hardship, or their underlying vulnerability. We analysed nine waves (years: 2001-2010) of nationally representative panel data from the Household, Income, and Labour Dynamics in Australia survey (n = 11,134). Two components of financial hardship (deprivation and cash-flow problems) and income poverty were coded into time-varying and time-invariant variables reflecting the contemporaneous experience of hardship (i.e., current), the prior experience of hardship (lagged/12 months), and any experience of hardship during the study period (vulnerability). Multilevel, mixed-effect logistic regression models tested the associations between these measures and mental health. Respondents who reported deprivation and cash-flow problems had greater risk of mental health problems than those who did not. Individuals vulnerable to hardship had greater risk of mental health problems, even at the times they did not report hardship. However, their risk of mental health problems was greater on occasions when they did experience hardship. The results are consistent with the argument that economic and social programmes that address and prevent hardship may promote community mental health.
Within Your Control? When Problem Solving May Be Most Helpful.
Sarfan, Laurel D; Gooch, Peter; Clerkin, Elise M
2017-08-01
Emotion regulation strategies have been conceptualized as adaptive or maladaptive, but recent evidence suggests emotion regulation outcomes may be context-dependent. The present study tested whether the adaptiveness of a putatively adaptive emotion regulation strategy, problem solving, varied across contexts of high and low controllability. The present study also tested rumination, suggested to be one of the most putatively maladaptive strategies, which was expected to be associated with negative outcomes regardless of context. Participants completed an in vivo speech task, in which they were randomly assigned to a controllable (n = 65) or an uncontrollable (n = 63) condition. Using moderation analyses, we tested whether controllability interacted with emotion regulation use to predict negative affect, avoidance, and perception of performance. Partially consistent with hypotheses, problem solving was associated with certain positive outcomes (i.e., reduced behavioral avoidance) in the controllable (vs. uncontrollable) condition. Consistent with predictions, rumination was associated with negative outcomes (i.e., desired avoidance, negative affect, negative perception of performance) in both conditions. Overall, findings partially support contextual models of emotion regulation, insofar as the data suggest that the effects of problem solving may be more adaptive in controllable contexts for certain outcomes, whereas rumination may be maladaptive regardless of context.
NASA Astrophysics Data System (ADS)
Hamed, Haikel Ben; Bennacer, Rachid
2008-08-01
This work evaluates, algebraically and numerically, the influence of a perturbation on the spectral values of a diagonalizable matrix. Two approaches are possible: the first uses the perturbation theorem for a matrix depending on a parameter, due to Lidskii, and is based primarily on the Jordan structure of the unperturbed matrix; the second factorizes the matrix system and then computes numerically the roots of the characteristic polynomial of the perturbed matrix. This problem can serve as a standard model in the equations of continuum mechanics. In this work we chose the second approach, and to illustrate the application we chose the Rayleigh-Bénard problem in a Darcy medium, perturbed by a filtration through-flow. The matrix form of the problem is computed from a linear stability analysis by a finite element method. We show that it is possible to decompose the general phenomenon into elementary ones, described respectively by a perturbed matrix and a perturbation. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).
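The second approach amounts to tracking the spectrum numerically as the perturbation grows. A generic sketch (the paper's matrices come from a finite element discretization, not the toy matrix below):

```python
# Track how the eigenvalues of A move under a growing perturbation eps*E.
import numpy as np

A = np.diag([1.0, 2.0, 3.0])          # unperturbed diagonalizable matrix
E = np.array([[0.0, 0.1, 0.0],        # perturbation direction
              [0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0]])

for eps in [0.0, 0.01, 0.1, 0.5]:
    vals = np.linalg.eigvals(A + eps * E)
    print(f"eps={eps:4}: ", np.sort_complex(vals))
```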
SCOUT: simultaneous time segmentation and community detection in dynamic networks
Hulovatyy, Yuriy; Milenković, Tijana
2016-01-01
Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879
ERIC Educational Resources Information Center
Hagermoser Sanetti, Lisa M.; Williamson, Kathleen M.; Long, Anna C. J.; Kratochwill, Thomas R.
2018-01-01
Numerous evidence-based classroom management strategies to prevent and respond to problem behavior have been identified, but research consistently indicates teachers rarely implement them with sufficient implementation fidelity. The purpose of this study was to evaluate the effectiveness of implementation planning, a strategy involving logistical…
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
Principles of Integrative Modelling at Studying of Plasma and Welding Processes
ERIC Educational Resources Information Center
Anakhov, Sergey V.; Perminov, Evgeniy ?.; Dzyubich, Denis K.; Yarushina, Maria A.; Tarasova, Yuliya A.
2016-01-01
The relevance of the research problem is conditioned by the need to introduce modern technologies into the educational process and by the insufficient adaptation of higher school teachers to the information and automation procedures applied in education and science. The purpose of the publication consists in the analysis of automated…
An RCT of an Evidence-Based Practice Teaching Model with the Field Instructor
ERIC Educational Resources Information Center
Tennille, Julie Anne
2013-01-01
Problem: Equipping current and future social work practitioners with skills to deliver evidence-based practice (EBP) has remained an elusive prospect since synchronized efforts with field instructors have not been a consistent part of dissemination and implementation efforts. Recognizing the highly influential position of field instructors, this…
Symmetries of relativistic world lines
NASA Astrophysics Data System (ADS)
Koch, Benjamin; Muñoz, Enrique; Reyes, Ignacio A.
2017-10-01
Symmetries are essential for a consistent formulation of many quantum systems. In this paper we discuss a fundamental symmetry, which is present for any Lagrangian term that involves ẋ². As a basic model that incorporates the fundamental symmetries of quantum gravity and string theory, we consider the Lagrangian action of the relativistic point particle. A path integral quantization for this seemingly simple system has long presented notorious problems. Here we show that those problems are overcome by taking into account the additional symmetry, leading directly to the exact Klein-Gordon propagator.
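For orientation, the standard textbook forms involved (conventional expressions, not quoted from the paper): the point-particle action whose ẋ² structure carries the symmetry, and the Klein-Gordon propagator that a consistent quantization should reproduce.

```latex
% Relativistic point-particle action and Klein-Gordon propagator
% (textbook conventions; metric signature and normalization may differ
% from the paper's).
S[x] = -m \int d\tau \,\sqrt{\dot{x}^{\mu}\dot{x}_{\mu}},
\qquad
G(p) = \frac{i}{p^{2} - m^{2} + i\epsilon}.
```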
NASA Astrophysics Data System (ADS)
2018-05-01
Eigenvalues and eigenvectors, together, constitute the eigenstructure of the system. The design of vibrating systems aimed at satisfying specifications on eigenvalues and eigenvectors, which is commonly known as eigenstructure assignment, has drawn increasing interest over the recent years. The most natural mathematical framework for such problems is constituted by the inverse eigenproblems, which consist in the determination of the system model that features a desired set of eigenvalues and eigenvectors. Although such a problem is intrinsically challenging, several solutions have been proposed in the literature. The approaches to eigenstructure assignment can be basically divided into passive control and active control.
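In its most elementary, unconstrained form, the inverse eigenproblem is solved by construction: assemble the system matrix from the desired eigenpairs. The sketch below shows only this core idea; practical eigenstructure assignment adds structural constraints (symmetry, sparsity, physical realizability).

```python
# Build a system matrix realizing a desired eigenstructure:
# A = V diag(lambda) V^{-1}, for independent desired eigenvectors V.
import numpy as np

lam = np.array([-1.0, -4.0, -9.0])        # desired eigenvalues
V = np.array([[1.0,  1.0,  1.0],          # desired eigenvectors (columns)
              [1.0,  0.0, -1.0],
              [1.0, -1.0,  1.0]])

A = V @ np.diag(lam) @ np.linalg.inv(V)
w = np.linalg.eigvals(A)
print(np.sort(w.real))                    # recovers the prescribed spectrum
```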
Location-allocation models and new solution methodologies in telecommunication networks
NASA Astrophysics Data System (ADS)
Dinu, S.; Ciucur, V.
2016-08-01
When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation, and routing. These are expressed by the following design considerations: how many interconnection devices (consolidation points/concentrators) should be used and where they should be located; how terminal nodes should be allocated to concentrators; and how voice, video, or data traffic should be routed and what transmission links (capacitated or not) should be built into the network. Including these three decision components in a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is sequential: the first step deals with the location-allocation problem, and based on this solution the subsequent sub-problem (routing the network traffic) is solved. The issue of location and allocation in a telecommunications network, called "the capacitated concentrator location-allocation (CCLA) problem", is based on one of the general location models on a network, in which the clients/demand nodes are the terminals and the facilities are the concentrators. As in a location model, each client node has a traffic demand which must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost, consisting of fixed costs for establishing the concentrator locations, costs for operating concentrators, and costs for allocating terminals to concentrators. The problem is known as a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a Fuzzy Genetic Algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables. To confirm the efficiency of the proposed algorithm with respect to solution quality, test problems of significant size were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.
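A skeleton of the genetic-algorithm side of such a hybrid is sketched below (toy data; the paper's fuzzy components and local search procedure are omitted).

```python
# GA skeleton for single-source capacitated location-allocation.
import random

T, C = 12, 3                          # terminals, candidate concentrators
rng = random.Random(1)
demand = [rng.randint(1, 5) for _ in range(T)]
cap = [20, 15, 15]                    # concentrator capacities
fixed = [30, 25, 25]                  # cost of opening each concentrator
link = [[rng.randint(1, 9) for _ in range(C)] for _ in range(T)]

def cost(alloc):
    """alloc[t] = concentrator serving terminal t (single-source)."""
    used = set(alloc)
    total = sum(fixed[c] for c in used)
    total += sum(link[t][alloc[t]] for t in range(T))
    for c in used:                    # penalize capacity violations
        load = sum(demand[t] for t in range(T) if alloc[t] == c)
        total += 100 * max(0, load - cap[c])
    return total

pop = [[rng.randrange(C) for _ in range(T)] for _ in range(40)]
for gen in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    children = []
    while len(children) < 30:
        a, b = rng.sample(elite, 2)
        cut = rng.randrange(1, T)
        child = a[:cut] + b[cut:]             # one-point crossover
        if rng.random() < 0.3:                # mutation: reassign a terminal
            child[rng.randrange(T)] = rng.randrange(C)
        children.append(child)
    pop = elite + children
print(cost(min(pop, key=cost)))
```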
Davies, Patrick T.; Cicchetti, Dante; Martin, Meredith J.
2012-01-01
This study examined specific forms of emotional reactivity to conflict and temperamental emotionality as explanatory mechanisms in pathways among interparental aggression and child psychological problems. Participants of the multi-method, longitudinal study included 201 two-year-old children and their mothers who had experienced elevated violence in the home. Consistent with emotional security theory, autoregressive structural equation model analyses indicated that children’s fearful reactivity to conflict was the only consistent mediator in the associations among interparental aggression and their internalizing and externalizing symptoms one year later. Pathways remained significant across maternal and observer ratings of children’s symptoms and with the inclusion of other predictors and mediators, including children’s sad and angry forms of reactivity to conflict, temperamental emotionality, gender, and socioeconomic status. PMID:22716918
Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite
NASA Astrophysics Data System (ADS)
Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin
2017-09-01
State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties; they differ significantly from traditional all-chemical satellites in orbit-raising, station-keeping, radiation damage protection, and power budget, etc. The design optimization task for an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem faces big challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Considerable effort is then spent on multidisciplinary modeling involving the geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and the finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of the proposed framework in coping with all-electric GEO satellite system design optimization problems. The framework can also provide valuable references for other all-electric spacecraft system designs.
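The adaptive response surface idea reduces to a refine-as-you-go loop. Below is a minimal sketch with a cheap analytic stand-in for the expensive multidisciplinary analysis.

```python
# Adaptive response-surface optimization in miniature.
import numpy as np
from scipy.optimize import minimize

def expensive_mda(x):
    """Stand-in for the true multidisciplinary analysis (mass proxy)."""
    return (x[0] - 1.2)**2 + 3 * (x[1] + 0.4)**2 + 0.1 * np.sin(5 * x[0])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(8, 2))       # initial design of experiments
y = np.array([expensive_mda(x) for x in X])

def features(x):                          # quadratic response-surface basis
    x1, x2 = x
    return np.array([1, x1, x2, x1 * x2, x1**2, x2**2])

for it in range(10):
    A = np.array([features(x) for x in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # fit the surrogate
    surrogate = lambda x: features(x) @ coef
    res = minimize(surrogate, X[np.argmin(y)], bounds=[(-2, 2)] * 2)
    X = np.vstack([X, res.x])             # refine near the surrogate optimum
    y = np.append(y, expensive_mda(res.x))

print("best design:", X[np.argmin(y)], "objective:", y.min())
```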
A constitutive law for finite element contact problems with unclassical friction
NASA Technical Reports Server (NTRS)
Plesha, M. E.; Steinetz, B. M.
1986-01-01
Techniques for modeling complex, unclassical contact-friction problems arising in solid and structural mechanics are discussed. A constitutive modeling concept is employed whereby analytic relations between increments of contact surface stress (i.e., traction) and contact surface deformation (i.e., relative displacement) are developed. Because of the incremental form of these relations, they are valid for arbitrary load-deformation histories. The motivation for the development of such a constitutive law is that more realistic friction idealizations can be implemented in finite element analysis software in a consistent, straightforward manner. Of particular interest is modeling of two-body (i.e., unlubricated) metal-metal, ceramic-ceramic, and metal-ceramic contact. Interfaces involving ceramics are of engineering importance and are being considered for advanced turbine engines in which higher temperature materials offer potential for higher engine fuel efficiency.
Smith, Dale L; Gozal, David; Hunter, Scott J; Kheirandish-Gozal, Leila
2017-01-01
Numerous studies over the past several decades have illustrated that children who suffer from sleep-disordered breathing (SDB) are at greater risk for cognitive, behavioral, and psychiatric problems. Although behavioral problems have been proposed as a potential mediator between SDB and cognitive functioning, these relationships have not been critically examined. This analysis is based on a community-based cohort of 1,115 children who underwent overnight polysomnography and cognitive and behavioral phenotyping. A structural model of the relationships between SDB, behavior, and cognition, together with two recently developed mediation approaches based on propensity score weighting and resampling, was used to assess the mediational role of parent-reported behavioral and psychiatric problems in the relationship between SDB and cognitive functioning. Multiple models utilizing two different SDB definitions further explored direct effects of SDB on cognition as well as indirect effects through behavioral pathology. All models were adjusted for age, sex, race, BMI z-score, and asthma status. Indirect effects of SDB through behavior problems were significant in all mediation models, while direct effects of SDB on cognition were not. The findings were consistent across different mediation procedures and remained essentially unaltered when different criteria for SDB, behavior, and cognition were used. Potential effects of SDB on cognitive functioning appear to occur through behavioral problems that are detectable in this pediatric population. Thus, early attentional or behavioral pathology may be implicated in the cognitive functioning deficits associated with SDB, and may present an early morbidity-related susceptibility biomarker.
De Clercq, Etienne
2008-09-01
It is widely accepted that the development of electronic patient records, or even of a common electronic patient record, is one possible way to improve cooperation and data communication between nurses and physicians. Yet, little has been done so far to develop a common conceptual model for both medical and nursing patient records, which is a first challenge that should be met to set up a common electronic patient record. In this paper, we describe a problem-oriented conceptual model and we show how it may suit both nursing and medical perspectives in a hospital setting. We started from existing nursing theory and from an initial model previously set up for primary care. In a hospital pilot site, a multi-disciplinary team refined this model using one large and complex clinical case (retrospective study) and nine ongoing cases (prospective study). An internal validation was performed through hospital-wide multi-professional interviews and through discussions around a graphical user interface prototype. To assess the consistency of the model, a computer engineer specified it. Finally, a Belgian expert working group performed an external assessment of the model. As a basis for a common patient record we propose a simple problem-oriented conceptual model with two levels of meta-information. The model is mapped with current nursing theories and it includes the following concepts: "health care element", "health approach", "health agent", "contact", "subcontact" and "service". These concepts, their interrelationships and some practical rules for using the model are illustrated in this paper. Our results are compatible with ongoing standardization work at the Belgian and European levels. Our conceptual model is potentially a foundation for a multi-professional electronic patient record that is problem-oriented and therefore patient-centred.
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
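A toy TCP dispatcher illustrating the manager/executor split (schematic only; GENIE's actual wire protocol, file distribution, and PEST integration are not reproduced):

```python
# Minimal TCP run queue: a manager hands runs to workers over sockets.
import socket, threading, queue, time

runs = queue.Queue()
for params in ["p=1", "p=2", "p=3"]:          # stand-ins for model-run inputs
    runs.put(params)

def manager(host="127.0.0.1", port=5055):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while not runs.empty():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(runs.get().encode())  # hand one run to a worker
            print("manager got:", conn.recv(1024).decode())
    srv.close()

def worker(host="127.0.0.1", port=5055):
    with socket.socket() as s:
        s.connect((host, port))
        job = s.recv(1024).decode()
        s.sendall(f"done({job})".encode())     # pretend to execute the run

t = threading.Thread(target=manager)
t.start()
time.sleep(0.2)                                # let the manager start listening
for _ in range(3):
    threading.Thread(target=worker).start()
t.join()
```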
Ensembles vs. information theory: supporting science under uncertainty
NASA Astrophysics Data System (ADS)
Nearing, Grey S.; Gupta, Hoshin V.
2018-05-01
Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.
Perspective: Stochastic magnetic devices for cognitive computing
NASA Astrophysics Data System (ADS)
Roy, Kaushik; Sengupta, Abhronil; Shim, Yong
2018-06-01
Stochastic switching of nanomagnets can potentially enable probabilistic cognitive hardware consisting of noisy neural and synaptic components. Furthermore, computational paradigms inspired by the Ising computing model require stochasticity for achieving near-optimality in solutions to various types of combinatorial optimization problems, such as the Graph Coloring Problem or the Travelling Salesman Problem. Achieving optimal solutions in such problems is computationally exhaustive and requires natural annealing to arrive at near-optimal solutions. Stochastic switching of devices also finds use in applications involving Deep Belief Networks and Bayesian inference. In this article, we provide a multi-disciplinary perspective across the stack of devices, circuits, and algorithms to illustrate how the stochastic switching dynamics of spintronic devices in the presence of thermal noise can provide a direct mapping to the computational units of such probabilistic intelligent systems.
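In software, the role of the thermally noisy device maps onto a stochastic (Metropolis-style) spin update. A toy annealed Ising solve is sketched below (an illustrative analogue, not a device model).

```python
# Annealed stochastic spin updates on a small frustrated Ising problem.
import math, random

# Antiferromagnetic couplings (J < 0) encourage neighboring "spins" to
# disagree; the triangle 0-1-2 is frustrated, so the ground state is
# a nontrivial optimum. Energy convention: E = -sum J_ij s_i s_j.
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0, (2, 3): -1.0}
n, T = 4, 2.0
s = [random.choice([-1, 1]) for _ in range(n)]

def local_field(i):
    h = 0.0
    for (a, b), w in J.items():
        if a == i:
            h += w * s[b]
        elif b == i:
            h += w * s[a]
    return h

for step in range(5000):
    i = random.randrange(n)
    dE = 2 * s[i] * local_field(i)          # energy change if spin i flips
    if dE < 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]                        # stochastic (thermal) switching
    T *= 0.999                              # anneal toward the optimum
print(s)
```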
NASA Astrophysics Data System (ADS)
Charles, Alexandre; Ballard, Patrick
2016-08-01
The dynamics of mechanical systems with a finite number of degrees of freedom (discrete mechanical systems) is governed by the Lagrange equation, which is a second-order differential equation on a Riemannian manifold (the configuration manifold). The handling of perfect (frictionless) unilateral constraints in this framework (that of Lagrange's analytical dynamics) was undertaken by Schatzman and Moreau at the beginning of the 1980s. A mathematically sound and consistent evolution problem was obtained, paving the road for many subsequent theoretical investigations. In this general evolution problem, the only reaction force involved is a generalized reaction force, consistently with the virtual power philosophy of Lagrange. Surprisingly, such a general formulation was never derived in the case of frictional unilateral multibody dynamics. Instead, the paradigm of the Coulomb law applying to reaction forces in the real world is generally invoked. So far, this paradigm has enabled a consistent evolution problem to be obtained in only a very few specific examples, and has suggested numerical algorithms to produce computational examples (numerical modeling). In particular, it is not clear what evolution problem underlies the computational examples. Moreover, some of the few specific cases in which this paradigm enables a precise evolution problem to be written down are known to show paradoxes: the Painlevé paradox (indeterminacy) and the Kane paradox (increase in kinetic energy due to friction). In this paper, we follow Lagrange's philosophy and formulate frictional unilateral multibody dynamics in terms of the generalized reaction force and not in terms of the real-world reaction force. A general evolution problem that governs the dynamics is obtained for the first time. We prove that all the solutions are dissipative; that is, this new formulation is free of the Kane paradox. We also prove that some indeterminacy of the Painlevé paradox is fixed in this formulation.
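For orientation, the standard textbook ingredients of Lagrangian unilateral dynamics (conventional forms, not quoted from the paper): the Lagrange equation with a generalized reaction built from the active constraints under Signorini complementarity.

```latex
% Lagrange equation with unilateral constraints g_i(q) >= 0 and a
% generalized reaction r (textbook form; the paper's measure-differential
% setting handles impacts beyond this smooth picture).
M(q)\,\ddot{q} = f(q,\dot{q},t) + r,
\qquad
r = \sum_i \lambda_i \,\nabla g_i(q),
\quad g_i(q) \ge 0,\ \ \lambda_i \ge 0,\ \ \lambda_i\, g_i(q) = 0 .
```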
Splitting algorithm for numerical simulation of Li-ion battery electrochemical processes
NASA Astrophysics Data System (ADS)
Iliev, Oleg; Nikiforova, Marina A.; Semenov, Yuri V.; Zakharov, Petr E.
2017-11-01
In this paper we present a splitting algorithm for the numerical simulation of Li-ion battery electrochemical processes. A Li-ion battery consists of three domains: anode, cathode, and electrolyte. The mathematical model of the electrochemical processes is described on a microscopic scale and contains nonlinear equations for the concentration and potential in each domain. On the interface between the electrodes and the electrolyte, lithium-ion intercalation and deintercalation processes take place, described by the nonlinear Butler-Volmer equation. For the spatial approximation we use finite element methods with discontinuous Galerkin elements. To simplify the numerical simulations we develop a splitting algorithm, which splits the original problem into three independent subproblems. We investigate the numerical convergence of the algorithm on a 2D model problem.
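For reference, the Butler-Volmer interface condition mentioned above is commonly written as follows (standard electrochemistry notation; the exact form and symbols used in the paper may differ):

```latex
j = j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right],
\qquad
\eta = \phi_s - \phi_e - U_{\mathrm{eq}}(c_s)
```

Here j is the interfacial current density, j_0 the exchange current density, α_a and α_c the anodic and cathodic transfer coefficients, F Faraday's constant, R the gas constant, T the temperature, and η the surface overpotential formed from the solid-phase potential φ_s, the electrolyte potential φ_e, and the equilibrium potential U_eq at the surface concentration c_s. The exponential nonlinearity of this interface coupling is what a splitting scheme must handle between subproblems.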
Kwak, Youngbin; Payne, John W; Cohen, Andrew L; Huettel, Scott A
2015-01-01
Adolescence is often viewed as a time of irrational, risky decision-making - despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem, at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics.
Sentse, Miranda; Kretschmer, Tina; de Haan, Amaranta; Prinzie, Peter
2017-08-01
Individual heterogeneity exists in the onset and development of conduct problems, but theoretical claims about predictors and prognosis are often not consistent with the empirical findings. This study examined shape and outcomes of conduct problem trajectories in a Belgian population-based sample (N = 682; 49.5 % boys). Mothers reported on children's conduct problems across six waves (age 4-17) and emerging adults reported on their behavioral adjustment (age 17-20). Applying mixture modeling, we found four gender-invariant trajectories (labeled life-course-persistent, adolescence-onset, childhood-limited, and low). The life-course-persistent group was least favorably adjusted, but the adolescence-onset group was similarly maladjusted in externalizing problems and may be less normative (15 % of the sample) than previously believed. The childhood-limited group was at heightened risk for specifically internalizing problems, being more worrisome than its label suggests. Interventions should not only be aimed at early detection of conduct problems, but also at adolescents to avoid future maladjustment.
Abdollahi, Abbas; Talib, Mansor Abu; Yaacob, Siti Nor; Ismail, Zanariah
2015-01-01
Recent evidence suggests that suicidal ideation is increasing among university students, so it is essential to improve our knowledge concerning the etiology of suicidal ideation in this population. This study was conducted to examine the relationships between problem-solving skills appraisal, hardiness, and suicidal ideation among university students. In addition, this study examined problem-solving skills appraisal (comprising the three components of problem-solving confidence, approach-avoidance style, and personal control of emotion) as a potential mediator between hardiness and suicidal ideation. The participants consisted of 500 undergraduate students from Malaysian public universities. Structural Equation Modelling (SEM) estimated that lower hardiness, poor problem-solving confidence, external personal control of emotion, and an avoidance style were associated with higher suicidal ideation among undergraduate students. Problem-solving skills appraisal partially mediated the relationship between hardiness and suicidal ideation. These findings underline the importance of studying mediating processes that explain how hardiness affects suicidal ideation.
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to its millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address this computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
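A minimal sketch of the dual-decomposition pattern described above (a generic toy, not the paper's formulation; the subproblem interface and the projected subgradient price update are our assumptions):

```python
def dual_decomposition(subproblems, capacity, iters=200, step=1.0):
    """Toy dual decomposition for a coupling constraint sum_k usage_k <= capacity.
    Each subproblem k, given a price vector lam, independently returns
    (x_k, usage_k) minimizing its own cost plus lam . usage_k; the prices
    are then updated by projected subgradient ascent on the dual."""
    m = len(capacity)
    lam = [0.0] * m
    for t in range(iters):
        solutions = [solve(lam) for solve in subproblems]  # independent -> parallelizable
        total = [sum(usage[r] for _, usage in solutions) for r in range(m)]
        # Dual subgradient = constraint violation; project prices onto lam >= 0
        lam = [max(0.0, lam[r] + (step / (1 + t)) * (total[r] - capacity[r]))
               for r in range(m)]
    return lam
```

Because every solve(lam) call is independent given the prices, the inner loop is exactly the part the paper parallelizes, first with multithreading and then on Hadoop MapReduce.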
Wakeling, Helen C
2007-09-01
This study examined the reliability and validity of the Social Problem-Solving Inventory--Revised (SPSI-R; D'Zurilla, Nezu, & Maydeu-Olivares, 2002) with a population of incarcerated sexual offenders. An availability sample of 499 adult male sexual offenders was used. The SPSI-R had good reliability measured by internal consistency and test-retest reliability, and adequate validity. Construct validity was determined via factor analysis. An exploratory factor analysis extracted a two-factor model. This model was then tested against the theory-driven five-factor model using confirmatory factor analysis. The five-factor model was selected as the better fitting of the two, and confirmed the model according to social problem-solving theory (D'Zurilla & Nezu, 1982). The SPSI-R had good convergent validity; significant correlations were found between SPSI-R subscales and measures of self-esteem, impulsivity, and locus of control. SPSI-R subscales were however found to significantly correlate with a measure of socially desirable responding. This finding is discussed in relation to recent research suggesting that impression management may not invalidate self-report measures (e.g. Mills & Kroner, 2005). The SPSI-R was sensitive to sexual offender intervention, with problem-solving improving pre to post-treatment in both rapists and child molesters. The study concludes that the SPSI-R is a reasonably internally valid and appropriate tool to assess problem-solving in sexual offenders. However future research should cross-validate the SPSI-R with other behavioural outcomes to examine the external validity of the measure. Furthermore, future research should utilise a control group to determine treatment impact.
A Conditional Curie-Weiss Model for Stylized Multi-group Binary Choice with Social Interaction
NASA Astrophysics Data System (ADS)
Opoku, Alex Akwasi; Edusei, Kwame Owusu; Ansah, Richard Kwame
2018-04-01
This paper proposes a conditional Curie-Weiss model as a model for decision making in a stylized society made up of binary decision makers that face a particular dichotomous choice between two options. Following Brock and Durlauf (Discrete choice with social interaction I: theory, 1995), we set up both socio-economic and statistical mechanical models for the choice problem. We point out when both the socio-economic and statistical mechanical models give rise to the same self-consistent equilibrium mean choice level(s). The phase diagram of the associated statistical mechanical model and its socio-economic implications are discussed.
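In Curie-Weiss (mean-field) models of this kind, equilibrium mean choice levels are typically characterized by a self-consistency equation; a generic textbook form (standard notation, not taken from the paper itself) is

```latex
m = \tanh\!\big(\beta\,(J m + h)\big)
```

where m is the mean choice level, J the strength of the social interaction, h a private-utility bias toward one of the two options, and β an inverse noise (rationality) parameter. For βJ > 1 and small h this fixed-point equation admits multiple solutions, which is what produces multiple self-consistent equilibrium mean choice levels and the phase diagram discussed in the paper.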
Saha, Tulshi D; Compton, Wilson M; Chou, S Patricia; Smith, Sharon; Ruan, W June; Huang, Boji; Pickering, Roger P; Grant, Bridget F
2012-04-01
Prior research has demonstrated the dimensionality of alcohol, nicotine and cannabis use disorders criteria. The purpose of this study was to examine the unidimensionality of DSM-IV cocaine, amphetamine and prescription drug abuse and dependence criteria and to determine the impact of elimination of the legal problems criterion on the information value of the aggregate criteria. Factor analyses and Item Response Theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of the illicit drug use criteria using a large representative sample of the U.S. population. All illicit drug abuse and dependence criteria formed unidimensional latent traits. For amphetamines, cocaine, sedatives, tranquilizers and opioids, IRT models fit better for models without legal problems criterion than models with legal problems criterion and there were no differences in the information value of the IRT models with and without the legal problems criterion, supporting the elimination of that criterion. Consistent with findings for alcohol, nicotine and cannabis, amphetamine, cocaine, sedative, tranquilizer and opioid abuse and dependence criteria reflect underlying unitary dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the American Psychiatric Association's DSM-5 Substance and Related Disorders Workgroup. Published by Elsevier Ireland Ltd.
Diagrams benefit symbolic problem-solving.
Chu, Junyi; Rittle-Johnson, Bethany; Fyfe, Emily R
2017-06-01
The format of a mathematics problem often influences students' problem-solving performance. For example, providing diagrams in conjunction with story problems can benefit students' understanding, choice of strategy, and accuracy on story problems. However, it remains unclear whether providing diagrams in conjunction with symbolic equations can benefit problem-solving performance as well. We tested the impact of diagram presence on students' performance on algebra equation problems to determine whether diagrams increase problem-solving success. We also examined the influence of item- and student-level factors to test the robustness of the diagram effect. We worked with 61 seventh-grade students who had received 2 months of pre-algebra instruction. Students participated in an experimenter-led classroom session. Using a within-subjects design, students solved algebra problems in two matched formats (equation and equation-with-diagram). The presence of diagrams increased equation-solving accuracy and the use of informal strategies. This diagram benefit was independent of student ability and item complexity. The benefits of diagrams found previously for story problems generalized to symbolic problems. The findings are consistent with cognitive models of problem-solving and suggest that diagrams may be a useful additional representation of symbolic problems. © 2017 The British Psychological Society.
Voegtlin, T; Verschure, P F
1999-01-01
This paper argues for the development of synthetic approaches towards the study of brain and behavior as a complement to the more traditional empirical mode of research. As an example we present our own work on learning and problem solving, which relates to the behavioral paradigms of classical and operant conditioning. We define the concept of learning in the context of behavior and lay out the basic methodological requirements a model needs to satisfy, which include evaluations using robots. In addition, we define a number of design principles neuronal models should obey to be considered relevant. We present in detail the construction of a neural model of short- and long-term memory which can be applied to an artificial behaving system. The presented model (DAC4) provides a novel self-consistent implementation of these processes, which satisfies our principles. This model is interpreted in light of the present understanding of the neuronal substrate of memory.
Analysis of an operator-differential model for magnetostrictive energy harvesting
NASA Astrophysics Data System (ADS)
Davino, D.; Krejčí, P.; Pimenov, A.; Rachinskii, D.; Visone, C.
2016-10-01
We present a model of, and an analysis of an optimization problem for, a magnetostrictive harvesting device which converts the mechanical energy of a repetitive process, such as vibrations, acting on the smart material into electrical energy that is then supplied to an electric load. The model combines a lumped differential equation for a simple electronic circuit with an operator model for the complex constitutive law of the magnetostrictive material. The operator, based on the formalism of the phenomenological Preisach model, describes nonlinear saturation effects and hysteresis losses typical of magnetostrictive materials in a thermodynamically consistent fashion. We prove well-posedness of the full operator-differential system and establish global asymptotic stability of the periodic regime under periodic mechanical forcing that represents mechanical vibrations due to varying environmental conditions. Then we show the existence of an optimal solution for the problem of maximization of the output power with respect to a set of controllable parameters (for the periodically forced system). Analytical results are illustrated with numerical examples of an optimal solution.
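A minimal numerical sketch of the Preisach construction mentioned above (a uniform-density discrete approximation on a normalized input range; the grid, thresholds, and normalization are illustrative assumptions, not the paper's calibrated operator):

```python
import numpy as np

def preisach_output(u_history, levels=50, weight=None):
    """Discrete Preisach model: a triangular grid of relay hysterons with
    switch-up threshold a and switch-down threshold b (a >= b) on [-1, 1].
    Each relay holds its +/-1 state until the input crosses a threshold;
    the output is the (weighted) average of the relay states."""
    th = np.linspace(-1.0, 1.0, levels)
    A, B = np.meshgrid(th, th, indexing="ij")  # A: up thresholds, B: down thresholds
    mask = A >= B                              # Preisach half-plane a >= b
    w = np.ones_like(A) if weight is None else weight
    state = -np.ones_like(A)                   # start from negative saturation
    out = []
    for u in u_history:
        state[(u >= A) & mask] = 1.0           # relays switched up
        state[(u <= B) & mask] = -1.0          # relays switched down
        out.append(float((w * state)[mask].sum() / w[mask].sum()))
    return out
```

Sweeping u_history up and down traces the familiar hysteresis loops; identifying a measured weight density over the (a, b) triangle is what tailors such an operator to a particular magnetostrictive material.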
Calculation of flow about posts and powerhead model. [space shuttle main engine
NASA Technical Reports Server (NTRS)
Anderson, P. G.; Farmer, R. C.
1985-01-01
A three-dimensional analysis of the non-uniform flow around the liquid oxygen (LOX) posts in the Space Shuttle Main Engine (SSME) powerhead was performed to determine possible factors contributing to the failure of the posts. Also performed was a three-dimensional numerical fluid flow analysis of the high pressure fuel turbopump (HPFTP) exhaust system, consisting of the turnaround duct (TAD), the two-duct hot gas manifold (HGM), and the Version B transfer ducts. The analysis was conducted in the following manner: (1) modeling the flow around a single post and small clusters (2 to 10) of posts; (2) modeling the velocity field in the cross plane; and (3) modeling the entire flow region with a three-dimensional network type model. Shear stress functions which permit viscous analysis without requiring excessive numbers of computational grid points were developed. These wall functions, laminar and turbulent, have been compared to standard Blasius solutions and are directly applicable to the cylinder-in-cross-flow class of problems to which the LOX post problem belongs.
von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw
2012-10-01
To present the results from the test site application of the documentation model KPO (quality assurance, problem solving and caring) designed to impact the quality of nursing information in electronic patient record (EPR). The KPO model was developed by means of consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to impact the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at a one-year interval. Content analysis of nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.
State-of-charge estimation in lithium-ion batteries: A particle filter approach
NASA Astrophysics Data System (ADS)
Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.
2016-11-01
The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
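For concreteness, a generic bootstrap particle filter for a scalar state is sketched below (the paper's algorithm is more structured: it sweeps time and spatial coordinates independently and introduces a 'tether' particle, neither of which is reproduced in this toy):

```python
import numpy as np

def bootstrap_pf(y, f, h, q_std, r_std, n_particles=500, x0_std=1.0, seed=0):
    """Generic bootstrap particle filter: propagate particles through the
    state transition f, weight them by the Gaussian likelihood of each
    measurement under observation model h, estimate, then resample."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, x0_std, n_particles)              # initial particle cloud
    estimates = []
    for yk in y:
        x = f(x) + rng.normal(0.0, q_std, n_particles)    # predict
        w = np.exp(-0.5 * ((yk - h(x)) / r_std) ** 2)     # weight by likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))            # posterior-mean estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return estimates
```

In the state-of-charge setting, f would encode the discretized P2D dynamics and h the voltage measurement map, both far higher-dimensional than this scalar toy, which is precisely why the degeneracy-avoiding structure exploited by the paper matters.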
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a property not established in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
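In common notation (assumed here, since the abstract does not spell it out), the tensor normal model with Kronecker-structured covariance for a K-way tensor reads

```latex
\operatorname{vec}(\mathcal{T}) \sim \mathcal{N}\!\big(\mathbf{0},\ \Sigma_K \otimes \cdots \otimes \Sigma_2 \otimes \Sigma_1\big),
\qquad
\Omega_k = \Sigma_k^{-1}, \quad k = 1, \dots, K
```

so that one sparse precision matrix Ω_k is associated with each way of the tensor, and the alternating minimization updates one penalized Ω_k at a time with the remaining K-1 held fixed.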
A large eddy simulation scheme for turbulent reacting flows
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given the fact that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation for the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
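Schematically, in commonly used filtered-density-function notation (an assumption on our part; the report's own symbols may differ), the role of such a PDF is to close the filtered source term exactly:

```latex
\overline{S}(\mathbf{x}, t) = \int S(\psi)\, P_L(\psi;\, \mathbf{x}, t)\, d\psi,
\qquad
P_L(\psi;\, \mathbf{x}, t) = \int \delta\big(\psi - \phi(\mathbf{x}', t)\big)\, G(\mathbf{x}' - \mathbf{x})\, d\mathbf{x}'
```

Here φ is the composition (scalar) field, ψ its sample-space variable, G the LES filter kernel, and S the chemical source term; because S enters linearly under the integral, no closure assumption on the chemistry itself is needed once P_L is available.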
Ground-state energies and charge radii of medium-mass nuclei in the unitary-model-operator approach
NASA Astrophysics Data System (ADS)
Miyagi, Takayuki; Abe, Takashi; Okamoto, Ryoji; Otsuka, Takaharu
2014-09-01
In nuclear structure theory, one of the most fundamental problems is to understand nuclear structure based on nuclear forces. This attempt has been enabled by the progress of computational power and nuclear many-body approaches. However, it is difficult to apply first-principle methods to the medium-mass region, because calculations demand a huge model space as the number of nucleons increases. The unitary-model-operator approach (UMOA) is one of the methods which can be applied to medium-mass nuclei. The essential point of the UMOA is to construct the effective Hamiltonian which does not induce two-particle-two-hole excitations. A many-body problem is reduced to the two-body subsystem problem in an entire many-body system, with the two-body effective interaction and one-body potential determined self-consistently. In this presentation, we will report numerical results for the ground-state energies and charge radii of 16O, 40Ca, and 56Ni in the UMOA, and discuss the saturation property by comparing our results with those of other many-body methods and also experimental data. The numerical calculations were performed on the NEC SX8R at RCNP, Osaka University. This work was supported in part by MEXT SPIRE and JICFuS. It was also supported in part by the Program for Leading Graduate Schools, MEXT, Japan.
Global D-brane models with stabilised moduli and light axions
NASA Astrophysics Data System (ADS)
Cicoli, Michele
2014-03-01
We review recent attempts to try to combine global issues of string compactifications, like moduli stabilisation, with local issues, like semi-realistic D-brane constructions. We list the main problems encountered, and outline a possible solution which allows globally consistent embeddings of chiral models. We also argue that this stabilisation mechanism leads to an axiverse. We finally illustrate our general claims in a concrete example where the Calabi-Yau manifold is explicitly described by toric geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, "Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials," Comput. Methods Appl. Mech. Engrg. 306 (2016) 319-341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters and the interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model, and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated at about 43,000 relative to FE2 is achieved by using the SCA method, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids, showing that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
Bouzat, Sebastián
2016-01-01
One-dimensional models coupling a Langevin equation for the cargo position to stochastic stepping dynamics for the motors constitute a relevant framework for analyzing multiple-motor microtubule transport. In this work we explore the consistency of these models, focusing on the effects of thermal noise. We study how to define consistent stepping and detachment rates for the motors as functions of the local forces acting on them, in such a way that the cargo velocity and run-time match previously specified functions of the external load, which are set on the basis of experimental results. We show that due to the influence of thermal fluctuations this is not a trivial problem, even for the single-motor case. As a solution, we propose a motor stepping dynamics which considers memory on the motor force. This model leads to better results for single-motor transport than the approaches previously considered in the literature. Moreover, it gives a much better prediction for the stall force of the two-motor case, highly compatible with the experimental findings. We also analyze the fast fluctuations of the cargo position and the influence of the viscosity, comparing the proposed model to the standard one, and we show how the differences in the single-motor dynamics propagate to the multiple-motor situations. Finally, we find that the one-dimensional character of the models impedes an appropriate description of the fast fluctuations of the cargo position at small loads. We show how this problem can be solved by considering two-dimensional models.
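A minimal Euler-Maruyama sketch of the overdamped cargo Langevin equation used in such models (generic textbook form with a linear motor-cargo spring; the parameter names and specific coupling are illustrative assumptions):

```python
import math
import random

def cargo_step(x, x_motor, dt, gamma, k, kBT):
    """One Euler-Maruyama step of the overdamped Langevin equation
    gamma * dx/dt = -k * (x - x_motor) + sqrt(2 * gamma * kBT) * xi(t),
    where xi is unit white noise: a spring couples the cargo at x to the
    motor at x_motor, and the noise term drives the fast cargo fluctuations."""
    force = -k * (x - x_motor)
    noise = math.sqrt(2.0 * gamma * kBT * dt) * random.gauss(0.0, 1.0)
    return x + (force / gamma) * dt + noise / gamma
```

In a full transport model, this update alternates with stochastic motor stepping and detachment events whose rates depend on the instantaneous spring force, which is exactly where the consistency issues studied in the paper arise.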
Problem-based learning: effects on student’s scientific reasoning skills in science
NASA Astrophysics Data System (ADS)
Wulandari, F. E.; Shofiyah, N.
2018-04-01
This research aimed to develop an instructional package for problem-based learning to enhance students' scientific reasoning from the concrete to the formal reasoning level. The instructional package was developed using the Dick and Carey model. The subject of this study was an instructional package for problem-based learning consisting of a lesson plan, handout, student worksheet, and scientific reasoning test. The instructional package was tried out on 4th semester science education students of Universitas Muhammadiyah Sidoarjo using a one-group pre-test post-test design. Data on scientific reasoning skills were collected by means of the test. The findings showed that the developed instructional package reflecting problem-based learning was feasible to implement in the classroom. Furthermore, through applying problem-based learning, students mastered formal scientific reasoning skills in terms of functionality and proportional reasoning, control of variables, and theoretical reasoning.
A study of fuzzy logic ensemble system performance on face recognition problem
NASA Astrophysics Data System (ADS)
Polyakova, A.; Lipinskiy, L.
2017-02-01
Some problems are difficult to solve by using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination allows increasing the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach emphasizes the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: artificial neural network, support vector machine, and decision trees. These algorithms and their ensemble have been tested by solving the face recognition problem. Principal component analysis (PCA) is used for feature selection.
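As a plain illustration of the ensemble idea (a weighted soft vote standing in for the paper's fuzzy-logic combination rule; the function and argument names are ours):

```python
import numpy as np

def soft_vote(probas, weights=None):
    """Combine per-model class-probability arrays, each of shape
    [n_samples, n_classes], by a weighted average and return the argmax
    class per sample -- a simple stand-in for more elaborate (e.g.,
    fuzzy-logic) combination rules."""
    probas = np.asarray(probas, dtype=float)      # [n_models, n_samples, n_classes]
    if weights is None:
        weights = np.ones(probas.shape[0])
    weights = np.asarray(weights, dtype=float)
    combined = np.tensordot(weights, probas, axes=1) / weights.sum()
    return combined.argmax(axis=1)
```

Each probas[k] could come from the neural network, support vector machine, or decision tree operating on PCA features; a fuzzy-logic combiner replaces the fixed weighted average with membership functions and inference rules over the same per-model outputs.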
Hahn, Austin M; Tirabassi, Christine K; Simons, Raluca M; Simons, Jeffrey S
2015-11-01
This study tested a path model of relationships between military sexual trauma (MST), combat exposure, negative urgency, posttraumatic stress disorder (PTSD) symptoms, and alcohol use and related problems. The sample consisted of 86 Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans who reported drinking at least one alcoholic beverage per week. PTSD mediated the relationships between MST and alcohol-related problems, negative urgency and alcohol-related problems, and combat exposure and alcohol-related problems. In addition, negative urgency had a direct effect on alcohol problems. These results indicate that MST, combat exposure, and negative urgency independently predict PTSD symptoms and PTSD symptoms mediate their relationship with alcohol-related problems. Findings support previous literature on the effect of combat exposure and negative urgency on PTSD and subsequent alcohol-related problems. The current study also contributes to the limited research regarding the relationship between MST, PTSD, and alcohol use and related problems. Clinical interventions aimed at reducing emotional dysregulation and posttraumatic stress symptomology may subsequently improve alcohol-related outcomes. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859
Inflationary magnetogenesis without the strong coupling problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira, Ricardo J.Z.; Jain, Rajeev Kumar; Sloth, Martin S., E-mail: ferreira@cp3.dias.sdu.dk, E-mail: jain@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk
2013-10-01
The simplest gauge invariant models of inflationary magnetogenesis are known to suffer from the problems of either large backreaction or strong coupling, which make it difficult to self-consistently achieve cosmic magnetic fields from inflation with a field strength larger than 10^-32 G today on the Mpc scale. Such a strength is insufficient to act as seed for the galactic dynamo effect, which requires a magnetic field larger than 10^-20 G. In this paper we analyze simple extensions of the minimal model, which avoid both the strong coupling and backreaction problems, in order to generate sufficiently large magnetic fields on the Mpc scale today. First we study the possibility that the coupling function which breaks the conformal invariance of electromagnetism is non-monotonic with sharp features. Subsequently, we consider the effect of lowering the energy scale of inflation jointly with a scenario of prolonged reheating where the universe is dominated by a stiff fluid for a short period after inflation. In the latter case, a systematic study shows upper bounds for the magnetic field strength today on the Mpc scale of 10^-13 G for low scale inflation and 10^-25 G for high scale inflation, thus improving on the previous result by 7-19 orders of magnitude. These results are consistent with the strong coupling and backreaction constraints.
Making objective decisions in mechanical engineering problems
NASA Astrophysics Data System (ADS)
Raicu, A.; Oanta, E.; Sabau, A.
2017-08-01
The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, decision making has been studied using various scientific methods, eventually giving rise to game theory, which is considered the foundation of the science of logical decision making in various fields. The paper presents some basic ideas of game theory in order to offer the information necessary to understand multiple-criteria decision making (MCDM) problems in engineering. The solution is to transform the multiple-criteria problem into a one-criterion decision problem, using the notion of utility together with the weighted sum model or the weighted product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. The future directions of research consist of the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and the design of a software module based on expert-system principles to be included in the Wiki software applications for polymeric materials that are already operational.
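A minimal sketch of the two aggregation rules named above (generic textbook forms; the normalization scheme and function names are our assumptions, and the Step-method weighting itself is not reproduced):

```python
import numpy as np

def wsm_wpm_scores(utility, weights):
    """Score alternatives with the weighted sum model (WSM) and the
    weighted product model (WPM). `utility` is an [alternatives x criteria]
    matrix of normalized utilities in (0, 1]; `weights` sum to 1."""
    u = np.asarray(utility, dtype=float)
    w = np.asarray(weights, dtype=float)
    wsm = u @ w                        # sum_j w_j * u_ij  for each alternative i
    wpm = np.prod(u ** w, axis=1)      # prod_j u_ij ** w_j for each alternative i
    return wsm, wpm
```

Under WSM the preferred alternative maximizes the weighted sum of utilities; under WPM it maximizes the weighted product, which is dimensionless and therefore less sensitive to the units and scales of the raw criteria.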
Students’ Representation in Mathematical Word Problem-Solving: Exploring Students’ Self-efficacy
NASA Astrophysics Data System (ADS)
Sahendra, A.; Budiarto, M. T.; Fuad, Y.
2018-01-01
This descriptive qualitative research aims at investigating students' representations in mathematical word problem-solving in relation to self-efficacy. The research subjects were two female eighth graders at a school in Surabaya with equal mathematical ability, one with high and one with low self-efficacy. The subjects were chosen based on the results of a test of mathematical ability, documentation of the results of the middle test in the even semester of the 2016/2017 academic year, and the results of a questionnaire on mathematics word problems in terms of a self-efficacy scale. The selected students were asked to solve mathematical word problems and were interviewed. The result of this study shows that students with high self-efficacy tend to use multiple representations of sketches and mathematical models, whereas students with low self-efficacy tend to use a single representation of sketches or mathematical models only in mathematical word problem-solving. This study emphasizes that teachers should pay attention to students' representations as a consideration in designing innovative learning in order to increase the self-efficacy of each student and achieve maximum mathematical achievement, although this still requires adjustment to the school situation and conditions.
Jennings, Kristen S; Goguen, Kandice N; Britt, Thomas W; Jeffirs, Stephanie M; Wilkes, Jack R; Brady, Ashley R; Pittman, Rebecca A; DiMuzio, Danielle J
2017-11-01
Many college students experience a mental health problem yet do not seek treatment from a mental health professional. In the present study, we examined how perceived barriers (stigma perceptions, negative attitudes about treatment, and perceptions of practical barriers), as well as the Big Five personality traits, relate to treatment seeking among college students reporting a current mental health problem. The sample consisted of 261 college students, 115 of whom reported experiencing a current problem. Results of a series of logistic regressions revealed that perceived stigma from others (OR = .32), self-stigma (OR = .29), negative attitudes about treatment (OR = .27), and practical barriers (OR = .34) were all associated with a lower likelihood of having sought treatment among students experiencing a problem. Of the five-factor model personality traits, only Neuroticism was associated with a higher likelihood of having sought treatment when experiencing a mental health problem (OR = 2.71). When we considered all significant predictors in a final stepwise conditional model, only self-stigma, practical barriers, and Neuroticism remained significant unique predictors. Implications for addressing barriers to treatment and encouraging treatment seeking among college students are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
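An illustrative sketch of the WSPT ordering and the objective it targets (our simplification: jobs are ranked by weight over total processing time, without the consistency condition used in the paper's optimality proof):

```python
def wspt_order(jobs):
    """Sort jobs by the weighted-shortest-processing-time rule:
    nonincreasing w_j / p_j, with p_j the job's total processing time.
    Each job is a pair (weight, [p_machine_1, ..., p_machine_m])."""
    return sorted(jobs, key=lambda job: job[0] / sum(job[1]), reverse=True)

def total_weighted_quadratic_completion(sequence):
    """Evaluate sum_j w_j * C_j^2 for a permutation flow shop using the
    completion-time recursion C[i][k] = max(C[i-1][k], C[i][k-1]) + p[i][k]."""
    m = len(sequence[0][1])
    prev = [0.0] * m                   # completion times of the previous job
    objective = 0.0
    for w, p in sequence:
        row = []
        for k in range(m):
            start = max(prev[k], row[k - 1] if k else 0.0)
            row.append(start + p[k])
        prev = row
        objective += w * row[-1] ** 2  # completion on the last machine
    return objective
```

For example, total_weighted_quadratic_completion(wspt_order(jobs)) gives the WSPT objective value that the paper's differential evolution and branch-and-bound algorithms try to improve upon or certify.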
Interpersonal problems across levels of the psychopathology hierarchy.
Girard, Jeffrey M; Wright, Aidan G C; Beeney, Joseph E; Lazarus, Sophie A; Scott, Lori N; Stepp, Stephanie D; Pilkonis, Paul A
2017-11-01
We examined the relationship between psychopathology and interpersonal problems in a sample of 825 clinical and community participants. Sixteen psychiatric diagnoses and five transdiagnostic dimensions were examined in relation to self-reported interpersonal problems. The structural summary method was used with the Inventory of Interpersonal Problems Circumplex Scales to examine interpersonal problem profiles for each diagnosis and dimension. We built a structural model of mental disorders including factors corresponding to detachment (avoidant personality, social phobia, major depression), internalizing (dependent personality, borderline personality, panic disorder, posttraumatic stress, major depression), disinhibition (antisocial personality, drug dependence, alcohol dependence, borderline personality), dominance (histrionic personality, narcissistic personality, paranoid personality), and compulsivity (obsessive-compulsive personality). All dimensions showed good interpersonal prototypicality (e.g., detachment was defined by a socially avoidant/nonassertive interpersonal profile) except for internalizing, which was diffusely associated with elevated interpersonal distress. The findings for individual disorders were largely consistent with the dimension that each disorder loaded on, with the exception of the internalizing and dominance disorders, which were interpersonally heterogeneous. These results replicate previous findings and provide novel insights into social dysfunction in psychopathology by wedding the power of hierarchical (i.e., dimensional) modeling and interpersonal circumplex assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Böttger, B.; Eiken, J.; Apel, M.
2009-10-01
Performing microstructure simulation of technical casting processes suffers from the strong interdependency between latent heat release due to local microstructure formation and heat diffusion on the macroscopic scale: local microstructure formation depends on the macroscopic heat fluxes and, in turn, the macroscopic temperature solution depends on the latent heat release, and therefore on the microstructure formation, in all parts of the casting. A self-consistent homoenthalpic approximation to this micro-macro problem is proposed, based on the assumption of a common enthalpy-temperature relation for the whole casting which is used for the description of latent heat production on the macroscale. This enthalpy-temperature relation is iteratively obtained by phase-field simulations on the microscale, thus taking into account the specific morphological impact on the latent heat production. This new approach is discussed and compared to other approximations for the coupling of the macroscopic heat flux to complex microstructure models. Simulations are performed for the binary alloy Al-3at%Cu, using a multiphase-field solidification model which is coupled to a thermodynamic database. Microstructure formation is simulated for several positions in a simple model plate casting, using a one-dimensional macroscopic temperature solver which can be directly coupled to the microscopic phase-field simulation tool.
NASA Astrophysics Data System (ADS)
Silalahi, R. L. R.; Mustaniroh, S. A.; Ikasari, D. M.; Sriulina, R. P.
2018-03-01
UD. Bunda Foods is an SME located in the district of Sidoarjo. UD. Bunda Foods has problems in maintaining quality assurance for its milkfish products and in developing marketing strategies. Addressing these problems would enable UD. Bunda Foods to compete with other similar SMEs and to market its product for further expansion of the business. The objectives of this study were to determine the model of the institutional structure of the milkfish supply chain and to determine the elements, the sub-elements, and the relationships among the elements. The method used in this research was Interpretive Structural Modeling (ISM), involving 5 experts as respondents, consisting of 1 practitioner, 1 academician, and 3 government organisation employees. The results showed that there were two key elements: the requirement element and the goal element. Based on the Drive Power-Dependence (DP-D) matrix, the key sub-elements of the requirement element, consisting of raw material continuity, appropriate marketing strategy, and production capital, were positioned in the Linkage sector quadrant. The DP-D matrix for the key sub-elements of the goal element also showed a similar position. The findings suggest several managerial implications to be carried out by UD. Bunda Foods, including establishing good relationships with all involved institutions, obtaining capital assistance, and attending the marketing training provided by the government.
NL(q) Theory: A Neural Control Framework with Global Asymptotic Stability Criteria.
Vandewalle, Joos; De Moor, Bart L.R.; Suykens, Johan A.K.
1997-06-01
In this paper a framework for model-based neural control design is presented, consisting of nonlinear state space models and controllers, parametrized by multilayer feedforward neural networks. The models and closed-loop systems are transformed into so-called NL(q) system form. NL(q) systems represent a large class of nonlinear dynamical systems consisting of q layers with alternating linear and static nonlinear operators that satisfy a sector condition. For such NL(q) systems, sufficient conditions for global asymptotic stability, input/output stability (dissipativity with finite L(2)-gain) and robust stability and performance are presented. The stability criteria are expressed as linear matrix inequalities. In the analysis problem it is shown how stability of a given controller can be checked. In the synthesis problem two methods for neural control design are discussed. In the first method, Narendra's dynamic backpropagation for tracking on a set of specific reference inputs is modified with an NL(q) stability constraint in order to ensure, e.g., closed-loop stability. In a second method, control design is done without tracking on specific reference inputs, but based on the input/output stability criteria themselves, within a standard plant framework as is done, for example, in H∞ control theory and μ theory. Copyright 1997 Elsevier Science Ltd.
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
NASA Astrophysics Data System (ADS)
Ballesteros, Guillermo; Redondo, Javier; Ringwald, Andreas; Tamarit, Carlos
2017-08-01
We present a minimal extension of the Standard Model (SM) providing a consistent picture of particle physics from the electroweak scale to the Planck scale and of cosmology from inflation until today. Three right-handed neutrinos Ni, a new color triplet Q and a complex SM-singlet scalar σ, whose vacuum expectation value vσ ~ 1011 GeV breaks lepton number and a Peccei-Quinn symmetry simultaneously, are added to the SM. At low energies, the model reduces to the SM, augmented by seesaw generated neutrino masses and mixing, plus the axion. The latter solves the strong CP problem and accounts for the cold dark matter in the Universe. The inflaton is comprised by a mixture of σ and the SM Higgs, and reheating of the Universe after inflation proceeds via the Higgs portal. Baryogenesis occurs via thermal leptogenesis. Thus, five fundamental problems of particle physics and cosmology are solved at one stroke in this unified Standard Model—axion—seesaw—Higgs portal inflation (SMASH) model. It can be probed decisively by upcoming cosmic microwave background and axion dark matter experiments.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: the fermentor, the cell-retention system (tangential microfiltration), and the vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems encountered with the deterministic-model strategy, such as lack of convergence and high computational time, make the statistical-model strategy, which proved to be robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
Lattice Boltzmann Methods to Address Fundamental Boiling and Two-Phase Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uddin, Rizwan
2012-01-01
This report presents the progress made during the fourth (no-cost extension) year of this three-year grant aimed at the development of a consistent lattice Boltzmann formulation for boiling and two-phase flows. During the first year, a consistent LBM formulation for the simulation of a two-phase water-steam system was developed, and results of initial model validation in a range of thermodynamic conditions typical for Boiling Water Reactors (BWRs) were shown. Progress was made on several fronts during the second year, the most important of which was the simulation of the coalescence of two bubbles including surface tension effects. Work during the third year focused on the development of a new lattice Boltzmann model, called the artificial interface lattice Boltzmann model (AILB model), for the simulation of two-phase dynamics. The model is based on the principle of free energy minimization and invokes the Gibbs-Duhem equation in the formulation of the non-ideal forcing function. This was reported in detail in the last progress report. Part of the effort during the last (no-cost extension) year was focused on developing a parallel capability for the 2D as well as the 3D codes developed in this project; this will be reported in the final report. Here we report the work carried out on testing the AILB model for conditions including thermal effects. A simplified thermal LB model, based on the thermal energy distribution approach, was developed. The simplifications are made by neglecting the viscous heat dissipation and the work done by pressure in the original thermal energy distribution model. Details of the model are presented here, followed by a discussion of the boundary conditions and results for some two-phase thermal problems.
Modelling chemo-hydro-mechanical behaviour of unsaturated clays: a feasibility study
NASA Astrophysics Data System (ADS)
Liu, Z.; Boukpeti, N.; Li, X.; Collin, F.; Radu, J.-P.; Hueckel, T.; Charlier, R.
2005-08-01
Effective capabilities of combined chemo-elasto-plastic and unsaturated soil models to simulate the chemo-hydro-mechanical (CHM) behaviour of clays are examined in numerical simulations through selected boundary value problems. The objective is to investigate the feasibility of approaching such complex material behaviour numerically by combining two existing models. The chemo-mechanical effects are described using the concept of chemical softening, consisting of a reduction of the pre-consolidation pressure, proposed originally by Hueckel (Can. Geotech. J. 1992; 29:1071-1086; Int. J. Numer. Anal. Methods Geomech. 1997; 21:43-72). An additional chemical softening mechanism is considered, consisting of a decrease of cohesion with an increase in contaminant concentration. The influence of partial saturation on the constitutive behaviour is modelled following the Barcelona basic model (BBM) formulation (Géotech. 1990; 40(3):405-430; Can. Geotech. J. 1992; 29:1013-1032). The equilibrium equations combined with the CHM constitutive relations, and the governing equations for flow of fluids and contaminant transport, are solved numerically using the finite element method. The emphasis is placed on understanding the roles that individual chemical effects, such as chemo-elastic swelling, chemo-plastic consolidation, or chemical loss of cohesion, play in the overall response of the soil mass. The numerical problems analysed concern the chemical effects in response to wetting of a clay specimen with an organic liquid in a rigid wall consolidometer, during biaxial loading up to failure, and in response to fresh water influx during tunnel excavation in swelling clay.
Examining the latent structure of anxiety sensitivity in adolescents using factor mixture modeling.
Allan, Nicholas P; MacPherson, Laura; Young, Kevin C; Lejuez, Carl W; Schmidt, Norman B
2014-09-01
Anxiety sensitivity has been implicated as an important risk factor, generalizable to most anxiety disorders. In adults, factor mixture modeling has been used to demonstrate that anxiety sensitivity is best conceptualized as categorical between individuals. That is, whereas most adults appear to possess normative levels of anxiety sensitivity, a small subset of the population appears to possess abnormally high levels of anxiety sensitivity. Further, those in the high anxiety sensitivity group are at increased risk of having high levels of anxiety and of having an anxiety disorder. This study was designed to determine whether these findings extend to adolescents. Factor mixture modeling was used to examine the best fitting model of anxiety sensitivity in a sample of 277 adolescents (M age = 11.0 years, SD = 0.81). Consistent with research in adults, the best fitting model consisted of 2 classes, 1 containing adolescents with high levels of anxiety sensitivity (n = 25) and another containing adolescents with normative levels of anxiety sensitivity (n = 252). Examination of anxiety sensitivity subscales revealed that the social concerns subscale was not important for classification of individuals. Convergent and discriminant validity of anxiety sensitivity classes were found in that membership in the high anxiety sensitivity class was associated with higher mean levels of anxiety symptoms, controlling for depression and externalizing problems, and was not associated with higher mean levels of depression or externalizing symptoms controlling for anxiety problems. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model for solving the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
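The full NGA-plus-tabu holonic multiagent architecture is beyond a short snippet, but every agent in such a scheme must evaluate candidate solutions. The sketch below shows that core fitness computation: decoding a machine assignment and an operation sequence into a makespan. The two-job instance is illustrative.

```python
# Sketch: makespan evaluation of a candidate FJSP solution, i.e. the fitness
# any metaheuristic (the NGA and tabu-search agents included) must compute.
# Jobs are lists of operations; each operation maps machine -> processing time.
jobs = [
    [{0: 3, 1: 5}, {1: 2, 2: 4}],      # job 0: two operations
    [{0: 4, 2: 3}, {0: 6, 1: 2}],      # job 1: two operations
]

def makespan(assignment, sequence):
    """assignment[(j, o)] = chosen machine; sequence = (job, op) pairs in an
    order consistent with each job's operation precedence."""
    job_ready = [0.0] * len(jobs)
    mach_ready = {}
    for j, o in sequence:
        m = assignment[(j, o)]
        start = max(job_ready[j], mach_ready.get(m, 0.0))
        finish = start + jobs[j][o][m]
        job_ready[j] = finish          # next operation of job j must wait
        mach_ready[m] = finish         # machine m is busy until `finish`
    return max(job_ready)

assign = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 1}
order = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(makespan(assign, order))   # fitness of this candidate schedule -> 7.0
```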
Improving Predictions of Multiple Binary Models in ILP
2014-01-01
Despite the success of ILP systems in learning first-order rules from a small number of examples and complexly structured data in various domains, they struggle with multiclass problems. In most cases they reduce a multiclass problem to multiple black-box binary problems following the one-versus-one or one-versus-rest binarisation techniques and learn a theory for each one. When evaluating the learned theories of multiclass problems in the one-versus-rest paradigm particularly, there is a bias caused by the default rule toward the negative classes, leading to unrealistically high performance, in addition to a lack of prediction integrity between the theories. Here we discuss the problem of using the one-versus-rest binarisation technique when evaluating multiclass data and propose several methods to remedy this problem. We also illustrate the methods and highlight their link to binary trees and Formal Concept Analysis (FCA). Our methods allow learning of a simple, consistent, and reliable multiclass theory by combining the rules of the multiple one-versus-rest theories into one rule list or rule set theory. Empirical evaluation over a number of data sets shows that our proposed methods produce coherent and accurate rule models from the rules learned by the ILP system Aleph. PMID:24696657
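A toy version of the rule-combination idea is sketched below: rules learned by separate one-versus-rest theories are merged into a single decision list ordered by training precision, with a majority-class default replacing the per-theory default rules. The rules, data, and ordering criterion are illustrative assumptions, not the exact methods proposed in the paper.

```python
# Sketch: merging one-versus-rest rules into a single decision list ordered
# by training precision, with a majority-class default. A toy stand-in for
# the paper's rule-combination methods.
from collections import Counter

# Each rule: (predicted class, predicate over an example dict).
rules = [
    ("bird",   lambda x: x["has_wings"] and x["lays_eggs"]),
    ("mammal", lambda x: x["has_fur"]),
    ("fish",   lambda x: x["has_fins"] and not x["has_wings"]),
]

def precision(rule, examples):
    cls, pred = rule
    fired = [x for x in examples if pred(x)]
    return sum(x["class"] == cls for x in fired) / len(fired) if fired else 0.0

def build_decision_list(rules, train):
    ordered = sorted(rules, key=lambda r: precision(r, train), reverse=True)
    default = Counter(x["class"] for x in train).most_common(1)[0][0]
    def classify(x):
        for cls, pred in ordered:       # first matching rule wins
            if pred(x):
                return cls
        return default                  # single, shared default rule
    return classify

train = [
    {"class": "bird", "has_wings": True, "lays_eggs": True,
     "has_fur": False, "has_fins": False},
    {"class": "mammal", "has_wings": False, "lays_eggs": False,
     "has_fur": True, "has_fins": False},
    {"class": "fish", "has_wings": False, "lays_eggs": True,
     "has_fur": False, "has_fins": True},
]
clf = build_decision_list(rules, train)
print(clf(train[0]))   # -> "bird"
```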
Emery, Noah N; Simons, Jeffrey S
2017-08-01
This study tested a model linking sensitivity to punishment (SP) and reward (SR) to marijuana use and problems via affect lability and poor control. A 6-month prospective design was used in a sample of 2,270 young adults (64% female). The hypothesized SP × SR interaction did not predict affect lability or poor control, but did predict use likelihood at baseline. At low levels of SR, SP was associated with an increased likelihood of abstaining, which was attenuated as SR increased. SP and SR displayed positive main effects on both affect lability and poor control. Affect lability and poor control, in turn, mediated effects on the marijuana outcomes. Poor control predicted both increased marijuana use and, controlling for use level, greater intensity of problems. Affect lability predicted greater intensity of problems, but was not associated with use level. There were few prospective effects. SR consistently predicted greater marijuana use and problems. SP, however, exhibited both risk and protective pathways. Results indicate that SP is associated with a decreased likelihood of marijuana use. However, once use is initiated, SP is associated with increased risk of problems, in part due to its effects on both affect and behavioral dysregulation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Stochastic Optimization For Water Resources Allocation
NASA Astrophysics Data System (ADS)
Yamout, G.; Hatfield, K.
2003-12-01
For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods can lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an Integrated Dynamic, Multi-stage, Feedback-controlled, Linear, Stochastic, and Distributed-parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) the environmental degradation produced by resource consumption, and (3) the uncertainty and risk generated by the inherently random nature of the state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced to constitute an integrated water allocation optimization framework. This effort is a novel approach to the water allocation optimization problem that accounts for the uncertainties associated with all of its elements, resulting in a solution that correctly reflects the physical problem at hand.
ERIC Educational Resources Information Center
Stewart, John; Miller, Mayo; Audo, Christine; Stewart, Gay
2012-01-01
This study examined the evolution of student responses to seven contextually different versions of two Force Concept Inventory questions in an introductory physics course at the University of Arkansas. The consistency in answering the closely related questions evolved little over the seven-question exam. A model for the state of student knowledge…
ERIC Educational Resources Information Center
Righthand, Herbert
This institute was designed to study the needs and problems of vocational teaching in metropolitan areas and to recommend model teacher preparation practices. A total of 60 participants, representing 23 states, Washington, D.C., and the Virgin Islands, took part in this program, which consisted of general sessions, homogeneous and heterogeneous…
A Biopsychosocial Model of Disordered Eating and the Pursuit of Muscularity in Adolescent Boys
ERIC Educational Resources Information Center
Ricciardelli, Lina A.; McCabe, Marita P.
2004-01-01
This review provides an evaluation of the correlates and/or risk factors associated with disordered eating and the pursuit of muscularity among adolescent boys. One of the main conclusions is that similar factors and processes are associated with both behavioral problems. Several factors found to be consistently associated with disordered eating…
A New Test of Linear Hypotheses in OLS Regression under Heteroscedasticity of Unknown Form
ERIC Educational Resources Information Center
Cai, Li; Hayes, Andrew F.
2008-01-01
When the errors in an ordinary least squares (OLS) regression model are heteroscedastic, hypothesis tests involving the regression coefficients can have Type I error rates that are far from the nominal significance level. Asymptotically, this problem can be rectified with the use of a heteroscedasticity-consistent covariance matrix (HCCM)…
ERIC Educational Resources Information Center
Kjobli, John; Hagen, Kristine Amlund
2009-01-01
The present study examined maternal and paternal parenting practices as mediators of the link between interparental collaboration and children's externalizing behavior. Parent gender was tested as a moderator of the associations. A clinical sample consisting of 136 children with externalizing problems and their families participated in the study.…
Edgerton, Jason D; Keough, Matthew T; Roberts, Lance W
2018-02-21
This study examines whether there are multiple joint trajectories of depression and problem gambling co-development in a sample of emerging adults. Data were from the Manitoba Longitudinal Study of Young Adults (n = 679), which was collected in 4 waves across 5 years (age 18-20 at baseline). Parallel process latent class growth modeling was used to identify 5 joint trajectory classes: low decreasing gambling, low increasing depression (81%); low stable gambling, moderate decreasing depression (9%); low stable gambling, high decreasing depression (5%); low stable gambling, moderate stable depression (3%); moderate stable problem gambling, no depression (2%). There was no evidence of reciprocal growth in problem gambling and depression in any of the joint classes. Multinomial logistic regression analyses of baseline risk and protective factors found that only neuroticism, escape-avoidance coping, and perceived level of family social support were significant predictors of joint trajectory class membership. Consistent with the pathways model framework, we observed that individuals in the problem-gambling-only class were more likely to be using gambling as a stable way to cope with negative emotions. Similarly, high levels of neuroticism and low levels of family support were associated with increased odds of being in a class with moderate to high levels of depressive symptoms (but low gambling problems). The results suggest that interventions for problem gambling and/or depression need to focus on promoting more adaptive coping skills among more "at-risk" young adults, and such interventions should be tailored in relation to specific subtypes of comorbid mental illness.
Modelling the control of interceptive actions.
Beek, P J; Dessing, J C; Peper, C E; Bullock, D
2003-01-01
In recent years, several phenomenological dynamical models have been formulated that describe how perceptual variables are incorporated in the control of motor variables. We call these short-route models as they do not address how perception-action patterns might be constrained by the dynamical properties of the sensory, neural and musculoskeletal subsystems of the human action system. As an alternative, we advocate a long-route modelling approach in which the dynamics of these subsystems are explicitly addressed and integrated to reproduce interceptive actions. The approach is exemplified through a discussion of a recently developed model for interceptive actions consisting of a neural network architecture for the online generation of motor outflow commands, based on time-to-contact information and information about the relative positions and velocities of hand and ball. This network is shown to be consistent with both behavioural and neurophysiological data. Finally, some problems are discussed with regard to the question of how the motor outflow commands (i.e. the intended movement) might be modulated in view of the musculoskeletal dynamics. PMID:14561342
Radioactive models of type 1 supernovae
NASA Astrophysics Data System (ADS)
Schurmann, S. R.
1983-04-01
In recent years, considerable progress has been made toward understanding Type I supernovae within the context of radioactive energy input. Much effort has gone into determining the peak magnitude of the supernovae, particularly in the B-band, and its relation to the Hubble constant. If the distances inferred for Type I events are at all accurate, and/or the Hubble constant has a value near 50 km per s per Mpc, it is clear that models must reach a peak magnitude of approximately -20 in order to be consistent. The present investigation is concerned with models which achieve peak magnitudes near this value and contain 0.8 solar mass of Ni-56. The B-band light curve declines much more rapidly after peak than the bolometric light curve. The mass and velocity of Ni-56 (at least for the A models) are within the region defined by Axelrod (1980) for configurations which produce acceptable spectra at late times. The models are consistent with the absence of a neutron star after the explosion. There remain, however, many difficult problems.
Color appearance for photorealistic image synthesis
NASA Astrophysics Data System (ADS)
Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio
2000-12-01
Photorealistic Image Synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction, which allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of displayable colors. Recently some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.
Computational method for analysis of polyethylene biodegradation
NASA Astrophysics Data System (ADS)
Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro
2003-12-01
In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into the analysis of our model. We first establish its mathematical foundation, in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem of determining the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks of cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of polyethylene biodegradation posed in the modeling are indeed appropriate.
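The structure of such a model can be sketched directly: a linear system of ODEs in which each molecular-weight class loses weight to direct consumption and to β-oxidation, which transfers weight to the next-smaller class. The class count, rate constants, and initial profile below are illustrative, not the values identified from the GPC data in the paper.

```python
# Sketch: a linear ODE system for class weights w_i(t) in the spirit of the
# model above. Beta-oxidation moves weight from class i+1 down to class i;
# the smallest classes are also consumed directly. Rates are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n = 20                                   # molecular-weight classes, small -> large
alpha = np.zeros(n); alpha[:5] = 0.3     # direct consumption of small classes
beta = 0.1                               # beta-oxidation transfer rate

def rhs(t, w):
    dw = -alpha * w - beta * w           # losses from each class
    dw[:-1] += beta * w[1:]              # gain from the next-larger class
    return dw

w0 = np.exp(-0.5 * ((np.arange(n) - 12) / 3.0) ** 2)  # GPC-like initial profile
sol = solve_ivp(rhs, (0.0, 35.0), w0, t_eval=[0.0, 35.0])
print(sol.y[:, -1])                      # weight distribution after 5 weeks
```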
NASA Astrophysics Data System (ADS)
Tanaka, Yoshiyuki; Klemann, Volker; Okuno, Jun'ichi
2009-09-01
Normal mode approaches for calculating viscoelastic responses of self-gravitating and compressible spherical earth models have an intrinsic problem of determining the roots of the secular equation and the associated residues in the Laplace domain. To bypass this problem, a method based on numerical inverse Laplace integration was developed by Tanaka et al. (2006, 2007) for computations of viscoelastic deformation caused by an internal dislocation. The advantage of this approach is that the root-finding problem is avoided without imposing additional constraints on the governing equations and earth models. In this study, we apply the same algorithm to computations of viscoelastic responses to a surface load and show that the results obtained by this approach agree well with those obtained by a time-domain approach that does not need determinations of the normal modes in the Laplace domain. Using the elastic earth model PREM and a convex viscosity profile, we calculate viscoelastic load Love numbers (h, l, k) for compressible and incompressible models. Comparisons between the results show that effects due to compressibility are consistent with results obtained by previous studies and that the rate differences between the two models total 10-40%. This will serve as an independent method to confirm results obtained by time-domain approaches and will usefully increase reliability when modeling postglacial rebound.
On the representability problem and the physical meaning of coarse-grained models
NASA Astrophysics Data System (ADS)
Wagner, Jacob W.; Dama, James F.; Durumeric, Aleksander E. P.; Voth, Gregory A.
2016-07-01
In coarse-grained (CG) models where certain fine-grained (FG, i.e., atomistic resolution) observables are not directly represented, one can nonetheless identify indirect CG observables that capture the FG observable's dependence on CG coordinates. Often, in these cases it appears that a CG observable can be defined by analogy to an all-atom or FG observable, but the similarity is misleading and significantly undermines the interpretation of both bottom-up and top-down CG models. Such problems emerge especially clearly in the framework of systematic bottom-up CG modeling, where a direct and transparent correspondence between FG and CG variables establishes precise conditions for consistency between CG observables and underlying FG models. Here we present and investigate these representability challenges and illustrate them via the bottom-up conceptual framework for several simple analytically tractable polymer models. The examples give special focus to the observables of configurational internal energy, entropy, and pressure, which have been at the root of controversy in the CG literature, and also discuss observables that would seem to be entirely missing in the CG representation but can nonetheless be correlated with CG behavior. Though we investigate these problems in the framework of systematic coarse-graining, the lessons apply to top-down CG modeling also, with crucial implications for simulation at constant pressure and surface tension and for the interpretation of structural and thermodynamic correlations in comparison to experiment.
Ridenour, Ty A; Caldwell, Linda L; Coatsworth, J Douglas; Gold, Melanie A
2011-03-20
Problem behavior theory posits that tolerance of deviance is an antecedent to antisocial behavior and substance use. In contrast, cognitive dissonance theory implies that acceptability of a behavior may increase after experiencing the behavior. Using structural equation modeling, this investigation tested whether changes in tolerance of deviance precede changes in conduct disorder criteria or substance use or vice versa, or whether they change concomitantly. Two-year longitudinal data from 246 8- to 16-year-olds suggested that tolerance of deviance increases after conduct disorder criteria or substance use in 8-to-10- and 11-to-12-year-olds. These results were consistent with cognitive dissonance theory. In 13-to-16-year-olds, no directionality was suggested, consistent with neither theory. These results were replicated in boys and girls and for different types of conduct disorder criteria: aggression (overt behavior), deceitfulness and vandalism (covert behavior), and serious rule-breaking (authority conflict). The age-specific directionality between tolerance of deviance and conduct disorder criteria or substance use is consistent with unique etiologies between early-onset versus adolescent-onset subtypes of behavior problems.
NASA Astrophysics Data System (ADS)
Iqbal, Z.; Azhar, Ehtsham; Mehmood, Zaffar; Maraj, E. N.
2018-01-01
Boundary layer stagnation point flow of a Casson fluid over a Riga plate of variable thickness is investigated in the present article. The Riga plate is an electromagnetic actuator consisting of a spanwise-aligned array of alternating electrodes and permanent magnets mounted on a plane surface. The physical problem is modeled and simplified under appropriate transformations, and the effects of thermal radiation and viscous dissipation are incorporated. The resulting differential equations are solved by the Keller box scheme using MATLAB. A comparison is given with a shooting technique based on the Runge-Kutta-Fehlberg method of order 5. Graphical and tabulated analyses are presented. The results reveal that the Eckert number and the radiation and fluid parameters enhance temperature, whereas they lower the rate of heat transfer. The numerical outcomes of the present analysis show that the Keller box method is capable of solving the proposed nonlinear problem consistently and with high accuracy.
"On Second Thoughts…": Changes of Mind as an Indication of Competing Knowledge Structures
NASA Astrophysics Data System (ADS)
Wilson, Kate F.; Low, David J.
2015-09-01
A review of student answers to diagnostic questions concerned with Newton's Laws showed a tendency for some students to change their answer to a question when the following question caused them to think more about the situation. We investigate this behavior and interpret it in the framework of the resource model; in particular, a weak Newton's Third Law structure being dominated by an inconsistent Newton's Second Law (or "Net Force") structure, in the absence of a strong, consistent Newtonian structure. This observation highlights the hidden problem in instruction where the implicit use of Newton's Third Law is dominated by the explicit conceptual and mathematical application of Newton's Second Law, both within individual courses and across a degree program. To facilitate students' development of a consistent Newtonian knowledge structure, it is important that instructors highlight the interrelated nature of Newton's Laws in problem solving.
Human connectome module pattern detection using a new multi-graph MinMax cut model.
De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng
2014-01-01
Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects, and we derive an efficient optimization algorithm for this challenging computational neuroscience problem. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially correspond to specific brain functions shared by all subjects. We validate our method by analyzing weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.
Tensor-GMRES method for large sparse systems of nonlinear equations
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1994-01-01
This paper introduces a tensor-Krylov method, the tensor-GMRES method, for large sparse systems of nonlinear equations. This method is a coupling of tensor model formation and solution techniques for nonlinear equations with Krylov subspace projection techniques for unsymmetric systems of linear equations. Traditional tensor methods for nonlinear equations are based on a quadratic model of the nonlinear function, a standard linear model augmented by a simple second order term. These methods are shown to be significantly more efficient than standard methods both on nonsingular problems and on problems where the Jacobian matrix at the solution is singular. A major disadvantage of the traditional tensor methods is that the solution of the tensor model requires the factorization of the Jacobian matrix, which may not be suitable for problems where the Jacobian matrix is large and has a 'bad' sparsity structure for an efficient factorization. We overcome this difficulty by forming and solving the tensor model using an extension of a Newton-GMRES scheme. Like traditional tensor methods, we show that the new tensor method has significant computational advantages over the analogous Newton counterpart. Consistent with Krylov subspace based methods, the new tensor method does not depend on the factorization of the Jacobian matrix. As a matter of fact, the Jacobian matrix is never needed explicitly.
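The sketch below shows the matrix-free Newton-GMRES backbone that the tensor-GMRES method extends: Jacobian-vector products are approximated by finite differences, so no Jacobian is ever formed or factored. The rank-one second-order term that distinguishes the tensor model is omitted here for brevity; the toy system and tolerances are illustrative.

```python
# Sketch: matrix-free Newton-GMRES, the backbone the paper extends with a
# rank-one second-order (tensor) term. No explicit Jacobian is formed:
# Jacobian-vector products come from finite differences.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        eps = 1e-7 * max(1.0, np.linalg.norm(x))
        Jv = LinearOperator((x.size, x.size),
                            matvec=lambda v: (F(x + eps * v) - fx) / eps)
        step, info = gmres(Jv, -fx)      # inner Krylov solve of the linear model
        x = x + step
    return x

# Toy nonlinear system: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0 (root at (1, 2)).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(newton_gmres(F, np.array([1.0, 1.0])))
```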
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists of modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO used in a 'sampling while optimizing' approach seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief. In this way, important geological questions can be answered probabilistically in order to perform risk assessment in the decisions that are made.
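A minimal global-best PSO for this kind of misfit minimization is sketched below. The linear forward operator standing in for the prism gravity kernel, the depth bounds, and the swarm hyperparameters are all assumptions for illustration; the PSO update itself is the standard inertia-plus-cognitive-plus-social rule.

```python
# Sketch: global-best PSO minimizing a gravity misfit over prism depths.
# The operator G is a hypothetical stand-in for the right-rectangular-prism
# gravity kernel; the final swarm also samples the low-misfit region, which
# is what enables the uncertainty appraisal advocated in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_prisms, n_stations = 10, 15
G = np.exp(-0.3 * np.abs(np.arange(n_stations)[:, None]
                         - np.arange(n_prisms)[None, :]))   # assumed kernel
true_depths = 1.0 + np.sin(np.linspace(0, np.pi, n_prisms))
d_obs = G @ true_depths

def misfit(m):
    return np.linalg.norm(G @ m - d_obs)

n_part, w, c1, c2 = 40, 0.7, 1.5, 1.5
x = rng.uniform(0.0, 3.0, (n_part, n_prisms))    # particle positions (depths)
v = np.zeros_like(x)
p_best = x.copy()
p_val = np.array([misfit(m) for m in x])
g_best = p_best[p_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x)
    x = np.clip(x + v, 0.0, 3.0)                 # enforce depth bounds
    vals = np.array([misfit(m) for m in x])
    improved = vals < p_val
    p_best[improved], p_val[improved] = x[improved], vals[improved]
    g_best = p_best[p_val.argmin()].copy()

print(misfit(g_best))
```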
Problem Based Learning and the scientific process
NASA Astrophysics Data System (ADS)
Schuchardt, Daniel Shaner
This research project was developed to inspire students to constructively use problem based learning and the scientific process to learn middle school science content. The student population in this study consisted of male and female seventh grade students. Students were presented with authentic problems connected to physical and chemical properties of matter. The intent of the study was to have students use the scientific process of examining existing knowledge, generating learning issues or questions about the problems, and then developing a course of action to research and design experiments to model resolutions to the authentic problems. It was expected that students would improve their ability to actively engage with others in a problem solving process to achieve a deeper understanding of Michigan's 7th Grade Level Content Expectations, the Next Generation Science Standards, and a scientific process. Problem based learning was statistically effective for students' learning of the scientific process, with students showing statistically significant improvement from pretest to posttest scores. The teaching method of Problem Based Learning was effective for seventh grade science students at Dowagiac Middle School.
Woodman, Ashley C.; Mawdsley, Helena P.; Hauser-Cram, Penny
2014-01-01
Parents of children with developmental disabilities (DD) are at increased risk of experiencing psychological stress compared to other parents. Children’s high levels of internalizing and externalizing problems have been found to contribute to this elevated level of stress. Few studies have considered the reverse direction of effects, however, in families where a child has a DD. The present study investigated transactional relations between child behavior problems and maternal stress within 176 families raising a child with early diagnosed DD. There was evidence of both child-driven and parent-driven effects over the 15-year study period, spanning from early childhood (age 3) to adolescence (age 18), consistent with transactional models of development. Parent-child transactions were found to vary across different life phases and with different domains of behavior problems. PMID:25462487
The Timing of School Transitions and Early Adolescent Problem Behavior
Lippold, Melissa A.; Powers, Christopher J.; Syvertsen, Amy K.; Feinberg, Mark E.; Greenberg, Mark T.
2013-01-01
This longitudinal study investigates whether rural adolescents who transition to a new school in sixth grade have higher levels of risky behavior than adolescents who transition in seventh grade. Our findings indicate that later school transitions had little effect on problem behavior between sixth and ninth grades. Cross-sectional analyses found a small number of temporary effects of transition timing on problem behavior: Spending an additional year in elementary school was associated with higher levels of deviant behavior in the Fall of Grade 6 and higher levels of antisocial peer associations in Grade 8. However, transition effects were not consistent across waves and latent growth curve models found no effects of transition timing on the trajectory of problem behavior. We discuss policy implications and compare our findings with other research on transition timing. PMID:24089584
Huyghebaert, Tiphaine; Gillet, Nicolas; Beltou, Nicolas; Tellier, Fanny; Fouquereau, Evelyne
2018-06-14
This study investigated the mediating role of sleeping problems in the relationship between workload and outcomes (emotional exhaustion, presenteeism, job satisfaction, and performance), with overcommitment examined as a moderator of the relationship between workload and sleeping problems. We conducted an empirical study using a sample of 884 teachers. Consistent with our predictions, results revealed that the positive indirect effects of workload on emotional exhaustion and presenteeism, and the negative indirect effects of workload on job satisfaction and performance, through sleeping problems, were only significant among overcommitted teachers. Workload and overcommitment were also directly related to all four outcomes; specifically, both were positively related to emotional exhaustion and presenteeism and negatively related to job satisfaction and performance. Theoretical contributions, perspectives, and implications for practice are discussed. Copyright © 2018 John Wiley & Sons, Ltd.
Finite element analysis of time-independent superconductivity. Ph.D. Thesis Final Report
NASA Technical Reports Server (NTRS)
Schuler, James J.
1993-01-01
The development of electromagnetic (EM) finite elements based upon a generalized four-potential variational principle is presented. The use of the four-potential variational principle allows for downstream coupling of EM fields with the thermal, mechanical, and quantum effects exhibited by superconducting materials. The use of variational methods to model an EM system allows for a greater range of applications than just the superconducting problem. The four-potential variational principle can be used to solve a broader range of EM problems than any of the currently available formulations. It also reduces the number of independent variables from six to four while easily dealing with conductor/insulator interfaces. This methodology was applied to a range of EM field problems. Results from all these problems predict EM quantities exceptionally well and are consistent with the expected physical behavior.
Lengua, L J; Wolchik, S A; Sandler, I N; West, S G
2000-06-01
Investigated the interaction between parenting and temperament in predicting adjustment problems in children of divorce. The study utilized a sample of 231 mothers and children, 9 to 12 years old, who had experienced divorce within the previous 2 years. Both mothers' and children's reports on parenting, temperament, and adjustment variables were obtained and combined to create cross-reporter measures of the variables. Parenting and temperament were directly and independently related to outcomes consistent with an additive model of their effects. Significant interactions indicated that parental rejection was more strongly related to adjustment problems for children low in positive emotionality, and inconsistent discipline was more strongly related to adjustment problems for children high in impulsivity. These findings suggest that children who are high in impulsivity may be at greater risk for developing problems, whereas positive emotionality may operate as a protective factor, decreasing the risk of adjustment problems in response to negative parenting.
NASA Astrophysics Data System (ADS)
Conti, Roberto; Meli, Enrico; Pugi, Luca; Malvezzi, Monica; Bartolini, Fabio; Allotta, Benedetto; Rindi, Andrea; Toni, Paolo
2012-05-01
Scaled roller rigs used for railway applications play a fundamental role in the development of new technologies and new devices, combining hardware-in-the-loop (HIL) benefits with a reduction in economic investment. The main problem of a scaled roller rig with respect to full-scale ones is the increased complexity due to the scaling factors. For this reason, before building the test rig, the development of a software model of the HIL system can be useful for analysing the system behaviour in different operative conditions. One has to consider the multi-body behaviour of the scaled roller rig, the controller, and the model of the virtual vehicle whose dynamics has to be reproduced on the rig. The main purpose of this work is the development of a complete model that satisfies the previous requirements, and in particular the performance analysis of the controller and of the dynamical behaviour of the scaled roller rig when disturbances are simulated under low adhesion conditions. Since the scaled roller rig will be used to simulate degraded adhesion conditions, an accurate and realistic wheel-roller contact model also has to be included. The contact model consists of two parts: contact point detection and the adhesion model. The first part is based on a numerical method described in previous studies for the wheel-rail case and modified to simulate the three-dimensional contact between revolute surfaces (wheel-roller). The second part consists of evaluating the contact forces by means of Hertz theory for the normal problem and Kalker theory for the tangential problem. Numerical tests were performed in which low adhesion conditions were simulated and bogie hunting and dynamical imbalance of the wheelsets were introduced. The tests were devoted to verifying the robustness of the control system with respect to some of the more frequent disturbances that may influence the roller rig dynamics. In particular, we verified that wheelset imbalance could significantly influence system performance, and a multistate filter was designed to reduce the effect of this disturbance.
Ozone sensitivity to varying greenhouse gases and ozone-depleting substances in CCMI-1 simulations
NASA Astrophysics Data System (ADS)
Morgenstern, Olaf; Stone, Kane A.; Schofield, Robyn; Akiyoshi, Hideharu; Yamashita, Yousuke; Kinnison, Douglas E.; Garcia, Rolando R.; Sudo, Kengo; Plummer, David A.; Scinocca, John; Oman, Luke D.; Manyin, Michael E.; Zeng, Guang; Rozanov, Eugene; Stenke, Andrea; Revell, Laura E.; Pitari, Giovanni; Mancini, Eva; Di Genova, Glauco; Visioni, Daniele; Dhomse, Sandip S.; Chipperfield, Martyn P.
2018-01-01
Ozone fields simulated for the first phase of the Chemistry-Climate Model Initiative (CCMI-1) will be used as forcing data in the 6th Coupled Model Intercomparison Project. Here we assess, using reference and sensitivity simulations produced for CCMI-1, the suitability of CCMI-1 model results for this process, investigating the degree of consistency amongst models regarding their responses to variations in individual forcings. We consider the influences of methane, nitrous oxide, a combination of chlorinated or brominated ozone-depleting substances, and a combination of carbon dioxide and other greenhouse gases. We find varying degrees of consistency in the models' responses in ozone to these individual forcings, including some considerable disagreement. In particular, the response of total-column ozone to these forcings is less consistent across the multi-model ensemble than profile comparisons. We analyse how stratospheric age of air, a commonly used diagnostic of stratospheric transport, responds to the forcings. For this diagnostic we find some salient differences in model behaviour, which may explain some of the findings for ozone. The findings imply that the ozone fields derived from CCMI-1 are subject to considerable uncertainties regarding the impacts of these anthropogenic forcings. We offer some thoughts on how to best approach the problem of generating a consensus ozone database from a multi-model ensemble such as CCMI-1.
Abdullah, Norazlin; Yusof, Yus A.; Talib, Rosnita A.
2017-01-01
This study modeled the rheological behavior of thermosonic extracted pink-fleshed guava, pink-fleshed pomelo, and soursop juice concentrates at different concentrations and temperatures. The effect of concentration on the consistency coefficient (K) and flow behavior index (n) of the fruit juice concentrates was modeled using a master curve, which utilized concentration-temperature shifting to allow a general prediction of rheological behavior over a wide concentration range. For modeling the effects of temperature on K and n, the integration of two functions, from the Arrhenius and logistic sigmoidal growth equations, provided a new model that gave a better description of the properties. It also alleviated the problem of negative regions encountered when using the Arrhenius model alone. The fitted regression using this new model yielded improved coefficients of determination, with R² values above 0.9792, compared with using the Arrhenius and logistic sigmoidal models alone, which gave minimum R² values of 0.6243 and 0.9440, respectively. Practical applications: In general, juice concentrate is a better form of food for transportation, preservation, and use as an ingredient. Models are necessary to predict the effects of processing factors such as concentration and temperature on the rheological behavior of juice concentrates. The modeling approach allows prediction of behaviors and determination of processing parameters. The master curve model introduced in this study simplifies and generalizes the rheological behavior of juice concentrates over a wide range of concentrations when the temperature factor is insignificant. The proposed new mathematical model, combining the Arrhenius and logistic sigmoidal growth models, improves and extends the description of the rheological properties of fruit juice concentrates. It also solves the problem of negative values in consistency coefficient and flow behavior index predictions obtained with the existing model, the Arrhenius equation. These rheological data models provide good information for juice processing and equipment manufacturing needs. PMID:29479123
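Since the abstract does not give the combined functional form, the sketch below fits an assumed product of an Arrhenius term and a logistic sigmoid to synthetic K(T) data with scipy's curve_fit, purely to illustrate the fitting workflow; the functional form, parameter values, and data are assumptions.

```python
# Sketch: fitting a combined Arrhenius-logistic model for the consistency
# coefficient K(T). The product form below is an assumed illustration of
# "integrating" the two equations, fitted to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

def combined_model(T, A, Ea, L, k, T0):
    arrhenius = A * np.exp(Ea / (R * T))           # temperature thinning
    logistic = L / (1.0 + np.exp(-k * (T - T0)))   # sigmoidal correction
    return arrhenius * logistic

T = np.linspace(278.0, 338.0, 25)                  # roughly 5-65 degC
K_true = combined_model(T, 1e-3, 15e3, 1.0, 0.15, 300.0)
K_obs = K_true * (1 + 0.02 * np.random.default_rng(1).normal(size=T.size))

popt, _ = curve_fit(combined_model, T, K_obs,
                    p0=[1e-3, 14e3, 1.0, 0.1, 300.0], maxfev=20000)
print(popt)   # fitted (A, Ea, L, k, T0)
```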
NASA Astrophysics Data System (ADS)
Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan
2018-01-01
Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, additional cost and time requirements for stakeholder involvement, as well as institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure a high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers towards institutionalization.
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed, so a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness, and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and speed of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
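One of the diagnostics mentioned above, the model resolution matrix, can be illustrated generically: for a linearized problem with Tikhonov regularization, diagonal entries near 1 flag well-resolved parameters (fast processes) and entries near 0 flag regularization-dominated ones (slow processes). The Jacobian below is a synthetic stand-in, not DALEC's.

```python
# Sketch: Tikhonov regularization and the model resolution matrix for a
# linearized inverse problem h(x) ~ J x = y. J is a random ill-conditioned
# stand-in whose singular values mimic fast...slow process sensitivities.
import numpy as np

rng = np.random.default_rng(2)
N, n = 50, 10
U, _ = np.linalg.qr(rng.normal(size=(N, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -6, n)                 # fast...slow process sensitivities
J = U @ np.diag(s) @ V.T

lam = 1e-3                                # regularization weight
Jdag = np.linalg.solve(J.T @ J + lam**2 * np.eye(n), J.T)
Rm = Jdag @ J                             # model resolution matrix

# Diagonal entries near 1: parameter resolved by the data; near 0: the
# estimate is dominated by the regularization.
print(np.round(np.diag(Rm), 3))
```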
Appraisal of geodynamic inversion results: a data mining approach
NASA Astrophysics Data System (ADS)
Baumann, T. S.
2016-11-01
Bayesian sampling-based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is and on how many unknowns are involved. The result of such a probabilistic inversion is not a single 'best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters of similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal of a large number of models is not feasible. This poses the question of whether it is possible to extract end-member models that represent each of the best-fit regions, including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem, where rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that, by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful for exploring the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. The method is not limited to geodynamic applications, and a generalized MATLAB code is provided to perform the appraisal analysis.
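A minimal version of this appraisal workflow (in Python rather than the MATLAB code provided with the paper) is sketched below: each ensemble member is reduced to a feature vector, here a synthetic viscosity-depth profile, and k-means clustering recovers the end-member families. The synthetic ensemble and the choice of k are assumptions for illustration.

```python
# Sketch: classifying an inversion ensemble into end-member models by
# clustering feature vectors (e.g. vertical cross-sections of effective
# viscosity) with k-means. The ensemble here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
depth = np.linspace(0.0, 200.0, 40)               # km
ensemble = []
for _ in range(400):                              # four synthetic families
    family = rng.integers(4)
    base = 21.0 + 0.5 * family                    # log10 effective viscosity
    profile = (base - 0.005 * depth * (1 + family)
               + 0.1 * rng.normal(size=depth.size))
    ensemble.append(profile)
X = np.array(ensemble)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for k in range(4):
    members = X[km.labels_ == k]
    # cluster size and the head of its mean (end-member) profile
    print(k, members.shape[0], np.round(members.mean(axis=0)[:3], 2))
```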
NASA Astrophysics Data System (ADS)
Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro
2017-04-01
The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. These data are essential for building models of Earth's evolution and for reproducing many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc.), as well as for understanding the relationships between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, requires the solution of many direct models. The main objective of this work is to contribute to existing inversion techniques by improving the estimation of elevation (topography) through the inclusion of a dynamic component arising from sub-lithospheric mantle flow. To do so, we implement an efficient Reduced Order Method (ROM) built upon classic finite elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models required in the solution of the inverse problem. The strategy consists of creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. To check the reduced basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain, the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The reduced basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
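The offline/online split can be sketched for a generic parametrized linear system standing in for the Stokes problem: offline, snapshot solutions are compressed into an orthonormal basis by SVD; online, an unsampled parameter is handled by Galerkin projection onto that basis. The affine parameter dependence and problem sizes below are illustrative assumptions.

```python
# Sketch: the reduced-basis idea for a parametrized linear system
# A(mu) x = b, standing in for the parametrized Stokes problem.
import numpy as np

rng = np.random.default_rng(4)
n = 200
A0 = np.diag(2.0 + rng.random(n)) + 0.01 * rng.normal(size=(n, n))
A1 = np.diag(rng.random(n))
b = rng.normal(size=n)
A = lambda mu: A0 + mu * A1            # affine parameter dependence (assumed)

# Offline: snapshot solutions at sampled parameters, compressed by SVD.
snapshots = np.column_stack([np.linalg.solve(A(mu), b)
                             for mu in np.linspace(0.0, 1.0, 10)])
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                           # keep 5 orthonormal basis vectors

# Online: Galerkin projection for an unsampled parameter value.
mu = 0.37
x_rb = V @ np.linalg.solve(V.T @ A(mu) @ V, V.T @ b)
x_full = np.linalg.solve(A(mu), b)
print(np.linalg.norm(x_rb - x_full) / np.linalg.norm(x_full))  # relative error
```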
Scheduler Design Criteria: Requirements and Considerations
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
This presentation covers fundamental requirements and considerations for developing schedulers for airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among the various optimization problems in airport operations, we focus on the airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling, such as the node-link network model and previously developed scheduling algorithms. Next, we explain in more detail how to design a mathematical formulation, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.
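To make the objective/decision-variable/constraint pattern concrete, here is a minimal hypothetical single-runway sequencing model written with the PuLP solver interface; the aircraft set, earliest times, and separation value are invented for illustration and are not from the presentation.

```python
import pulp

# Toy single-runway scheduling sketch: minimize total delay subject to earliest
# runway times and a minimum pairwise separation (all numbers illustrative).
eta = {"AC1": 0, "AC2": 2, "AC3": 3}      # earliest runway times (minutes)
sep, big_m = 2, 1000
prob = pulp.LpProblem("runway", pulp.LpMinimize)
t = {a: pulp.LpVariable(f"t_{a}", lowBound=eta[a]) for a in eta}
order = {(a, b): pulp.LpVariable(f"o_{a}_{b}", cat="Binary")
         for a in eta for b in eta if a < b}

prob += pulp.lpSum(t[a] - eta[a] for a in eta)      # objective: total delay
for (a, b), o in order.items():                     # pairwise separation
    prob += t[b] >= t[a] + sep - big_m * (1 - o)    # o = 1: a before b
    prob += t[a] >= t[b] + sep - big_m * o          # o = 0: b before a

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: pulp.value(t[a]) for a in eta})
```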
Treatment of substance misuse in older women: using a brief intervention model.
Finfgeld-Connett, Deborah L
2004-08-01
Alcohol and benzodiazepine misuse is a significant problem in older women for a number of reasons such as physiological changes, outdated prescribing practices, and failure to identify hazardous use. In addition, treatment barriers involving the health-care system, conflicting information, and ageism also exist. Substance misuse among older women is predicted to become a bigger problem as the baby boom generation ages. Brief interventions that consist of assessment, feedback, responsibility, advice, menu, empathy, and self-efficacy, or A-FRAMES, have the potential to reduce alcohol and benzodiazepine misuse among older women in a cost-effective manner.
Cooperative vehicle routing problem: an opportunity for cost saving
NASA Astrophysics Data System (ADS)
Zibaei, Sedighe; Hafezalkotob, Ashkan; Ghashami, Seyed Sajad
2016-09-01
In this paper, a novel methodology is proposed to solve a cooperative multi-depot vehicle routing problem (VRP). We establish a mathematical model for a multi-owner VRP in which each owner (i.e. player) manages one or more depots. The basic idea is to offer owners the option of managing the VRP cooperatively to save costs. We present cooperative game theory techniques for allocating the cost savings obtained from various coalitions of owners. The methodology is illustrated with a numerical example in which different coalitions of the players are evaluated, along with the results of cooperation and cost-saving allocation methods.
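One standard cooperative game theory allocation for such settings is the Shapley value; the sketch below computes it for three hypothetical owners whose coalition costs are made up (in the paper's methodology, each coalition cost would come from solving the corresponding VRP). Whether the authors use the Shapley value or another allocation rule is not stated here, so treat this purely as one common option.

```python
from itertools import permutations

# Shapley-value cost allocation for a three-owner cooperative VRP. The
# stand-alone and coalition routing costs below are illustrative placeholders.
cost = {(): 0, ("A",): 40, ("B",): 50, ("C",): 30,
        ("A", "B"): 76, ("A", "C"): 60, ("B", "C"): 70, ("A", "B", "C"): 96}
players = ("A", "B", "C")

def c(coalition):
    return cost[tuple(sorted(coalition))]

shapley = {p: 0.0 for p in players}
for perm in permutations(players):          # average marginal cost over orders
    seen = []
    for p in perm:
        shapley[p] += (c(seen + [p]) - c(seen)) / 6  # 3! = 6 orderings
        seen.append(p)

print(shapley)  # each owner pays less than its stand-alone cost
```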
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Peter W.; Ismail, Ahmed; Saraswat, Prashant
We present a simple solution to the little hierarchy problem in the minimal supersymmetric standard model: a vectorlike fourth generation. With O(1) Yukawa couplings for the new quarks, the Higgs mass can naturally be above 114 GeV. Unlike a chiral fourth generation, a vectorlike generation can solve the little hierarchy problem while remaining consistent with precision electroweak and direct production constraints, and maintaining the success of the grand unified framework. The new quarks are predicted to lie between ~300-600 GeV and will thus be discovered or ruled out at the LHC. This scenario suggests exploration of several novel collider signatures.
Solving cyclical nurse scheduling problem using preemptive goal programming
NASA Astrophysics Data System (ADS)
Sundari, V. E.; Mardiyati, S.
2017-07-01
The nurse scheduling system in a hospital is modeled as a preemptive goal programming problem and solved using LINGO software, with the objective of minimizing the deviation variables at each goal. The scheduling is done cyclically, so every nurse is treated fairly, receiving the same share of work shifts as the other nurses. Respecting the hospital's rules on cyclical nursing shifts, the model yields that 18 nurses are needed in every ward and that the schedule comprises 18 periods, each of 21 days.
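A minimal sketch of preemptive goal programming, assuming toy goals and the PuLP solver in place of LINGO: each goal gets under- and over-achievement deviation variables, and goals are optimized strictly in priority order, locking in higher-priority optima before moving down.

```python
import pulp

# Preemptive (lexicographic) goal programming sketch; the goals and targets
# below are toy stand-ins for the nurse scheduling constraints.
x = pulp.LpVariable("x", lowBound=0)          # e.g., nurses on day shift
y = pulp.LpVariable("y", lowBound=0)          # e.g., nurses on night shift
goals = [(x + y, 18),                         # priority 1: ward needs 18 nurses
         (2 * x + y, 30)]                     # priority 2: cover 30 duty-slots

dev = [(pulp.LpVariable(f"dm{i}", lowBound=0), pulp.LpVariable(f"dp{i}", lowBound=0))
       for i in range(len(goals))]
locked = []
for i, _ in enumerate(goals):
    prob = pulp.LpProblem(f"priority_{i}", pulp.LpMinimize)
    prob += dev[i][0] + dev[i][1]             # minimize this goal's deviations
    for j, (expr, target) in enumerate(goals):
        prob += expr + dev[j][0] - dev[j][1] == target
    for j, best in locked:                    # keep earlier goals at their optima
        prob += dev[j][0] + dev[j][1] <= best
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    locked.append((i, pulp.value(prob.objective)))

print("x =", pulp.value(x), "y =", pulp.value(y))
```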
NASA Astrophysics Data System (ADS)
Yerizon; Jazwinarti; Yarman
2018-01-01
Students experience difficulties in the course Introduction to Operational Research (PRO). The purpose of this study is to analyze what students require of PRO lecture materials based on Problem-Based Learning that are valid, practical, and effective. The lecture materials are developed following Plomp's model. The development process consists of three phases: front-end analysis/preliminary research, the development/prototyping phase, and the assessment phase. The preliminary analysis was carried out by observation and interview. The research found that students need a student worksheet (LKM) for several reasons: 1) no LKM is available, 2) the presentation of the subject is not yet based on real problems, and 3) students experience difficulties with the current learning resources.
NASA Astrophysics Data System (ADS)
Frommer, Joshua B.
This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on using a specific design mission, while a fleet perspective would provide a broader capability. Aspects of design for both the vehicles and missions may be, for simplicity, deterministic in nature or, in a model that reflects actual conditions, uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms completing possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense (SEAD) operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MIL-STD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of a vehicle for high-speed attack combined with a long loiter period is considerably different from that for quick cruise to an area combined with a low-speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. On the vehicle level of the problem, different technology can have an impact on the fleet level. One such technology is morphing, the ability to change shape, which is an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework, using surrogate models based on optimally sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet-level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve the system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type.
The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.
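The reliability metric described above can be sketched as a Monte Carlo estimate: sample the non-deterministic mission parameters, evaluate a capability surrogate at a candidate design point, and count the fraction of missions completed. The surrogate's closed form and all distributions below are invented placeholders, not the dissertation's actual models.

```python
import numpy as np

# Sketch of a reliability metric: the probability that a candidate aircraft
# completes a mission whose requirements are uncertain.
rng = np.random.default_rng(42)

def surrogate_range_nm(speed_kt, payload_lb):
    # hypothetical surrogate: achievable range for a given design point
    return 3000.0 - 0.5 * speed_kt - 0.4 * payload_lb

# Non-deterministic mission: required range and payload drawn from distributions.
required_range = rng.normal(2500, 200, size=100_000)
payload = rng.uniform(500, 1500, size=100_000)

achieved = surrogate_range_nm(speed_kt=350, payload_lb=payload)
reliability = np.mean(achieved >= required_range)
print(f"mission reliability ~ {reliability:.3f}")
```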
Automation of reverse engineering process in aircraft modeling and related optimization problems
NASA Technical Reports Server (NTRS)
Li, W.; Swetits, J.
1994-01-01
During 1994, engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was devoted to finding an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few iterations. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress was made during this phase of research and computation time was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time, the surface modeling software developed by Imageware was tested. Research indicated that a substantially improved fit of digitized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to combine Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for quadratic programming problems.
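Dierckx's FITPACK routines, with the error tolerance as the single control parameter, survive today in scipy; a minimal sketch of fitting scattered digitized surface points (synthetic data and an illustrative tolerance, not the report's aircraft data) might look like this:

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Dierckx-style smoothing-spline fit of scattered "digitized" surface points;
# the smoothing factor s is the single error-tolerance control parameter.
rng = np.random.default_rng(7)
x, y = rng.uniform(0, 1, 400), rng.uniform(0, 1, 400)
z = np.sin(2 * np.pi * x) * np.cos(np.pi * y) + rng.normal(0, 0.01, 400)

fit = SmoothBivariateSpline(x, y, z, s=400 * 0.01**2)  # rule of thumb: s ~ m * sigma^2
print("fitted value at (0.5, 0.5):", fit(0.5, 0.5)[0, 0])
```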
Numerical Modeling of the Photothermal Processing for Bubble Forming around Nanowire in a Liquid
Chaari, Anis; Giraud-Moreau, Laurence
2014-01-01
An accurate computation of the temperature is an important factor in determining the shape of a bubble around a nanowire immersed in a liquid. The study of the physical phenomenon consists in solving a coupled photothermal problem between the light and the nanowire. The numerical multiphysics model is used to study the variations of the temperature and the shape of the bubble created by illumination of the nanowire. The optimization process, including an adaptive remeshing scheme, is used to solve the problem through a finite element method. The study of the shape evolution of the bubble is made taking into account the physical and geometrical parameters of the nanowire. The relation between the sizes and shapes of the bubble and nanowire is deduced.
An evolving effective stress approach to anisotropic distortional hardening
Lester, B. T.; Scherzinger, W. M.
2018-03-11
A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.
Davis, Sean D; Butler, Mark H
2004-07-01
Enactments are a potential common clinical process factor contributing to positive outcomes in many relational therapies. Enactments provide therapists with a medium for mediating relationships through simultaneous experiential intervention and change at multiple levels of relationships--including specific relationship disagreements and problems, the interaction process surrounding these issues, and underlying emotions and attachment issues confounded with those problems. We propose a model of enactments in marriage and family therapy consisting of three components--initiation operations, intervention operations, and evaluation operations. We offer a conceptual framework to help clinicians know when and to what purpose to use this model of enactments. We provide an operational description of each component of an enactment, exemplifying them using a hypothetical clinical vignette. Directions for future research are suggested.
Developing a short measure of organizational justice: a multisample health professionals study.
Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Sinervo, Timo; Hintsa, Taina; Aalto, Anna-Mari
2010-11-01
To develop and test the validity of a short version of the original questionnaire measuring organizational justice. The study samples comprised working physicians (n = 2792) and registered nurses (n = 2137) from the Finnish Health Professionals study. Structural equation modelling was applied to test structural validity, using the justice scales. Furthermore, criterion validity was explored with well-being (sleeping problems) and health indicators (psychological distress/self-rated health). The short version of the organizational justice questionnaire (eight items) shows satisfactory psychometric properties (internal consistency, a good model fit to the data). All scales were associated with an increased risk of sleeping problems and psychological distress, indicating satisfactory criterion validity. This short version of the organizational justice questionnaire provides a useful tool for epidemiological studies focused on the health-adverse effects of the work environment.
NASA Astrophysics Data System (ADS)
Ema, Yohei; Hagihara, Daisuke; Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori
2018-04-01
Recently, a new minimal extension of the Standard Model has been proposed, where a spontaneously broken, flavor-dependent global U(1) symmetry is introduced. It not only explains the hierarchical flavor structure in the quark and lepton sector, but also solves the strong CP problem by identifying the Nambu-Goldstone boson as the QCD axion, which we call flaxion. In this work, we consider supersymmetric extensions of the flaxion scenario. We study the CP and flavor violations due to supersymmetric particles, the effects of R-parity violations, the cosmological gravitino and axino problems, and the cosmological evolution of the scalar partner of the flaxion, sflaxion. We also propose an attractor-like inflationary model where the flaxion multiplet contains the inflaton field, and show that a consistent cosmological scenario can be obtained, including inflation, leptogenesis, and dark matter.
Coarse-Graining of Polymer Dynamics via Energy Renormalization
NASA Astrophysics Data System (ADS)
Xia, Wenjie; Song, Jake; Phelan, Frederick; Douglas, Jack; Keten, Sinan
The computational prediction of the properties of polymeric materials to serve the needs of materials design and prediction of their performance is a grand challenge due to the prohibitive computational times of all-atomistic (AA) simulations. Coarse-grained (CG) modeling is an essential strategy for making progress on this problem. While there has been intense activity in this area, effective methods of coarse-graining have been slow to develop. Our approach to this fundamental problem starts from the observation that integrating out degrees of freedom of the AA model leads to a strong modification of the configurational entropy and cohesive interaction. Based on this observation, we propose a temperature-dependent systematic renormalization of the cohesive interaction in the CG modeling to recover the thermodynamic modifications in the system and the dynamics of the AA model. Here, we show that this energy renormalization approach to CG can faithfully estimate the diffusive, segmental and glassy dynamics of the AA model over a large temperature range spanning from the Arrhenius melt to the non-equilibrium glassy states. This energy renormalization approach thus offers a promising route for developing thermodynamically consistent CG models with temperature transferability.
Perception of Health Problems Among Competitive Runners
Jelvegård, Sara; Timpka, Toomas; Bargoria, Victor; Gauffin, Håkan; Jacobsson, Jenny
2016-01-01
Background: Approximately 2 of every 3 competitive runners sustain at least 1 health problem each season. Most of these problems are nontraumatic injuries with gradual onset. The main known risk indicator for sustaining a new running-related injury episode is a history of a previous injury, suggesting that behavioral habits are part of the causal mechanisms. Purpose: Identification of elements associated with purposeful interpretations of body perceptions and balanced behavioral responses may supply vital information for the prevention of health problems in runners. This study set out to explore competitive runners’ cognitive appraisals of perceived symptoms of injury and illness and how these appraisals are transformed into behavior. Study Design: Cross-sectional study; Level of evidence, 3. Methods: The study population consisted of Swedish middle- and long-distance runners from the national top-15 list. Qualitative research methods were used to categorize interview data and perform a thematic analysis. The categories resulting from the analysis were used to construct an explanatory model. Results: Saturation of the thematic classification required that data from 8 male and 6 female runners (age range, 20-36 years) were collected. Symptoms interpreted to be caused by illness or injury with a sudden onset were found to lead to immediate action and changes to training and competition programs (activity pacing). In contrast, perceptions interpreted to be due to injuries with gradual onset led to varied behavioral reactions. These behavioral responses were planned with regard to short-term consequences and were characterized by indifference and neglect of long-term implications, consistent with an overactivity behavioral pattern. The latter pattern was consistent with a psychological adaptation to stimuli that is presented progressively to the athlete. Conclusion: Competitive runners appraise whether a health problem requires immediate withdrawal from training based on whether the problem is interpreted as an illness and/or has a sudden onset. The ensuing behaviors follow 2 distinct patterns that can be termed “activity pacing” and “overactivity.”
Effect of Configuration Pitching Motion on Twin Tail Buffet Response
NASA Technical Reports Server (NTRS)
Sheta, Essam F.; Kandil, Osama A.
1998-01-01
The effect of the dynamic pitch-up motion of a delta wing on twin-tail buffet response is investigated. The computational model consists of a delta wing-twin tail configuration. The computations are carried out on a dynamic multi-block grid structure. This multidisciplinary problem is solved using three sets of equations, which consist of the unsteady Navier-Stokes equations, the aeroelastic equations, and the grid displacement equations. The configuration is pitched up from zero to 60 deg. angle of attack; the freestream Mach number and Reynolds number are 0.3 and 1.25 million, respectively. With the twin tail fixed as rigid surfaces and with no forced pitch-up motion, the problem is solved for the initial flow conditions. Next, the problem is solved for the twin-tail response for uncoupled bending and torsional vibrations due to the unsteady loads on the twin tail and due to the forced pitch-up motion. The dynamic pitch-up problem is also solved for the flow response with the twin tail kept rigid. The configuration is investigated for the inboard position of the twin tail, which corresponds to a separation distance between the twin tails of 33% wing chord. The computed results are compared with the available experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalozub, A.S.; Tsaune, A.Ya.
1994-12-01
A new approach for analyzing the highly excited vibration-rotation (VR) states of nonrigid molecules is suggested. It is based on the separation of the vibrational and rotational terms in the molecular VR Hamiltonian by introducing periodic auxiliary fields. These fields transfer different interactions within a molecule and are treated in terms of the mean-field approximation. As a result, the solution of the stationary Schrödinger equation with the VR Hamiltonian amounts to a quantization of the Berry phase in a problem of the molecular angular-momentum motion in a certain periodic VR field (rotational problem). The quantization procedure takes into account the motion of the collective vibrational variables in the appropriate VR potentials (vibrational problem). The quantization rules, the mean-field configurations of auxiliary interactions, and the solutions to the Schrödinger equations for the vibrational and rotational problems are self-consistently connected with one another. The potentialities of the theory are demonstrated by the bending-rotation interaction modeled by the Bunker-Landsberg potential function in the H₂ molecule. The calculations are compared with both the results of the exact computations and those of other approximate methods.
Detecting consistent patterns of directional adaptation using differential selection codon models.
Parto, Sahar; Lartillot, Nicolas
2017-06-23
Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
Kiesewetter, Jan; Ebersbach, René; Görlitz, Anja; Holzer, Matthias; Fischer, Martin R; Schmidmaier, Ralf
2013-01-01
Problem-solving in terms of clinical reasoning is regarded as a key competence of medical doctors. Little is known about the general cognitive actions underlying the problem-solving strategies of medical students. In this study, a theory-based model was used and adapted in order to investigate the cognitive actions in which medical students are engaged when dealing with a case and how patterns of these actions are related to the correct solution. Twenty-three medical students worked on three clinical nephrology cases using the think-aloud method. The transcribed recordings were coded using a theory-based model consisting of eight different cognitive actions. The coded data were analysed as time sequences in a graphical representation software. Furthermore, the relationship between the coded data and the accuracy of diagnosis was investigated with inferential statistical methods. The observation of all main actions in a case elaboration, including evaluation, representation and integration, was considered a complete model and was found in the majority of cases (56%). This pattern was significantly related to the accuracy of the case solution (φ = 0.55; p < .001). Extent of prior knowledge was related neither to the complete model nor to the correct solution. The proposed model is suitable for empirically verifying the cognitive actions of medical students' problem-solving. The cognitive actions evaluation, representation and integration are crucial for the complete model and therefore for the accuracy of the solution. The educational implication which may be drawn from this study is to foster students' reasoning by focusing on higher-level reasoning.
Jacob, Louis; Haro, Josep Maria; Koyanagi, Ai
2018-05-24
Studies on the association between psychotic experiences (PEs) and problem gambling are lacking. Thus, we examined the association between PEs and problem gambling in the general UK population. This study used community-based, cross-sectional data from the 2007 Adult Psychiatric Morbidity Survey (APMS) (n = 7403). Ten items from the DSM-IV criteria and the British Gambling Prevalence Survey studies were used to ascertain problem gambling among individuals who gambled in the past 12 months. Respondents were classified as no problem (0 criteria), at-risk (1 or 2 criteria) and problem gambling (≥3 criteria). Past 12-month PE was assessed with the Psychosis Screening Questionnaire. Multivariable logistic regression models were constructed to assess the association between gambling status (exposure variable) and PE (outcome variable). The final sample consisted of 7363 people aged ≥16 years with no definite or probable psychosis [mean (SD) age 46.4 (18.6) years; 51.2% females]. The prevalence of PE in those with no problem, at-risk, and problem gambling was 5.1%, 11.1%, and 29.7%, respectively. In the model adjusted for sociodemographics, common mental disorders and risky health behaviors, at-risk (OR = 1.88; 95% CI: 1.11-3.19) and problem gambling (OR = 4.64; 95% CI: 1.78-12.13) were associated with increased odds of PE. Problem gambling and PEs tend to co-exist. Further research is needed to gain a better understanding of the mechanisms that underlie the observed association.
Strategic and non-strategic problem gamblers differ on decision-making under risk and ambiguity.
Lorains, Felicity K; Dowling, Nicki A; Enticott, Peter G; Bradshaw, John L; Trueblood, Jennifer S; Stout, Julie C
2014-07-01
To analyse problem gamblers' decision-making under conditions of risk and ambiguity, investigate underlying psychological factors associated with their choice behaviour and examine whether decision-making differed in strategic (e.g., sports betting) and non-strategic (e.g., electronic gaming machine) problem gamblers. Cross-sectional study. Out-patient treatment centres and university testing facilities in Victoria, Australia. Thirty-nine problem gamblers and 41 age, gender and estimated IQ-matched controls. Decision-making tasks included the Iowa Gambling Task (IGT) and a loss aversion task. The Prospect Valence Learning (PVL) model was used to provide an explanation of cognitive, motivational and response style factors involved in IGT performance. Overall, problem gamblers performed more poorly than controls on both the IGT (P = 0.04) and the loss aversion task (P = 0.01), and their IGT decisions were associated with heightened attention to gains (P = 0.003) and less consistency (P = 0.002). Strategic problem gamblers did not differ from matched controls on either decision-making task, but non-strategic problem gamblers performed worse on both the IGT (P = 0.006) and the loss aversion task (P = 0.02). Furthermore, we found differences in the PVL model parameters underlying strategic and non-strategic problem gamblers' choices on the IGT. Problem gamblers demonstrated poor decision-making under conditions of risk and ambiguity. Strategic (e.g. sports betting, poker) and non-strategic (e.g. electronic gaming machines) problem gamblers differed in decision-making and the underlying psychological processes associated with their decisions.
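For readers unfamiliar with it, the PVL model as commonly parameterized for the IGT combines a prospect-theory utility (shape and loss-aversion parameters), a delta-rule expectancy update, and a softmax choice rule with a consistency parameter. A toy forward simulation, with illustrative parameter values and made-up deck payoffs (not the study's fitted values), is sketched below.

```python
import numpy as np

# Sketch of the Prospect Valence Learning (PVL) model for the IGT:
# utility shape alpha, loss aversion lam, learning rate a, consistency c.
alpha, lam, a, c = 0.5, 1.5, 0.2, 1.0
theta = 3**c - 1                       # choice sensitivity
E = np.zeros(4)                        # expectancies for the four decks
rng = np.random.default_rng(3)

def utility(x):
    return x**alpha if x >= 0 else -lam * (-x)**alpha

for trial in range(100):
    p = np.exp(theta * E) / np.exp(theta * E).sum()          # softmax choice
    deck = rng.choice(4, p=p)
    payoff = rng.normal([1.0, 1.0, -0.5, -0.5][deck], 2.0)   # toy deck payoffs
    E[deck] += a * (utility(payoff) - E[deck])               # delta-rule update

print("final deck expectancies:", E.round(2))
```

Fitting such parameters to observed choices (e.g., by maximum likelihood) is what allows group comparisons like those reported in the abstract.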
Knowledge acquisition and learning process description in context of e-learning
NASA Astrophysics Data System (ADS)
Kiselev, B. G.; Yakutenko, V. A.; Yuriev, M. A.
2017-01-01
This paper investigates the problem of designing e-learning and MOOC systems. It describes instructional design-based approaches to e-learning systems design: IMS Learning Design, MISA, and TELOS. To address this problem we present the Knowledge Field of Educational Environment with Competence boundary conditions - an instructional engineering method for self-learning systems design. It is based on the simplified TELOS approach and enables users to create their individual learning path by choosing prerequisite and target competencies. The paper provides the ontology model for the described instructional engineering method, real-life use cases, and the classification of the presented model. The ontology model consists of 13 classes and 15 properties. Some of them are inherited from the Knowledge Field of Educational Environment and some are new, describing competence boundary conditions and knowledge validation objects. The ontology model uses logical constraints and is described using the OWL 2 standard. To give TELOS users a better understanding of our approach we list the mapping between TELOS and KFEEC.
The consistency of disjunctive assertions.
Johnson-Laird, P N; Lotstein, Max; Byrne, Ruth M J
2012-07-01
In two experiments, we established a new phenomenon in reasoning from disjunctions of the grammatical form either A or else B, where A and B are clauses. When individuals have to assess whether pairs of assertions can be true at the same time, they tend to focus on the truth of each clause of an exclusive disjunction (and ignore the concurrent falsity of the other clause). Hence, they succumb to illusions of consistency and of inconsistency with pairs consisting of a disjunction and a conjunction (Experiment 1), and with simpler problems consisting of pairs of disjunctions, such as Either there is a pie or else there is a cake and Either there isn't a pie or else there is a cake (Experiment 2), that appear to be consistent with one another, but in fact are not. These results corroborate the theory that reasoning depends on envisaging models of possibilities.
NASA Astrophysics Data System (ADS)
Tan, Kian Lam; Lim, Chen Kim
2017-10-01
With the explosive growth of online information such as email messages, news articles, and scientific literature, many institutions and museums are converting their cultural collections from physical to digital format. However, this conversion has resulted in issues of inconsistency and incompleteness. Besides, the use of inaccurate keywords also results in the short-query problem. Most of the time, the inconsistency and incompleteness are caused by aggregation faults in annotating a document, while the short-query problem arises from naive users who lack prior knowledge and experience of the cultural heritage domain. In this paper, we present an approach to the problems of inconsistency, incompleteness, and short queries by incorporating a Term Similarity Matrix into the Language Model. Our approach is tested on the Cultural Heritage in CLEF (CHiC) collection, which consists of short queries and documents. The results show that the proposed approach is effective and improves retrieval accuracy.
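One plausible reading of folding a Term Similarity Matrix into a language model retrieval score is sketched below with toy numbers; the vocabulary, similarity values, and smoothing weight are all assumptions for illustration, not the paper's actual data.

```python
import numpy as np

# Query-likelihood language model with term-similarity expansion: raw document
# term counts are propagated through similarities before Jelinek-Mercer
# smoothing, so related terms can help a short query.
vocab = ["temple", "shrine", "painting", "vase"]
S = np.array([[1.0, 0.8, 0.1, 0.1],      # hypothetical term-term similarities
              [0.8, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.3],
              [0.1, 0.1, 0.3, 1.0]])
doc_counts = np.array([0.0, 5.0, 2.0, 0.0])    # document mentions "shrine" etc.

smoothed_counts = S @ doc_counts               # similarity-expanded counts
p_doc = smoothed_counts / smoothed_counts.sum()
p_coll = np.full(4, 0.25)                      # flat collection model
lam = 0.8
query = ["temple"]                             # a one-word (short) query
score = sum(np.log(lam * p_doc[vocab.index(t)] + (1 - lam) * p_coll[vocab.index(t)])
            for t in query)
print("query log-likelihood:", score)          # nonzero despite no exact match
```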
Donati, Maria Anna; Chiesi, Francesca; Primi, Caterina
2013-02-01
This study aimed at testing a model in which cognitive, dispositional, and social factors were integrated into a single perspective as predictors of gambling behavior. We also aimed at providing further evidence of gender differences related to adolescent gambling. Participants were 994 Italian adolescents (64% Males; Mean age = 16.57). Hierarchical logistic regressions attested the predictive power of the considered factors on at-risk/problem gambling - measured by administering the South Oaks Gambling Screen-Revised for Adolescents (SOGS-RA) - in both boys and girls. Sensation seeking and superstitious thinking were consistent predictors across gender, while probabilistic reasoning ability, the perception of the economic profitability of gambling, and peer gambling behavior were found to be predictors only among male adolescents, whereas parental gambling behavior had a predictive power in female adolescents. Findings are discussed referring to practical implications for preventive efforts toward adolescents' gambling problems.
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš; Tomassetti, Giuseppe
2018-06-01
A theory of elastic magnets is formulated under possible diffusion and heat flow, governed by Fick's and Fourier's laws in the deformed (Eulerian) configuration, respectively. The concepts of nonlocal nonsimple materials and viscous Cahn-Hilliard equations are used. The formulation of the problem uses the Lagrangian (reference) configuration, while the transport processes are pulled back. Except for the static problem, the demagnetizing energy is ignored and only local non-self-penetration is considered. The analysis, as far as existence of weak solutions of the (thermo)dynamical problem is concerned, is performed by a careful regularization and approximation by a Galerkin method, which also suggests a numerical strategy. Either ignoring or combining particular aspects, the model has numerous applications: ferro-to-paramagnetic transformation in elastic ferromagnets, diffusion of solvents in polymers possibly accompanied by magnetic effects (magnetic gels), or metal-hydride phase transformation in some intermetallics under diffusion of hydrogen possibly accompanied by magnetic effects (in particular ferro-to-antiferromagnetic phase transformation), all in the full thermodynamical context under large strains.
Coupling biology and oceanography in models.
Fennel, W; Neumann, T
2001-08-01
The dynamics of marine ecosystems, i.e. the changes of observable chemical-biological quantities in space and time, are driven by biological and physical processes. Predictions of future developments of marine systems need a theoretical framework, i.e. models, solidly based on research and understanding of the different processes involved. The natural way to describe marine systems theoretically seems to be the embedding of chemical-biological models into circulation models. However, while circulation models are relatively advanced the quantitative theoretical description of chemical-biological processes lags behind. This paper discusses some of the approaches and problems in the development of consistent theories and indicates the beneficial potential of the coupling of marine biology and oceanography in models.
Short-term electric power demand forecasting based on economic-electricity transmission model
NASA Astrophysics Data System (ADS)
Li, Wenfeng; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Wang, Yubin Mao; Wang, Jiangbo; He, Dandan
2018-04-01
Short-term electricity demand forecasting is basic work for ensuring the safe operation of the power system. In this paper, a practical economic-electricity transmission model (EETM) is built. With the intelligent adaptive modeling capabilities of Prognoz Platform 7.2, an econometric model consisting of the added value of three industries and income levels is first built, and the electricity demand transmission model is also built. Through multiple regression, moving averages, and seasonal decomposition, the problem of multicollinearity between variables is effectively overcome in EETM. The validity of EETM is proved by comparison with actual values for Henan Province. Finally, the EETM model is used to forecast electricity consumption for the first to fourth quarters of 2018.
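A rough sketch of the chain the abstract describes - seasonal decomposition plus regression of the trend on an economic driver - using statsmodels on synthetic quarterly data; the series and coefficients below are invented, not Henan Province data.

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

# Quarterly demand is split into trend and seasonal parts; the trend is
# regressed on an economic driver (e.g., industrial added value), and the
# forecast recombines trend and seasonality.
quarters = np.arange(24)
added_value = 100 + 3 * quarters                          # economic driver
demand = 50 + 0.8 * added_value + 10 * np.sin(2 * np.pi * quarters / 4)

parts = seasonal_decompose(demand, model="additive", period=4)
trend = parts.trend                                       # NaN-padded at edges
mask = ~np.isnan(trend)
slope, intercept = np.polyfit(added_value[mask], trend[mask], 1)

next_added_value = 100 + 3 * 24                           # driver for quarter 25
forecast = slope * next_added_value + intercept + parts.seasonal[0]
print(f"next-quarter demand forecast: {forecast:.1f}")
```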
Ma, Julie; Grogan-Kaylor, Andrew
2017-01-01
Neighborhood and parenting influences on early behavioral outcomes are strongly dependent upon a child's stage of development. However, little research has jointly considered the longitudinal associations of neighborhood and parenting processes with behavior problems in early childhood. To address this limitation, this study explores the associations of neighborhood collective efficacy and maternal corporal punishment with the longitudinal patterns of early externalizing and internalizing behavior problems. The study sample consisted of 3,705 families from a nationally representative cohort study of urban families. Longitudinal multilevel models examined the associations of collective efficacy and corporal punishment with behavior problems at age 3, as well as with patterns of behavior problems between the ages 3 to 5. Interactions between the main predictors and child age tested whether neighborhood and parent relationships with child behavior varied over time. Mediation analysis examined whether neighborhood influences on child behavior were mediated by parenting. The models controlled for a comprehensive set of possible confounders at the child, parent, and neighborhood levels. Results indicate that both maternal corporal punishment and low neighborhood collective efficacy were significantly associated with increased behavior problems. The significant interaction between collective efficacy and child age with internalizing problems suggests that neighborhood influences on internalizing behavior were stronger for younger children. The indirect effect of low collective efficacy on behavior problems through corporal punishment was not significant. These findings highlight the importance of multilevel interventions that promote both neighborhood collective efficacy and non-physical discipline in early childhood.
Deighton, Jessica; Humphrey, Neil; Belsky, Jay; Boehnke, Jan; Vostanis, Panos; Patalay, Praveetha
2018-03-01
There is a growing appreciation that child functioning in different domains, levels, or systems is interrelated over time. Here, we investigate links between internalizing symptoms, externalizing problems, and academic attainment during middle childhood and early adolescence, drawing on two large data sets (child: mean age 8.7 at enrolment, n = 5,878; adolescent: mean age 11.7, n = 6,388). Using a 2-year cross-lag design, we test three hypotheses - adjustment erosion, academic incompetence, and shared risk - while also examining the moderating influence of gender. Multilevel structural equation models provided consistent evidence of the deleterious effect of externalizing problems on later academic achievement in both cohorts, supporting the adjustment-erosion hypothesis. Evidence supporting the academic-incompetence hypothesis was restricted to the middle childhood cohort, revealing links between early academic failure and later internalizing symptoms. In both cohorts, inclusion of shared-risk variables improved model fit and rendered some previously established cross-lag pathways non-significant. Implications of these findings are discussed, and study strengths and limitations noted.

Statement of contribution. What is already known on this subject? Longitudinal research, and in particular the developmental cascades literature, makes the case for weaker associations between internalizing symptoms and academic performance than between externalizing problems and academic performance. Findings vary in terms of the magnitude and inferred direction of effects. Inconsistencies may be explained by different age ranges, the prevalence of small-to-modest sample sizes, and large time lags between measurement points. Gender differences remain underexamined.

What does this study add? The present study used cross-lagged models to examine longitudinal associations in two age groups (middle childhood and adolescence) in a large-scale British sample. The large sample size not only allows for improvements on previous measurement models (e.g., allowing the analysis to account for nesting, and estimation of latent variables) but also allows for examination of gender differences. The findings clarify the role of shared-risk factors in accounting for associations between internalizing, externalizing, and academic performance, by demonstrating that shared-risk factors do not fully account for these relationships. Specifically, some pathways between mental health and academic attainment consistently remain, even after shared-risk variables have been accounted for. Findings also present consistent support for the potential impact of behavioural problems on children's academic attainment. The negative relationship between low academic attainment and subsequent internalizing symptoms for younger children is also noteworthy.
Non-erotic thoughts, attentional focus, and sexual problems in a community sample.
Nelson, Andrea L; Purdon, Christine
2011-04-01
According to Barlow's model of sexual dysfunction, anxiety in sexual situations leads to attentional focus on sexual performance at the expense of erotic cues, which compromises sexual arousal. This negative experience will enhance anxiety in future sexual situations, and non-erotic thoughts (NETs) relevant to performance will receive attentional priority. Previous research with student samples (Purdon & Holdaway, 2006; Purdon & Watson, 2010) has found that people experience many types of NETs in addition to performance-relevant thoughts, and that, consistent with Barlow's model, the frequency of and anxiety evoked by these thoughts is positively associated with sexual problems. Extending this previous work, the current study found that, in a community sample of women (N = 81) and men (N = 72) in long-term relationships, women were more likely to report body image concerns and external consequences of the sexual activity, while men were more likely to report performance-related concerns. Equally likely among men and women were thoughts about emotional consequences of the sexual activity. Regardless of thought content, experiencing more frequent NETs was associated with more sexual problems in both women and men. Moreover, as per Barlow's model, greater negative affect in anticipation of and during sexual activity predicted greater frequency of NETs and greater anxiety in response to NETs was associated with greater difficulty dismissing the thoughts. However, greater difficulty in refocusing on erotic thoughts during sexual activity uniquely predicted more sexual problems above the frequency and dismissability of NETs. Together, these data support the cognitive interference mechanism implicated by Barlow's causal model of sexual dysfunction and have implications for the treatment of sexual problems.
Association Fields via Cuspless Sub-Riemannian Geodesics in SE(2).
Duits, R; Boscain, U; Rossi, F; Sachkov, Y
To model association fields that underlie perceptual organization (gestalt) in psychophysics we consider the problem P_curve of minimizing ∫₀^ℓ √(ξ² + κ²(s)) ds for a planar curve having fixed initial and final positions and directions. Here κ(s) is the curvature of the curve, ξ > 0 is a constant, and the total length ℓ is free. This problem comes from a model of geometry of vision due to Petitot (in J. Physiol. Paris 97:265-309, 2003; Math. Inf. Sci. Humaines 145:5-101, 1999), and Citti & Sarti (in J. Math. Imaging Vis. 24(3):307-326, 2006). In previous work we proved that the range R of the exponential map of the underlying geometric problem formulated on SE(2) consists of precisely those end-conditions (x_fin, y_fin, θ_fin) that can be connected by a globally minimizing geodesic starting at the origin (x_in, y_in, θ_in) = (0,0,0). From the applied imaging point of view it is relevant to analyze the sub-Riemannian geodesics and R in detail. In this article we: show that R is contained in the half space x ≥ 0 and that (0, y_fin) ≠ (0,0) is reached with angle π; show that the boundary ∂R consists of endpoints of minimizers either starting or ending in a cusp; analyze and plot the cones of reachable angles θ_fin per spatial endpoint (x_fin, y_fin); relate the endings of association fields to ∂R and compute the length towards a cusp; analyze the exponential map both with the common arc-length parametrization t in the sub-Riemannian manifold on SE(2) and with spatial arc-length parametrization s in the plane, where, surprisingly, s-parametrization simplifies the exponential map, the curvature formulas, the cusp surface, and the boundary value problem; present a novel efficient algorithm solving the boundary value problem; show that sub-Riemannian geodesics solve Petitot's circle bundle model (cf. Petitot in J. Physiol. Paris 97:265-309, 2003); and show a clear similarity between association field lines and sub-Riemannian geodesics.
Effect of conductor geometry on source localization: Implications for epilepsy studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlitt, H.; Heller, L.; Best, E.
1994-07-01
We shall discuss the effects of conductor geometry on source localization for applications in epilepsy studies. The most popular conductor model for clinical MEG studies is a homogeneous sphere. However, several studies have indicated that a sphere is a poor model for the head when the sources are deep, as is the case for epileptic foci in the mesial temporal lobe. We believe that replacing the spherical model with a more realistic one in the inverse fitting procedure will improve the accuracy of localizing epileptic sources. In order to include a realistic head model in the inverse problem, we must first solve the forward problem for the realistic conductor geometry. We create a conductor geometry model from MR images, and then solve the forward problem via a boundary integral equation for the electric potential due to a specified primary source. Once the electric potential is known, the magnetic field can be calculated directly. The most time-intensive part of the problem is generating the conductor model; fortunately, this needs to be done only once for each patient. It takes little time to change the primary current and calculate a new magnetic field for use in the inverse fitting procedure. We present the results of a series of computer simulations in which we investigate the localization accuracy achieved by replacing the spherical model with the realistic head model in the inverse fitting procedure. The data to be fit consist of a computer-generated magnetic field due to a known current dipole in a realistic head model, with added noise. We compare the localization errors when this field is fit using a spherical model to those obtained using a realistic head model. Using a spherical model is comparable to what is usually done when localizing epileptic sources in humans, where the conductor model used in the inverse fitting procedure does not correspond to the actual head.
Strange particles from NEXUS 3
NASA Astrophysics Data System (ADS)
Werner, K.; Liu, F. M.; Ostapchenko, S.; Pierog, T.
2004-01-01
After discussing conceptual problems with the conventional string model, we present a new approach, based on a theoretically consistent multiple scattering formalism. First results for strange particle production in proton-proton scattering at 158 GeV and 200 GeV centre-of-mass (cms) are discussed. This paper was presented at Strange Quark Matter Conference, Atlantic Beach, North Carolina, 12-17 March 2003.
ERIC Educational Resources Information Center
Dwyer, Christopher P.; Hogan, Michael J.; Harney, Owen M.; Kavanagh, Caroline
2017-01-01
Critical thinking (CT) is a metacognitive process, consisting of a number of sub-skills and dispositions that, when used appropriately, increases the chances of producing a logical conclusion to an argument or solution to a problem. Though the CT literature argues that dispositions are as important to CT as is the ability to perform CT skills, the…
An Analysis of Internet Addiction Levels of Individuals according to Various Variables
ERIC Educational Resources Information Center
Sahin, Cengiz
2011-01-01
The concept of internet addiction refers to the excessive use of internet which in turn causes various problems in individual, social and professional aspects. The aim of this study was to determine internet addiction levels of internet users from all age groups. The study used survey model. Study group of the study consisted of a total of 596…
A three-stage heuristic for harvest scheduling with access road network development
Mark M. Clark; Russell D. Meller; Timothy P. McDonald
2000-01-01
In this article we present a new model for the scheduling of forest harvesting with spatial and temporal constraints. Our approach is unique in that we incorporate access road network development into the harvest scheduling selection process. Due to the difficulty of solving the problem optimally, we develop a heuristic that consists of a solution construction stage...
Markov Chains For Testing Redundant Software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1990-01-01
Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system, in the sense that it takes more than one failure of the control program to cause the controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with a Markov model for each step.
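The inertia idea can be captured in a small Markov chain whose states count consecutive control-program failures, with system failure only after two in a row; the per-frame failure probability below is purely illustrative and not from the NASA design.

```python
import numpy as np

# States count consecutive control-program failures; state 2 (two failures in
# a row) is absorbing and represents failure of the controlled system.
p = 1e-4                               # illustrative per-frame failure probability
P = np.array([[1 - p,     p, 0.0],
              [1 - p,   0.0,   p],
              [0.0,     0.0, 1.0]])

state = np.array([1.0, 0.0, 0.0])      # start with no recent failures
for _ in range(10_000):                # 10,000 control frames
    state = state @ P
print(f"P(system failure within 10,000 frames) ~ {state[2]:.2e}")
```

Because absorption requires two consecutive failures, the failure probability scales roughly with p² per frame rather than p, which is the quantitative payoff of modeling the controlled system's inertia.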
ERIC Educational Resources Information Center
Currie, Winifred
Reported are results of screening over 1,000 eighth or ninth grade students for learning disabilities, and suggested is an intervention program utilizing available local resources. The Currie-Milonas Screening Test is described as consisting of eight subtests to identify problems in the basic skills of reading, writing, language, or mathematics.…
The Relationship between the Amount of Learning and Time (The Example of Equations)
ERIC Educational Resources Information Center
Kesan, Cenk; Kaya, Deniz; Ok, Gokce; Erkus, Yusuf
2016-01-01
The main purpose of this study is to determine the amount of time-dependent learning of "solving problems that require establishing of single variable equations of the first order" of the seventh grade students. The study, adopting the screening model, consisted of a total of 84 students, including 42 female and 42 male students at the…
Elastic parabolic equation solutions for underwater acoustic problems using seismic sources.
Frank, Scott D; Odom, Robert I; Collis, Jon M
2013-03-01
Several problems of current interest involve elastic bottom range-dependent ocean environments with buried or earthquake-type sources, specifically oceanic T-wave propagation studies and interface wave related analyses. Additionally, observed deep shadow-zone arrivals are not predicted by ray theoretic methods, and attempts to model them with fluid-bottom parabolic equation solutions suggest that it may be necessary to account for elastic bottom interactions. In order to study energy conversion between elastic and acoustic waves, current elastic parabolic equation solutions must be modified to allow for seismic starting fields for underwater acoustic propagation environments. Two types of elastic self-starter are presented. An explosive-type source is implemented using a compressional self-starter and the resulting acoustic field is consistent with benchmark solutions. A shear wave self-starter is implemented and shown to generate transmission loss levels consistent with the explosive source. Source fields can be combined to generate starting fields for source types such as explosions, earthquakes, or pile driving. Examples demonstrate the use of source fields for shallow sources or deep ocean-bottom earthquake sources, where down slope conversion, a known T-wave generation mechanism, is modeled. Self-starters are interpreted in the context of the seismic moment tensor.
Bayesian multi-task learning for decoding multi-subject neuroimaging data.
Marquand, Andre F; Brammer, Michael; Williams, Steven C R; Doyle, Orla M
2014-05-15
Decoding models based on pattern recognition (PR) are becoming increasingly important tools for neuroimaging data analysis. In contrast to alternative (mass-univariate) encoding approaches that use hierarchical models to capture inter-subject variability, inter-subject differences are not typically handled efficiently in PR. In this work, we propose to overcome this problem by recasting the decoding problem in a multi-task learning (MTL) framework. In MTL, a single PR model is used to learn different but related "tasks" simultaneously. The primary advantage of MTL is that it makes more efficient use of the data available and leads to more accurate models by making use of the relationships between tasks. In this work, we construct MTL models where each subject is modelled by a separate task. We use a flexible covariance structure to model the relationships between tasks and induce coupling between them using Gaussian process priors. We present an MTL method for classification problems and demonstrate a novel mapping method suitable for PR models. We apply these MTL approaches to classifying many different contrasts in a publicly available fMRI dataset and show that the proposed MTL methods produce higher decoding accuracy and more consistent discriminative activity patterns than currently used techniques. Our results demonstrate that MTL provides a promising method for multi-subject decoding studies by focusing on the commonalities between a group of subjects rather than the idiosyncratic properties of different subjects.
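The coupling idea - one GP whose kernel multiplies a between-task (between-subject) covariance by an input kernel - is the intrinsic coregionalization construction; a bare-bones numpy sketch with synthetic data and an assumed task covariance follows. The paper's actual covariance structure and inference are richer than this.

```python
import numpy as np

# Multi-task GP regression sketch: K((x,i),(x',j)) = B[i,j] * k_rbf(x, x'),
# so data from related subjects sharpens each subject's predictor.
rng = np.random.default_rng(5)

def k_rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

B = np.array([[1.0, 0.7],              # hypothetical between-subject covariance
              [0.7, 1.0]])
x = rng.uniform(-3, 3, 20)
task = rng.integers(0, 2, 20)          # which subject each sample comes from
y = np.sin(x) + 0.3 * task + 0.1 * rng.normal(size=20)

K = B[task][:, task] * k_rbf(x, x)     # coupled multi-task kernel matrix
alpha_ = np.linalg.solve(K + 0.01 * np.eye(20), y)

x_star, task_star = np.array([0.5]), 0               # predict for subject 0
k_star = B[task_star, task] * k_rbf(x_star, x)[0]
print("GP mean prediction:", k_star @ alpha_)
```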
A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics
NASA Astrophysics Data System (ADS)
Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.
2017-12-01
The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes, from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases, and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔG_r) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return the chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g. density, heat capacity, and entropy), but the system is not constrained to be in thermodynamic equilibrium. Allowing metastable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, closed systems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well-understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as other rheologies.
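For concreteness, the affinity of a stoichiometric reaction is A = -ΔG_r = -Σ_i ν_i μ_i, where ν_i are the stoichiometric coefficients (positive for products, negative for reactants) and μ_i the chemical potentials. A minimal sketch, with placeholder potentials rather than values from any thermodynamic database, and a simple linear disequilibrium rate law r = kA assumed purely for illustration:

```python
# Affinity as a scalar measure of disequilibrium, A = -ΔG_r = -Σ_i ν_i μ_i.
# All numbers below are illustrative placeholders.

def affinity(nu, mu):
    """A = -sum_i nu_i * mu_i (J/mol); products have nu_i > 0, reactants < 0.
    A > 0 means the forward reaction is thermodynamically favoured;
    A = 0 means equilibrium."""
    return -sum(n * m for n, m in zip(nu, mu))

# Hypothetical one-reactant, one-product reaction (nu = -1 -> nu = +1):
nu = [-1.0, 1.0]
mu = [-2.051e6, -2.053e6]       # placeholder chemical potentials, J/mol
A = affinity(nu, mu)            # +2000 J/mol: forward reaction favoured
rate = 1e-12 * A                # assumed linear disequilibrium kinetics, r = k*A
print(f"affinity = {A:.3e} J/mol, reaction rate = {rate:.3e}")
```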
Where do golf driver swings go wrong? Factors influencing driver swing consistency.
Zhang, X; Shan, G
2014-10-01
One of the most challenging skills in golf is the driver swing. There have been a large number of studies characterizing golf swings, yielding insightful instructions on how to swing well. As a result, achieving a sub-18 handicap is no longer the top problem for golfers. Instead, players are most troubled by a lack of consistency in swing execution. The goal of this study was to determine how to execute good golf swings consistently. Using 3D motion capture and full-body biomechanical modeling, 22 experienced golfers were analysed. Nineteen selected parameters (13 angles, 4 time parameters, and 2 distances) were used to characterize both successful and failed swings. The results showed that 14 parameters are highly sensitive and/or prone to motor-control variations. These parameters localized swing variability to five distinct areas: (a) ball positioning, (b) transverse club angle, (c) transition, (d) wrist control, and (e) posture migration between takeaway and impact. Suggestions are provided for how to address these five problem areas. We hope these findings on achieving consistency in golf swings will benefit all levels of golf pedagogy and help maintain and develop interest in golf and physical activity as part of a healthy lifestyle.
Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project
NASA Astrophysics Data System (ADS)
Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy
2018-03-01
In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a non-linear discrete programming problem: minimizing the execution time of a set of interrelated works carried out by a limited number of partially interchangeable performers, subject to a limit on the total cost of performing the work. The proposed model and algorithm form the basis for creating specific organizational management methods for high-rise construction projects.
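The problem shape described here, minimizing makespan over assignments of works to partially interchangeable performers under a total-cost cap, can be illustrated with a tiny brute-force enumeration. The data and the assumption that each performer executes its assigned works sequentially are invented for illustration; the paper's actual algorithm is not reproduced.

```python
import itertools

# Assign each work to one of its allowed performers so that the makespan
# is minimised while total cost stays under a budget.  Numbers are
# illustrative, not from the paper.

durations = {            # durations[(work, performer)]; only allowed pairs listed
    (0, "A"): 4, (0, "B"): 6,
    (1, "A"): 3, (1, "C"): 5,
    (2, "B"): 2, (2, "C"): 4,
}
cost = {k: 10 * d for k, d in durations.items()}   # cost proportional to duration here
budget = 120
works = sorted({w for w, _ in durations})
options = [[p for (w, p) in durations if w == wk] for wk in works]

best = None
for assign in itertools.product(*options):
    pairs = list(zip(works, assign))
    if sum(cost[p] for p in pairs) > budget:       # total-cost constraint
        continue
    load = {}                                      # each performer works sequentially
    for w, p in pairs:
        load[p] = load.get(p, 0) + durations[(w, p)]
    makespan = max(load.values())
    if best is None or makespan < best[0]:
        best = (makespan, pairs)

print(best)   # -> (5, [(0, 'A'), (1, 'C'), (2, 'B')]) for this data
```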
Computational Everyday Life Human Behavior Model as Serviceable Knowledge
NASA Astrophysics Data System (ADS)
Motomura, Yoichi; Nishida, Yoshifumi
The project called "Open life matrix" is not only a research activity but also real-world problem solving in the form of action research. The concept is realized through large-scale data collection, construction of probabilistic causal structure models, and provision of information services based on those models. One concrete outcome of the project is a childhood injury prevention activity carried out by a new team consisting of a hospital, government, and researchers from many fields. The main result of the project is a general methodology for applying probabilistic causal structure models as serviceable knowledge for action research. In this paper, a summary of the project is given and future directions that emphasize action research driven by artificial intelligence technology are discussed.
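As a toy illustration of what a probabilistic causal structure model delivers as serviceable knowledge, the sketch below hand-codes a three-node Bayesian network for injury risk and evaluates a "remove the hazard" intervention by enumeration. All probabilities are made up; the project's real models are learned from large-scale injury data.

```python
# Three-node network: behavior -> injury <- environment, with hand-written
# conditional probability tables (illustrative only).

p_env_hazard = 0.2                          # P(environment is hazardous)
p_behavior_risky = 0.3                      # P(child behavior is risky)

p_injury = {                                # P(injury | behavior, environment)
    (True,  True ): 0.40,
    (True,  False): 0.10,
    (False, True ): 0.08,
    (False, False): 0.01,
}

def risk(p_env):
    # Marginal injury probability by enumerating all parent configurations.
    total = 0.0
    for risky in (True, False):
        for hazard in (True, False):
            pw = (p_behavior_risky if risky else 1 - p_behavior_risky) * \
                 (p_env if hazard else 1 - p_env)
            total += pw * p_injury[(risky, hazard)]
    return total

print("baseline injury risk:      ", round(risk(p_env_hazard), 4))
print("risk after removing hazard:", round(risk(0.0), 4))
```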
Conformity and Dissonance in Generalized Voter Models
NASA Astrophysics Data System (ADS)
Page, Scott E.; Sander, Leonard M.; Schneider-Mizell, Casey M.
2007-09-01
We generalize the voter model to include social forces that produce conformity among voters and avoidance of cognitive dissonance of opinions within a voter. The time for both conformity and consistency (which we call the exit time) is, in general, much longer than for either process alone. We show that our generalized model can be applied quite widely: it is a form of Wright's island model of population genetics, and is related to problems in the physical sciences. We give scaling arguments, numerical simulations, and analytic estimates for the exit time for a range of relative strengths in the tendency to conform and to avoid dissonance.
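A minimal simulation in the spirit of this generalized voter model: each voter holds binary opinions on several issues and, at each step, either imitates another voter on one issue (conformity) or copies one of its own opinions onto another issue (dissonance reduction). The update rules and parameters are an illustrative reading of the abstract, not the paper's exact dynamics; the step count at absorption plays the role of the exit time.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, p_conform = 50, 3, 0.5                  # voters, issues, conformity probability
opinions = rng.integers(0, 2, size=(N, M))    # binary opinion matrix

def consensus_reached(op):
    # Exit condition: all voters agree on all issues (internal consistency included).
    return len(np.unique(op)) == 1

steps = 0
while not consensus_reached(opinions) and steps < 10 ** 6:
    i = rng.integers(N)
    if rng.random() < p_conform:
        j = rng.integers(N)                   # conformity: imitate another voter
        k = rng.integers(M)
        opinions[i, k] = opinions[j, k]
    else:
        k1, k2 = rng.integers(M, size=2)      # dissonance reduction: align own opinions
        opinions[i, k2] = opinions[i, k1]
    steps += 1

print("exit time (steps):", steps)
```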
Kutepov, A L
2015-08-12
Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (the GW approximation), Γ₁ from first-order perturbation theory, and the exact vertex Γ(E)). A comparison is made between the cases in which an additional quasiparticle (QP) approximation for the Green's functions is applied during the self-consistent iterative solution of HE and in which it is not. The results obtained with the exact vertex bear directly on the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT? It is shown that in the regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. Applying QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the overall accuracy of the schemes is provided.
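The two-site Hubbard model that serves as the test bed here is exactly solvable, which is what makes it a clean benchmark for vertex approximations. A minimal exact diagonalization of the half-filled Sz = 0 sector (hopping t, on-site repulsion U) reproduces the known ground-state energy E0 = (U - sqrt(U^2 + 16 t^2))/2; the basis ordering and sign conventions below are one standard choice, and the spectrum is basis-independent.

```python
import numpy as np

t, U = 1.0, 4.0   # strong-correlation regime when U >> t

# Sz = 0 basis: |up, down>, |down, up>, |updown, 0>, |0, updown>
H = np.array([
    [0.0, 0.0, -t,  -t ],
    [0.0, 0.0,  t,   t ],
    [-t,   t,   U,  0.0],
    [-t,   t,  0.0,  U ],
])

E = np.linalg.eigvalsh(H)                              # ascending eigenvalues
E0_exact = 0.5 * (U - np.sqrt(U ** 2 + 16 * t ** 2))   # analytic ground state
print("spectrum:", np.round(E, 6))
print("ground state:", E[0], "analytic:", E0_exact)
```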
A simple model for the dependence on local detonation speed of the product entropy
NASA Astrophysics Data System (ADS)
Hetherington, David C.; Whitworth, Nicholas J.
2012-03-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has largely been upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain two desirable outcomes: the model of the detonation wave is internally consistent, and s is realistically correlated with D.
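A sketch of the constraint idea just described: treat the JWL relation P_s(V) = A e^(-R1 V) + B e^(-R2 V) + C V^(-(1+ω)) as the local expansion isentrope and refit one parameter so the isentrope passes through the local CJ state implied by the local detonation speed D. Which parameter is refit, the polytropic CJ relations used, and all numerical constants are assumptions for illustration, not the paper's calibration.

```python
import math

# Placeholder JWL constants (Pa) and initial state; not calibrated to any explosive.
A, B, R1, R2, w = 3.0e11, 5.0e9, 4.6, 1.3, 0.38
rho0 = 1800.0                           # initial density, kg/m^3
gamma = 3.0                             # assumed CJ polytropic index

def jwl_isentrope(V, C):
    # Pressure on the JWL principal isentrope at relative volume V.
    return A * math.exp(-R1 * V) + B * math.exp(-R2 * V) + C * V ** -(1.0 + w)

def refit_C(D):
    # Local CJ state from D via standard polytropic relations:
    P_cj = rho0 * D ** 2 / (gamma + 1.0)
    V_cj = gamma / (gamma + 1.0)        # relative volume at CJ
    # Solve P_s(V_cj) = P_cj for the C term only (choice of C is an assumption):
    residual = P_cj - (A * math.exp(-R1 * V_cj) + B * math.exp(-R2 * V_cj))
    return residual * V_cj ** (1.0 + w)

for D in (6500.0, 7500.0, 8500.0):      # degraded to nominal detonation speeds, m/s
    C = refit_C(D)
    print(f"D = {D:7.1f} m/s  ->  C = {C:.3e} Pa")
```

Because C is refit locally, cells burned at a degraded D lie on a different local isentrope than fully driven cells, which is precisely what restores the physically expected correlation between product entropy and detonation speed.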
A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)
NASA Astrophysics Data System (ADS)
Hetherington, David
2011-06-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.