Sample records for independent set problem

  1. Ranking Specific Sets of Objects.

    PubMed

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking, jointly satisfying properties such as dominance and independence, on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or might not satisfy some background theory. In this paper, we treat the computational problem of whether, given a ranking on the elements, an order satisfying different variants of dominance and independence can be found on a given subset of the power set. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  2. The Impact of Problem Sets on Student Learning

    ERIC Educational Resources Information Center

    Kim, Myeong Hwan; Cho, Moon-Heum; Leonard, Karen Moustafa

    2012-01-01

    The authors examined the role of problem sets in student learning in university microeconomics. A total of 126 students participated in the study in consecutive years. An independent-samples t test showed that students who were not given answer keys outperformed students who were given answer keys. Multiple regression analysis showed that, along with…

  3. GreedyMAX-type Algorithms for the Maximum Independent Set Problem

    NASA Astrophysics Data System (ADS)

    Borowiecki, Piotr; Göring, Frank

    The maximum independent set problem for a simple graph G = (V,E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and it is also hard to approximate. Within this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v∈V(G)} p(v) is called the potential of G. For any graph, P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G. Moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives a closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least ∑_{v∈V(G)} (p(v)+1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
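
    The potential-based algorithms above generalize the classical GreedyMAX heuristic, which repeatedly deletes a vertex of maximum degree until no edges remain. Below is a minimal sketch of that baseline only (the GreedyMAX-type variants would replace the degree criterion with the potential p(v)); the function and example graph are illustrative, not taken from the paper.

    ```python
    import networkx as nx

    def greedy_max_independent_set(G):
        """Classical GreedyMAX heuristic: repeatedly delete a vertex of maximum
        degree until no edges remain; the surviving vertices are pairwise
        nonadjacent, i.e. they form an independent set."""
        H = G.copy()
        while H.number_of_edges() > 0:
            v = max(H.nodes, key=H.degree)   # a highest-degree vertex
            H.remove_node(v)
        return set(H.nodes)

    # Example: on a 5-cycle the heuristic returns an independent set of size 2.
    print(greedy_max_independent_set(nx.cycle_graph(5)))
    ```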

  4. Experimentation in machine discovery

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Simon, Herbert A.

    1990-01-01

    KEKADA, a system that is capable of carrying out a complex series of experiments on problems from the history of science, is described. The system incorporates a set of experimentation strategies that were extracted from the traces of the scientists' behavior. It focuses on surprises to constrain its search, and uses its strategies to generate hypotheses and to carry out experiments. Some strategies are domain-independent, whereas others incorporate knowledge of a specific domain. The domain-independent strategies include magnification, determining scope, divide and conquer, factor analysis, and relating different anomalous phenomena. KEKADA represents an experiment as a set of independent and dependent entities, with apparatus variables and a goal. It represents a theory either as a sequence of processes or as abstract hypotheses. KEKADA's response to a particular problem in biochemistry is described. On this and other problems, the system is capable of carrying out a complex series of experiments to refine domain theories. Analysis of the system and its behavior on a number of different problems has established its generality, but it has also revealed the reasons why the system would not be a good experimental scientist.

  5. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
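
    For illustration only: the sketch below shows the dynamic-programming idea on the special case where the input graph is itself a tree, keeping for every vertex the best subtree weight with and without that vertex. INDDGO's actual algorithms operate on tree decompositions of general graphs and are considerably more involved; the function name and example data here are hypothetical.

    ```python
    import sys
    sys.setrecursionlimit(100000)

    def mwis_on_tree(adj, weight, root=0):
        """Maximum weighted independent set on a tree: for each vertex v keep the
        best subtree weight with v included (inc) and with v excluded (exc).
        adj: dict vertex -> list of neighbours; weight: dict vertex -> weight."""
        def solve(v, parent):
            inc, exc = weight[v], 0
            for u in adj[v]:
                if u == parent:
                    continue
                inc_u, exc_u = solve(u, v)
                inc += exc_u                 # v is in the set, so children are out
                exc += max(inc_u, exc_u)     # v is out, so children are free
            return inc, exc

        return max(solve(root, None))

    # Example: the path 0-1-2-3 with weights 1, 4, 2, 3; the optimum {1, 3} has weight 7.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    weight = {0: 1, 1: 4, 2: 2, 3: 3}
    print(mwis_on_tree(adj, weight))   # 7
    ```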

  6. A synchronous game for binary constraint systems

    NASA Astrophysics Data System (ADS)

    Kim, Se-Jin; Paulsen, Vern; Schafhauser, Christopher

    2018-03-01

    Recently, Slofstra proved that the set of quantum correlations is not closed. We prove that the set of synchronous quantum correlations is not closed, which implies his result, by giving an example of a synchronous game that has a perfect quantum approximate strategy but no perfect quantum strategy. We also exhibit a graph for which the quantum independence number and the quantum approximate independence number are different. We prove new characterisations of synchronous quantum approximate correlations and synchronous quantum spatial correlations. We solve the synchronous approximation problem of Dykema and the second author, which yields a new equivalence of Connes' embedding problem in terms of synchronous correlations.

  7. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems'' (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely be different than the traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems that seek not only optimizing policies but also the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of a new system, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results for nonlinear dynamic optimization problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.
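
    The parameter-adjustment method mentioned above is based on simultaneous perturbation stochastic approximation (SPSA). The following is a rough, generic sketch of that idea, not the thesis's implementation; the gain schedules and the toy loss function are illustrative assumptions.

    ```python
    import numpy as np

    def spsa_minimize(loss, theta, n_iter=500, a=0.1, c=0.1, seed=0):
        """Simultaneous perturbation stochastic approximation: estimate the
        gradient of a (possibly noisy) loss from just two evaluations per
        iteration, using a random +/-1 perturbation of every parameter at once."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta, dtype=float)
        for k in range(1, n_iter + 1):
            ak, ck = a / k ** 0.602, c / k ** 0.101      # illustrative gain schedules
            delta = rng.choice([-1.0, 1.0], size=theta.shape)
            g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
            theta = theta - ak * g_hat
        return theta

    # Toy example: a noisy quadratic whose minimum is near (1, -2).
    def noisy(th):
        return (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2 + 0.01 * np.random.randn()

    print(spsa_minimize(noisy, [0.0, 0.0]))
    ```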

  8. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing the necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of those algorithms were designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
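
    For context, a sequential sketch of the round structure behind Luby-style randomized MIS algorithms is shown below. It is a simplified assumption of how such a round can look; the paper's contribution is the distributed-memory formulation, which this single-process code does not capture.

    ```python
    import random
    import networkx as nx

    def luby_mis(G, seed=0):
        """Sequentially simulated sketch of a Luby-style randomized MIS round
        structure: every live vertex draws a random value, vertices whose value
        beats all live neighbours join the independent set, and winners plus
        their neighbours are removed before the next round."""
        rng = random.Random(seed)
        live = set(G.nodes)
        mis = set()
        while live:
            r = {v: rng.random() for v in live}
            winners = {v for v in live
                       if all(r[v] < r[u] for u in G.neighbors(v) if u in live)}
            mis |= winners
            removed = set(winners)
            for v in winners:
                removed.update(u for u in G.neighbors(v) if u in live)
            live -= removed
        return mis

    G = nx.erdos_renyi_graph(30, 0.2, seed=1)
    mis = luby_mis(G)
    print(len(mis), all(not G.has_edge(u, v) for u in mis for v in mis if u != v))
    ```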

  9. An Examination of High School Students' Online Engagement in Mathematics Problems

    ERIC Educational Resources Information Center

    Lim, Woong; Son, Ji-Won; Gregson, Susan; Kim, Jihye

    2018-01-01

    This article examines high school students' engagement in a set of trigonometry problems. Students completed this task independently in an online environment with access to Internet search engines, online textbooks, and YouTube videos. The findings imply that students have the resourcefulness to solve procedure-based mathematics problems in an…

  10. Promoting Homework Independence for Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Hampshire, Patricia Korzekwa; Butera, Gretchen D.; Dustin, Timothy J.

    2014-01-01

    For students with autism, homework time may be especially challenging due to problems in self-organization and difficulties generalizing skills from one setting to another. Although often problematic, homework can provide a valuable context for teaching organizational skills that become essential as students become more independent. By learning to…

  11. Graphs and matroids weighted in a bounded incline algebra.

    PubMed

    Lu, Ling-Xia; Zhang, Bei

    2014-01-01

    Firstly, for a graph weighted in a bounded incline algebra (or called a dioid), a longest path problem (LPP, for short) is presented, which can be considered the uniform approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.

  12. Role of diversity in ICA and IVA: theory and applications

    NASA Astrophysics Data System (ADS)

    Adalı, Tülay

    2016-05-01

    Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity (i.e., statistical properties) and can be posed to simultaneously account for multiple types of diversity such as higher-order statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
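
    A minimal illustration of the basic ICA setting described above, using scikit-learn's FastICA on a toy two-source mixture (an assumed example, not the author's code). The recovered sources are only defined up to the scaling and permutation ambiguity noted in the abstract, and no IVA step is included.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    # Two independent sources, mixed by a linear mixing matrix A that ICA never sees.
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # sine wave and square wave
    A = np.array([[1.0, 0.5], [0.4, 1.2]])             # mixing matrix
    X = S @ A.T                                        # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)   # recovered sources, up to scaling and permutation
    print(S_hat.shape)             # (2000, 2)
    ```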

  13. Prospective Effects of Violence Exposure across Multiple Contexts on Early Adolescents' Internalizing and Externalizing Problems

    ERIC Educational Resources Information Center

    Mrug, Sylvie; Windle, Michael

    2010-01-01

    Background: Violence exposure within each setting of community, school, or home has been linked with internalizing and externalizing problems. Although many children experience violence in multiple contexts, the effects of such cross-contextual exposure have not been studied. This study addresses this gap by examining independent and interactive…

  14. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
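
    For reference, a bare-bones sketch of the WalkSAT-style local search that the abstract names as the biased baseline is given below (illustrative only; it is not TemperSAT and makes no attempt at fair sampling of independent solutions).

    ```python
    import random

    def walksat(clauses, n_vars, p=0.5, max_flips=100000, seed=0):
        """Minimal WalkSAT sketch: start from a random assignment and repeatedly
        pick an unsatisfied clause; with probability p flip a random variable in
        it, otherwise flip the variable that leaves the fewest clauses unsatisfied.
        Clauses are lists of nonzero ints (DIMACS-style literals)."""
        rng = random.Random(seed)
        assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return assign                    # satisfying assignment found
            clause = rng.choice(unsat)
            if rng.random() < p:
                var = abs(rng.choice(clause))    # random walk move
            else:
                def broken(v):                   # clauses unsatisfied after flipping v
                    assign[v] = not assign[v]
                    n = sum(not sat(c) for c in clauses)
                    assign[v] = not assign[v]
                    return n
                var = min((abs(l) for l in clause), key=broken)
            assign[var] = not assign[var]
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(walksat([[1, 2], [-1, 3], [-2, -3]], 3))
    ```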

  15. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
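
    The abstract does not spell out the test statistic. A construction commonly used for this kind of compatibility test (stated here as an assumption, not quoted from the paper) compares the global minimum of the combined fit with the sum of the minima obtained from each data set r separately,

        \bar{\chi}^2 = \chi^2_{\mathrm{min,\,global}} - \sum_{r} \chi^2_{\mathrm{min},\,r},

    and evaluates it against a χ2 distribution whose number of degrees of freedom is set by how many fitted parameters the individual data sets actually share, so that data points insensitive to the crucial parameters no longer dilute a disagreement.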

  16. On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui

    We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^{O(k)}, 2^{O(k log k)}} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.

  17. An Autograding (Student) Problem Management System for the Compeuwtir Ilittur8

    NASA Technical Reports Server (NTRS)

    Kohne, Glenn S.

    1996-01-01

    In order to develop analysis skills necessary in engineering disciplines, students need practice solving problems using specified analytical techniques. Unless homework is collected and graded, students tend not to spend much time or effort in performing it. Teachers do not, realistically, have the time to grade large numbers of homework problems on a regular basis. This paper presents and makes available a miracle cure. The Autograding Problem Management System (APMS) provides a discipline-independent mechanism for teachers to create (quickly and easily) sets of homework problems. The APMS system provides CRT and/or printed summaries of the graded student responses. This presentation will demonstrate both the speed and the drag-and-drop simplicity of using the APMS to create self-grading homework problem sets comprised of traditional types of problems and of problems which would not be possible without the use of computers.

  18. Using Empirical Data to Set Cutoff Scores.

    ERIC Educational Resources Information Center

    Hills, John R.

    Six experimental approaches to the problems of setting cutoff scores and choosing proper test length are briefly mentioned. Most of these methods share the premise that a test is a random sample of items, from a domain associated with a carefully specified objective. Each item is independent and is scored zero or one, with no provision for…

  19. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work, the basis functions are set to be Gaussian, for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs), depending on which phase-space variables are approximated by Gaussian functions. Three sample problems are studied: a univariate double-well potential, a bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD predicts well the stationary PDF, for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both of these problems the least-squares approximation is made on all phase-space variables, resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable, leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, very good performance of the LSQKD is observed for a wide range of diffusivities.

  20. Generalizations of the subject-independent feature set for music-induced emotion recognition.

    PubMed

    Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping

    2011-01-01

    Electroencephalogram (EEG)-based emotion recognition has been a rapidly growing field. Yet, how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on their applicability under subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts beyond the music database that was used to derive the emotion-specific features.

  1. Teaching children with autism to explain how: A case for problem solving?

    PubMed

    Frampton, Sarah E; Alice Shillingsburg, M

    2018-04-01

    Few studies have applied Skinner's (1953) conceptualization of problem solving to teach socially significant behaviors to individuals with developmental disabilities. The current study used a multiple probe design across behavior (sets) to evaluate the effects of problem-solving strategy training (PSST) on the target behavior of explaining how to complete familiar activities. During baseline, none of the three participants with autism spectrum disorder (ASD) could respond to the problems presented to them (i.e., explain how to do the activities). Tact training of the actions in each activity alone was ineffective; however, all participants demonstrated independent explaining-how following PSST. Further, following PSST with Set 1, tact training alone was sufficient for at least one scenario in sets 2 and 3 for all 3 participants. Results have implications for generative responding for individuals with ASD and further the discussion regarding the role of problem solving in complex verbal behavior. © 2018 Society for the Experimental Analysis of Behavior.

  2. Variational Bayesian Learning for Wavelet Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuroscientific goal of extracting relevant "maps" from the data. This can be stated as a 'blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  3. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.

  4. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2) is discussed; it consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving in parallel each of the sub-problems in the set and computing the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.

  5. The Usher lifestyle survey: maintaining independence: a multi-centre study.

    PubMed

    Damen, Godelieve W J A; Krabbe, Paul F M; Kilsby, M; Mylanus, Emmanuel A M

    2005-12-01

    Patients with Usher syndrome face a special set of challenges in order to maintain their independence when their sight and hearing worsen. Three different types of Usher (I, II and III) are distinguished by differences in onset, progression and severity of hearing loss, and by the presence or absence of balance problems. In this study 93 Usher patients from seven European countries filled out a questionnaire on maintaining independence (60 patients type I, 25 patients type II, four patients type III and four patients type unknown). Results of Usher type I and II patients are presented. Following the Nordic definition of maintaining independence in deaf-blindness, three domains are investigated: access to information, communication and mobility. The research variables in this study are age and type of Usher, considered hearing loss, and the number of retinitis pigmentosa-related sight problems. Usher type I patients tend to need more help than Usher type II patients, and the amount of help that they need grows when patients get older or when considered hearing loss worsens. No patterns in results were seen for the number of retinitis pigmentosa-related sight problems.

  6. An efficient quantum scheme for Private Set Intersection

    NASA Astrophysics Data System (ADS)

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2016-01-01

    Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server; it is one of the most fundamental problems in privacy-preserving multiparty collaborative computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or in large-scale client-server networks.

  7. Eigenfunctions and Eigenvalues for a Scalar Riemann-Hilbert Problem Associated to Inverse Scattering

    NASA Astrophysics Data System (ADS)

    Pelinovsky, Dmitry E.; Sulem, Catherine

    A complete set of eigenfunctions is introduced within the Riemann-Hilbert formalism for spectral problems associated to some solvable nonlinear evolution equations. In particular, we consider the time-independent and time-dependent Schrödinger problems which are related to the KdV and KPI equations possessing solitons and lumps, respectively. Non-standard scalar products, orthogonality and completeness relations are derived for these problems. The complete set of eigenfunctions is used for perturbation theory and bifurcation analysis of eigenvalues supported by the potentials under perturbations. We classify two different types of bifurcations of new eigenvalues and analyze their characteristic features. One type corresponds to thresholdless generation of solitons in the KdV equation, while the other predicts a threshold for generation of lumps in the KPI equation.

  8. A Message Passing Approach to Side Chain Positioning with Applications in Protein Docking Refinement *

    PubMed Central

    Moghadasi, Mohammad; Kozakov, Dima; Mamonov, Artem B.; Vakili, Pirooz; Vajda, Sandor; Paschalidis, Ioannis Ch.

    2013-01-01

    We introduce a message-passing algorithm to solve the Side Chain Positioning (SCP) problem. SCP is a crucial component of protein docking refinement, which is a key step of an important class of problems in computational structural biology called protein docking. We model SCP as a combinatorial optimization problem and formulate it as a Maximum Weighted Independent Set (MWIS) problem. We then employ a modified and convergent belief-propagation algorithm to solve a relaxation of MWIS and develop randomized estimation heuristics that use the relaxed solution to obtain an effective MWIS feasible solution. Using a benchmark set of protein complexes we demonstrate that our approach leads to more accurate docking predictions compared to a baseline algorithm that does not solve the SCP. PMID:23515575

  9. Parallel group independent component analysis for massive fMRI data sets.

    PubMed

    Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S

    2017-01-01

    Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.

  10. Dynamical basis sets for algebraic variational calculations in quantum-mechanical scattering theory

    NASA Technical Reports Server (NTRS)

    Sun, Yan; Kouri, Donald J.; Truhlar, Donald G.; Schwenke, David W.

    1990-01-01

    New basis sets are proposed for linear algebraic variational calculations of transition amplitudes in quantum-mechanical scattering problems. These basis sets are hybrids of those that yield the Kohn variational principle (KVP) and those that yield the generalized Newton variational principle (GNVP) when substituted in Schlessinger's stationary expression for the T operator. Trial calculations show that efficiencies almost as great as that of the GNVP and much greater than the KVP can be obtained, even for basis sets with the majority of the members independent of energy.

  11. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which make it possible to implement an exact penalty method. A new method is exploited to address convergence of trajectories, which is based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence it converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.

  12. Device-independent tests of quantum channels

    NASA Astrophysics Data System (ADS)

    Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco

    2017-03-01

    We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).

  13. Device-independent tests of quantum channels.

    PubMed

    Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco

    2017-03-01

    We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).

  14. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    PubMed

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstrating both their classification power and their reproducibility across independent datasets is an essential requirement for assessing their potential clinical relevance. Small datasets and the multiplicity of putative biomarker sets may explain the lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological-process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed a lack of classification reproducibility using independent datasets. Partial overlaps exist between our putative sets of biomarkers and those of the primary studies. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Source-independent full waveform inversion of seismic data

    DOEpatents

    Lee, Ki Ha

    2006-02-14

    A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source-independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between the data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
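
    A minimal numpy sketch of the normalization idea described above (an assumed illustration, not the patented algorithm): every trace is Fourier transformed and divided, frequency by frequency, by the spectrum of a reference trace, so that a common source signature approximately cancels.

    ```python
    import numpy as np

    def normalized_wavefield(traces, ref_index=0, eps=1e-12):
        """Divide each trace's spectrum by the spectrum of a reference trace;
        the result is dimensionless and largely independent of a shared source
        signature (eps guards against division by zero)."""
        spectra = np.fft.rfft(np.asarray(traces), axis=-1)   # (n_traces, n_freq)
        ref = spectra[ref_index]
        return spectra / (ref + eps)

    # Toy example: two traces sharing the same (unknown) source wavelet.
    rng = np.random.default_rng(0)
    wavelet = rng.standard_normal(256)
    trace_a = np.convolve(wavelet, [1.0, 0.0, 0.5], mode="same")   # earth response A
    trace_b = np.convolve(wavelet, [1.0, 0.3, 0.0], mode="same")   # earth response B
    W = normalized_wavefield(np.vstack([trace_a, trace_b]))
    print(W.shape)   # (2, 129)
    ```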

  16. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record.

    PubMed

    Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.

  17. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  18. Improving Critical Skills Using Wikis and CGPS in a Physics Classroom

    NASA Astrophysics Data System (ADS)

    Mohottala, H. E.

    2016-10-01

    We report the combined use of Wikispaces (wikis) and collaborative group problem solving (CGPS) sessions conducted in introductory-level calculus-based physics classes. As a part of this new teaching tool, some essay-type problems were posted on the wiki page on a weekly basis and students were encouraged to participate in problem solving without providing numerical final answers but only the steps. Each week students were further evaluated on problem solving skills, opening up more opportunity for peer interaction through CGPS. Students developed a set of skills in decision making, problem solving, communication, negotiation, critical and independent thinking, and teamwork through the combination of wikis and CGPS.

  19. No Child Left Unchallenged

    ERIC Educational Resources Information Center

    Beigie, Darin

    2011-01-01

    Providing student choice and opportunities for independent study are recognized as viable differentiation techniques. Daily homework sets that contain more demanding problems even though not required allow the teacher to provide challenge without incurring undue stress. The modest incentive of some homework bonus points is enough to whet the…

  20. Stability analysis of multiple-robot control systems

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz, Kenneth

    1989-01-01

    In a space telerobotic service scenario, cooperative motion and force control of multiple robot arms are of fundamental importance. Three paradigms to study this problem are proposed, distinguished by the set of variables used for control design: joint torques, arm tip force vectors, and an accelerated generalized coordinate set. Control issues related to each case are discussed. The latter two choices require complete model information, which presents practical modeling, computational, and robustness problems. Therefore, the focus is on the joint torque control case to develop relatively model-independent motion and internal force control laws. The rigid body assumption allows the motion and force control problems to be independently addressed. By using an energy-motivated Lyapunov function, a simple proportional-derivative plus gravity compensation type of motion control law is shown to always be stabilizing. The asymptotic convergence of the tracking error to zero requires the use of a generalized coordinate with the contact constraints taken into account. If a non-generalized coordinate is used, only convergence to a steady-state manifold can be concluded. For force control, both feedforward and feedback schemes are analyzed. The feedback control, if proper care is taken, exhibits better robustness and transient performance.

  1. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.
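
    For small graphs the independence polynomial can be computed by brute force, which makes the quantity concrete; the sketch below (illustrative only, exponential time) counts independent sets of each size. The paper's point is precisely that for the Koch networks one needs recurrences instead of enumeration.

    ```python
    from itertools import combinations
    import networkx as nx

    def independence_polynomial(G):
        """Brute-force independence polynomial: the k-th coefficient counts the
        independent sets of size k (feasible only for small graphs)."""
        coeffs = [1]                               # the empty set
        for k in range(1, G.number_of_nodes() + 1):
            count = sum(1 for S in combinations(G.nodes, k)
                        if not any(G.has_edge(u, v) for u, v in combinations(S, 2)))
            coeffs.append(count)
        return coeffs

    # Example: the 4-cycle has independence polynomial 1 + 4x + 2x^2.
    print(independence_polynomial(nx.cycle_graph(4)))   # [1, 4, 2, 0, 0]
    ```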

  2. Node-Based Learning of Multiple Gaussian Graphical Models

    PubMed Central

    Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In

    2014-01-01

    We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137

  3. Machine learning-based coreference resolution of concepts in clinical documents

    PubMed Central

    Ware, Henry; Mullett, Charles J; El-Rawas, Oussama

    2012-01-01

    Objective Coreference resolution of concepts, although a very active area in the natural language processing community, has not yet been widely applied to clinical documents. Accordingly, the 2011 i2b2 competition focusing on this area is a timely and useful challenge. The objective of this research was to collate coreferent chains of concepts from a corpus of clinical documents. These concepts are in the categories of person, problems, treatments, and tests. Design A machine learning approach based on graphical models was employed to cluster coreferent concepts. Features selected were divided into domain independent and domain specific sets. Training was done with the i2b2 provided training set of 489 documents with 6949 chains. Testing was done on 322 documents. Results The learning engine, using the un-weighted average of three different measurement schemes, resulted in an F measure of 0.8423 where no domain specific features were included and 0.8483 where the feature set included both domain independent and domain specific features. Conclusion Our machine learning approach is a promising solution for recognizing coreferent concepts, which in turn is useful for practical applications such as the assembly of problem and medication lists from clinical documents. PMID:22582205

  4. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
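
    A conceptual numpy sketch of the preprocessing idea (an assumed illustration, not the authors' exact procedure): a singular value decomposition of the kernel matrix exposes the effective number of independent constraints, and projecting both kernels and data onto the leading left singular vectors yields a much smaller problem for the subsequent OLA inversion.

    ```python
    import numpy as np

    def svd_reduce(K, d, tol=1e-8):
        """Reduce an inversion problem K m ~ d by truncated SVD of the kernel
        matrix K (one row per mode): keep only the directions carrying
        effectively independent information, then project kernels and data."""
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))            # effective rank
        return U[:, :r].T @ K, U[:, :r].T @ d      # reduced kernels and data

    # Toy example: 500 "modes" whose kernels span only a 5-dimensional space.
    rng = np.random.default_rng(0)
    K = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))
    d = rng.standard_normal(500)
    K_red, d_red = svd_reduce(K, d)
    print(K_red.shape, d_red.shape)                # (5, 200) (5,)
    ```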

  5. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  6. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide a key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
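
    The underlying fact that any maximal independent set is also a dominating set (every vertex outside the set has a neighbour inside it, otherwise it could be added) can be checked directly with networkx on a synthetic scale-free graph; the snippet below is an assumed illustration, not the authors' code.

    ```python
    import networkx as nx

    # Build a Barabasi-Albert scale-free graph, take a randomized maximal
    # independent set, and verify that it dominates the whole graph.
    G = nx.barabasi_albert_graph(1000, 3, seed=42)
    mis = set(nx.maximal_independent_set(G, seed=42))
    print(len(mis), nx.is_dominating_set(G, mis))   # set size, True
    ```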

  7. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.

  8. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record

    PubMed Central

    Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Background Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Objective To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. Study design and methods We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100 000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100 000 records to assess its accuracy. Results Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100 000 randomly selected patients showed high sensitivity (range: 62.8–100.0%) and positive predictive value (range: 79.8–99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. Conclusion We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts. PMID:21613643

  9. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
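
    A short Python sketch of the permutation flow-shop objective and a WSPT-style ordering rule. The processing times, weights, and the simple "total processing time over weight" priority are illustrative assumptions; the paper's WSPT-CC rule additionally imposes a consistency condition that this sketch omits.

    ```python
    import numpy as np

    def flow_shop_objective(order, p, w):
        """Total weighted quadratic completion time of a permutation schedule.

        p[j, i] is the processing time of job j on machine i; all jobs visit
        the machines in the same order (permutation flow shop)."""
        n, m = p.shape
        C = np.zeros((n, m))
        for pos, j in enumerate(order):
            for i in range(m):
                prev_job = C[order[pos - 1], i] if pos > 0 else 0.0
                prev_machine = C[j, i - 1] if i > 0 else 0.0
                C[j, i] = max(prev_job, prev_machine) + p[j, i]
        return float(np.sum(w * C[:, m - 1] ** 2))

    rng = np.random.default_rng(1)
    p = rng.integers(1, 10, size=(8, 3)).astype(float)   # 8 jobs, 3 machines
    w = rng.integers(1, 5, size=8).astype(float)

    # WSPT-style rule: sort jobs by total processing time divided by weight.
    wspt_order = np.argsort(p.sum(axis=1) / w)
    print(flow_shop_objective(wspt_order, p, w))
    ```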

  10. Ultrahigh-Dimensional Multiclass Linear Discriminant Analysis by Pairwise Sure Independence Screening

    PubMed Central

    Pan, Rui; Wang, Hansheng; Li, Runze

    2016-01-01

    This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
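
    A minimal sketch of the pairwise screening idea under simplifying assumptions (random data and a standardized mean-difference statistic per class pair): features are ranked by their best score over all class pairs and only the top-ranked ones are retained. The statistic here is an illustration, not the paper's exact screening utility.

    ```python
    import numpy as np
    from itertools import combinations

    def pairwise_screen(X, y, keep=50):
        """Keep features whose standardized mean difference is large for at
        least one pair of classes (illustrative screening statistic)."""
        classes = np.unique(y)
        score = np.zeros(X.shape[1])
        for a, b in combinations(classes, 2):
            Xa, Xb = X[y == a], X[y == b]
            pooled_sd = np.sqrt(0.5 * (Xa.var(axis=0) + Xb.var(axis=0))) + 1e-12
            diff = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / pooled_sd
            score = np.maximum(score, diff)   # best score over all class pairs
        return np.argsort(score)[::-1][:keep]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5000))      # ultrahigh-dimensional predictors
    y = rng.integers(0, 10, size=300)     # 10 classes
    X[y == 3, :5] += 2.0                  # a few truly relevant features
    print(pairwise_screen(X, y, keep=10))
    ```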

  11. Plastics in Perspective.

    ERIC Educational Resources Information Center

    Bergandine, David R.; Holm, D. Andrew

    The materials in this curriculum supplement, developed for middle school or high school science classes, present solid waste problems related to plastics. The set of curriculum materials is divided into two units to be used together or independently. Unit I begins by comparing patterns in solid waste from 1960 to 1990 and introducing methods for…

  12. The Decline and Fall of the Laws of Learning

    ERIC Educational Resources Information Center

    McKeachie, W. J.

    1974-01-01

    Problems in trying to apply the laws of learning to educational situations derive both from the failure to take account of differences between humans and other animals, and from the failure to take into account important variables interacting with independent variables in natural educational settings. (Author/JM)

  13. The Cooperative Form, the Value and the Allocation of Joint Costs and Benefits

    DTIC Science & Technology

    1984-05-15

    all play a role yet they are extremely difficult to formalize. The existence of a vast body of law complete with intricate documents such as the...externality problems involve a finite set of profit centers whose activities influence each other. When interests are either independent (no...else to whom they submit their reports. In general the cooperative form is not adequate to study problems of auditing, enforcement and agency

  14. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  15. Using Laboratory Homework to Facilitate Skill Integration and Assess Understanding in Intermediate Physics Courses

    NASA Astrophysics Data System (ADS)

    Johnston, Marty; Jalkio, Jeffrey

    2013-04-01

    By the time students have reached the intermediate-level physics courses they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing the traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.

  16. Intelligent Text Retrieval and Knowledge Acquisition from Texts for NASA Applications: Preprocessing Issues

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A system that retrieves problem reports from a NASA database is described. The database is queried with natural language questions. Part-of-speech tags are first assigned to each word in the question using a rule-based tagger. A partial parse of the question is then produced with independent sets of deterministic finite state automata. Using partial parse information, a look-up strategy searches the database for problem reports relevant to the question. A bigram stemmer and irregular verb conjugates have been incorporated into the system to improve accuracy. The system is evaluated by a set of fifty-five questions posed by NASA engineers. A discussion of future research is also presented.

  17. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  18. Drugs and the single woman: pharmacy, fashion, desire, and destitution in India.

    PubMed

    Pinto, Sarah

    2014-06-01

    A cultural imaginary identified as "fashion" links single women with problems of desire in contemporary India, setting the stakes not only for independent living, but also for the ways distresses may be read and treated. From celebrity cases to films to clinical practices oriented around pharmaceuticals, the mechanisms of this imaginary locate female personhood at a series of critical junctures or "hinges," from pharmaceuticals to drugs of vice, from desire to expressions of disorder, and from singularity or independence to destitution. In each of these turns, as psychiatrists read female bodies for signs of affliction and media portray counter trajectories of aspiration and downfall, certain realities are shielded from consideration, including sexual violence in intimate settings.

  19. Working towards a scalable model of problem-based learning instruction in undergraduate engineering education

    NASA Astrophysics Data System (ADS)

    Mantri, Archana

    2014-05-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instruction for three courses in Electronics and Communication Engineering, namely Analog Electronics, Digital Electronics and Pulse, Digital & Switching Circuits is presented here. It measures the effects of pedagogy, gender and cognitive styles on the knowledge, skill and attitude of the students. The study was conducted twice with content designed around the same set of OEPs but with two different trained facilitators for all three courses. The repeatability of results for effects of the independent parameters on dependent parameters is studied and inferences are drawn.

  20. User's manual for two dimensional FDTD version TEA and TMA codes for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional numerical electromagnetic scattering codes based upon the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied codes are two versions of our current two dimensional FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR, TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem set section, a new problem checklist, references and figure titles.

  1. Coulomb matrix elements in multi-orbital Hubbard models.

    PubMed

    Bünemann, Jörg; Gebhard, Florian

    2017-04-26

    Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (O_h, O, T_d, T_h, D_6h, and D_4h). Furthermore, we express all other matrix elements as a function of the independent Coulomb parameters. Apart from the solution of the general point-group problem we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.

  2. Independent Component Analysis-motivated Approach to Classificatory Decomposition of Cortical Evoked Potentials

    PubMed Central

    Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A

    2006-01-01

    Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of an underlying classification problem itself. PMID:17118151

  3. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and test set because of differences in light, shade, race and so on. In order to solve this problem and improve the performance of expression recognition in the actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First of all, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expression and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
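
    A hedged scikit-learn sketch of the pipeline described above: learn a dictionary on source-domain data, then sparse-code target-domain (expression) features with that same dictionary. The data here are random placeholders and the hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Source-domain feature vectors (e.g. generic image patches); random stand-ins here.
    rng = np.random.default_rng(0)
    X_source = rng.normal(size=(500, 64))

    # Learn a common "primitive" dictionary on the source domain.
    dico = DictionaryLearning(n_components=32, alpha=1.0,
                              transform_algorithm="omp",
                              transform_n_nonzero_coefs=5, max_iter=20,
                              random_state=0)
    dico.fit(X_source)

    # Transfer: sparse-code target-domain (facial expression) features with the
    # same dictionary, then feed the codes to any classifier.
    X_target = rng.normal(size=(100, 64))
    codes = dico.transform(X_target)
    print(codes.shape)   # (100, 32) sparse representations
    ```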

  4. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
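
    A small networkx illustration of the unifying mapping step: a maximum independent set instance becomes a maximum clique instance on the complement graph. The exact enumeration below is only viable for tiny graphs; the paper's IEA-PTS heuristic is what handles realistic instances.

    ```python
    import networkx as nx

    # Maximum independent set maps to maximum clique on the complement graph:
    # a clique in complement(G) is an independent set in G.
    G = nx.erdos_renyi_graph(30, 0.3, seed=1)
    H = nx.complement(G)

    # Exact search by enumerating maximal cliques (small graphs only).
    best_clique = max(nx.find_cliques(H), key=len)
    assert all(not G.has_edge(u, v)
               for u in best_clique for v in best_clique if u != v)
    print(len(best_clique), sorted(best_clique))
    ```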

  5. Child Sexual Abuse in Early-Childhood Care and Education Settings

    ERIC Educational Resources Information Center

    Briggs, Freda

    2014-01-01

    When the author was adviser to the Australian Minister for Education for writing the national Safe Schools Framework (2003), meetings were held with early-childhood care and education administrators from all state, Catholic and independent sectors. Their unexpected message was that educators were facing new problems, those of child sexual abuse in…

  6. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir Hossein; Goldbert, Alan; Bagasol, Leonard Neil; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.

  7. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
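
    A toy networkx sketch of the reachability idea behind the arrival-scheduling formulation, with made-up aircraft names and precedence edges (none of this is from the paper): reachability in the precedence digraph answers which aircraft are forced to land after a given one, and acyclicity certifies that a feasible sequence exists.

    ```python
    import networkx as nx

    # Illustrative precedence constraints among arrivals (edge u -> v means
    # aircraft u must land before aircraft v); names are made up.
    D = nx.DiGraph([("AC1", "AC3"), ("AC2", "AC3"), ("AC3", "AC5"), ("AC4", "AC5")])

    # Reachability answers ordering questions directly: everything reachable
    # from AC1 is forced to land after AC1.
    print(nx.descendants(D, "AC1"))          # {'AC3', 'AC5'}

    # Feasibility check: a cyclic precedence graph admits no valid sequence.
    print(nx.is_directed_acyclic_graph(D))   # True
    ```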

  8. A case study on modeling and independent practice cycles in teaching beginning science inquiry

    NASA Astrophysics Data System (ADS)

    Sadeghpour-Kramer, Margaret Ann Plattenberger

    With increasing pressure to produce high standardized test scores, school systems will be looking for the surest ways to increase scores. Decision makers uninformed about the value of inquiry science may recommend more direct teaching methods and curricula in the hope that students will more quickly accumulate factual information for high test scores. This researcher and other proponents of inquiry science suggest that the best preparation for any test is the ability to use all available information and problem solving skills to think through to a solution. This study proposes to test the theory that inquiry problem solving skills need to be modeled and practiced in increasingly independent situations to be learned. Students tend to copy what they have been led to believe is correct, and to avoid continued copying, their skills must be applied in new situations requiring independent practice and improvement. This study follows ten sixth grade students, selected for maximum variation, as they participate in a series of five cycles of modeling and practicing inquiry science investigations as part of an ongoing unit on water quality. The cycles were designed to make the students increasingly independent in their use of inquiry. The results showed that all ten students made significant progress from copying teacher modeling in investigation #1 towards independent inquiry, with nine of the ten achieving acceptable to good beginning independent inquiry in investigation #5. Each case was analyzed independently using such case study methodology as pattern matching, case study protocols, and theoretical propositions. Constant comparison and other case study methods were used in a cross-case analysis. Eight cases confirmed a matching set of propositions and the hypothesis, in literal replication, and the other two cases confirmed a set of propositions and the hypothesis through theoretical replication. The study suggests to educators that repeated cycles of modeling and increasingly independent practice serve three purposes; first to develop independent inquiry skills by providing multiple opportunities with intermittent modeling, second to repeat the modeling initially in very similar situations and then encourage transfer to new situations, and third to provide repeated modeling for those students who do not grasp the concepts as quickly as do their classmates.

  9. On a production system using default reasoning for pattern classification

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Lowe, Carlyle M.

    1990-01-01

    This paper addresses an unconventional application of a production system to a problem involving belief specialization. The production system reduces a large quantity of low-level descriptions into just a few higher-level descriptions that encompass the problem space in a more tractable fashion. This classification process utilizes a set of descriptions generated by combining the component hierarchy of a physical system with the semantics of the terminology employed in its operation. The paper describes an application of this process in a program, constructed in C and CLIPS, that classifies signatures of electromechanical system configurations. The program compares two independent classifications, describing the actual and expected system configurations, in order to generate a set of contradictions between the two.

  10. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. c2001 The Willi Hennig Society.

  12. A filtering approach to edge preserving MAP estimation of images.

    PubMed

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining the segmentation and refining it are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and Inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
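
    A compact 1-D numpy sketch of the Wiener-filter building block the method relies on, assuming a single WSS Gaussian prior with a flat power spectrum and white noise (the paper applies such priors region by region to 2-D images).

    ```python
    import numpy as np

    def wiener_deconvolve(y, h, noise_var, signal_var):
        """Frequency-domain Wiener restoration of a blurred, noisy 1-D signal.

        Assumes a WSS Gaussian prior with flat power spectrum (signal_var)
        and white noise with variance noise_var."""
        H = np.fft.fft(h, n=y.size)
        Y = np.fft.fft(y)
        W = np.conj(H) * signal_var / (np.abs(H) ** 2 * signal_var + noise_var)
        return np.real(np.fft.ifft(W * Y))

    rng = np.random.default_rng(0)
    x = np.repeat(rng.normal(size=16), 16)        # piecewise-constant "scan line"
    h = np.ones(5) / 5.0                           # blur kernel
    blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=x.size)))
    y = blurred + 0.05 * rng.normal(size=x.size)   # circular blur + white noise
    x_hat = wiener_deconvolve(y, h, noise_var=0.05 ** 2, signal_var=x.var())
    print(float(np.mean((x_hat - x) ** 2)), float(np.mean((y - x) ** 2)))
    ```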

  13. On the existence, uniqueness, and asymptotic normality of a consistent solution of the likelihood equations for nonidentically distributed observations: Applications to missing data problems

    NASA Technical Reports Server (NTRS)

    Peters, C. (Principal Investigator)

    1980-01-01

    A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.

  14. The Ablowitz–Ladik system on a finite set of integers

    NASA Astrophysics Data System (ADS)

    Xia, Baoqiang

    2018-07-01

    We show how to solve initial-boundary value problems for integrable nonlinear differential–difference equations on a finite set of integers. The method we employ is the discrete analogue of the unified transform (Fokas method). Implementing this method for the Ablowitz–Ladik system yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem, which has a jump matrix with explicit dependence involving certain functions referred to as spectral functions. Some of these functions are defined in terms of the initial value, while the remaining spectral functions are defined in terms of two sets of boundary values. These spectral functions are not independent but satisfy an algebraic relation called the global relation. We analyze the global relation to characterize the unknown boundary values in terms of the given initial and boundary values. We also discuss the linearizable boundary conditions.

  15. Combining multiple positive training sets to generate confidence scores for protein-protein interactions.

    PubMed

    Yu, Jingkai; Finley, Russell L

    2009-01-01

    High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
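
    A minimal scikit-learn sketch of the scoring idea, with synthetic features and randomly drawn positive/negative sets standing in for real interaction data: one model is trained per positive training set and the per-interaction scores are averaged, so no single (possibly biased) training set dominates.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_all = rng.normal(size=(5000, 6))                 # features for candidate interactions
    negatives = rng.choice(5000, 500, replace=False)   # illustrative negative examples

    # Three independent sets of likely-true-positive interactions (illustrative).
    positive_sets = [rng.choice(5000, 200, replace=False) for _ in range(3)]

    scores = np.zeros((len(positive_sets), X_all.shape[0]))
    for k, pos in enumerate(positive_sets):
        idx = np.concatenate([pos, negatives])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(negatives))])
        model = LogisticRegression(max_iter=1000).fit(X_all[idx], y)
        scores[k] = model.predict_proba(X_all)[:, 1]

    # Final confidence: average over models trained on different positive sets,
    # which dampens the bias of any single training set.
    confidence = scores.mean(axis=0)
    print(confidence[:5])
    ```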

  16. The chaotic set and the cross section for chaotic scattering in three degrees of freedom

    NASA Astrophysics Data System (ADS)

    Jung, C.; Merlo, O.; Seligman, T. H.; Zapfe, W. P. K.

    2010-10-01

    This article treats chaotic scattering with three degrees of freedom, where one of them is open and the other two are closed, as a first step towards a more general understanding of chaotic scattering in higher dimensions. Despite the strong restrictions, it breaks the essential simplicity implicit in any two-dimensional time-independent scattering problem. Introducing the third degree of freedom by breaking a continuous symmetry, we first explore the topological structure of the homoclinic/heteroclinic tangle and the structures in the scattering functions. Then we work out the implications of these structures for the doubly differential cross section. The most prominent structures in the cross section are rainbow singularities. They form a fractal pattern that reflects the fractal structure of the chaotic invariant set. This allows us to determine structures in the cross section from the invariant set and, conversely, to obtain information about the topology of the invariant set from the cross section. The latter is a contribution to the inverse scattering problem for chaotic systems.

  17. Computations of Aerodynamic Performance Databases Using Output-Based Refinement

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases of parametric and optimization studies, which require (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision. The aim is to obtain "expert meshes" independent of user skill and to run every case adaptively in production settings.

  18. The renormalization scale-setting problem in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin

    2013-09-01

    A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC provides the principle underlying the BLM method, since it gives the general rule for extending BLM up to any perturbative order; in fact, they are equivalent to each other through the PMC–BLM correspondence principle. Thus, all the features previously observed in the BLM literature are also adaptable to the PMC. The PMC scales and the resulting finite-order PMC predictions are to high accuracy independent of the choice of the initial renormalization scale, and thus consistent with RG invariance. The PMC is also consistent with the renormalization scale-setting procedure for QED in the zero-color limit. The use of the PMC thus eliminates a serious systematic scale error in perturbative QCD predictions, greatly improving the precision of empirical tests of the Standard Model and their sensitivity to new physics.

  19. Polarity related influence maximization in signed social networks.

    PubMed

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
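
    A short Python sketch of the greedy seed selection that the 1-1/e guarantee refers to, using the standard unsigned independent cascade model with a uniform propagation probability; the paper's IC-P model additionally tracks edge polarity, which this sketch omits, and the graph and parameters below are illustrative.

    ```python
    import random
    import networkx as nx

    def ic_spread(G, seeds, p=0.1, runs=50, rng=random.Random(0)):
        """Monte Carlo estimate of expected spread under the independent cascade model."""
        total = 0
        for _ in range(runs):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                new = [v for u in frontier for v in G.successors(u)
                       if v not in active and rng.random() < p]
                active.update(new)
                frontier = new
            total += len(active)
        return total / runs

    def greedy_seeds(G, k, **kw):
        """Greedy selection; the (1 - 1/e) guarantee holds because the
        expected spread is monotone and submodular."""
        seeds = []
        for _ in range(k):
            best = max((v for v in G if v not in seeds),
                       key=lambda v: ic_spread(G, seeds + [v], **kw))
            seeds.append(best)
        return seeds

    G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
    print(greedy_seeds(G, k=3))
    ```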

  20. Polarity Related Influence Maximization in Signed Social Networks

    PubMed Central

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986

  1. Decentralized learning in Markov games.

    PubMed

    Vrancx, Peter; Verbeeck, Katja; Nowé, Ann

    2008-08-01

    Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.

  2. Incredible Years parenting interventions: current effectiveness research and future directions.

    PubMed

    Gardner, Frances; Leijten, Patty

    2017-06-01

    The Incredible Years parenting intervention is a social learning theory-based programme for reducing children's conduct problems. Dozens of randomized trials, many by independent investigators, find consistent effects of Incredible Years on children's conduct problems across multiple countries and settings. However, in common with other interventions, these average effects hide much variability in the responses of individual children and families. Innovative moderator research is needed to enhance scientific understanding of why individual children and parents respond differently to intervention. Additionally, research is needed to test whether there are ways to make Incredible Years more effective and accessible for families and service providers, especially in low resource settings, by developing innovative delivery systems using new media, and by systematically testing for essential components of parenting interventions. Copyright © 2017. Published by Elsevier Ltd.

  3. Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bieniawski, Stefan

    2005-01-01

    In extensive form noncooperative game theory, at each instant t, each agent i sets its state x_i independently of the other agents, by sampling an associated distribution, q_i(x_i). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q_i of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x_1, x_2, ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G), and the convergence of the q_i to the most likely set of distributions consistent with that value. After this the target value for E_q(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x_i all have a finite number of possible values and G does not extend to multiple instants in time. That work also is based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions. This extends the applicability of that work to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.

  4. Remote sensing and urban public health

    NASA Technical Reports Server (NTRS)

    Rush, M.; Vernon, S.

    1975-01-01

    The applicability of remote sensing in the form of aerial photography to urban public health problems is examined. Environmental characteristics are analyzed to determine if health differences among areas could be predicted from the visual expression of remote sensing data. The analysis is carried out on a socioeconomic cross-sectional sample of census block groups. Six morbidity and mortality rates are the dependent variables, while environmental measures from aerial photographs and from the census constitute the two independent variable sets. It is found that environmental data collected by remote sensing are as good as census data in evaluating rates of health outcomes.

  5. The Relationship between Functional Status and Judgment/Problem Solving Among Individuals with Dementia

    PubMed Central

    Mayo, Ann M.; Wallhagen, Margaret; Cooper, Bruce A.; Mehta, Kala; Ross, Leslie; Miller, Bruce

    2012-01-01

    Objective To determine the relationship between functional status (independent activities of daily living) and judgment/problem solving and the extent to which select demographic characteristics such as dementia subtype and cognitive measures may moderate that relationship in older adult individuals with dementia. Methods The National Alzheimer’s Coordinating Center Universal Data Set was accessed for a study sample of 3,855 individuals diagnosed with dementia. Primary variables included functional status, judgment/problem solving, and cognition. Results Functional status was related to judgment/problem solving (r= 0.66; p< .0005). Functional status and cognition jointly predicted 56% of the variance in judgment/problem solving (R-squared = .56, p <.0005). As cognition decreases, the prediction of poorer judgment/problem solving by functional status became stronger. Conclusions Among individuals with a diagnosis of dementia, declining functional status as well as declining cognition should raise concerns about judgment/problem solving. PMID:22786576

  6. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems as occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with large number of variables.

  7. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
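
    A minimal Python sketch of the dynamic-programming idea in its simplest setting: maximum weighted independent set on a tree, i.e. the width-1 special case of the tree-decomposition DP that INDDGO implements. The tree and weights below are illustrative, not from the paper.

    ```python
    import networkx as nx

    def max_weight_independent_set_tree(T, weight, root=0):
        """DP for maximum weighted independent set on a tree: the width-1
        special case of the tree-decomposition dynamic program."""
        best = {}  # node -> (best value with node excluded, best value with node included)
        for v in nx.dfs_postorder_nodes(T, root):
            children = [c for c in T.neighbors(v) if c in best]
            exclude = sum(max(best[c]) for c in children)
            include = weight[v] + sum(best[c][0] for c in children)
            best[v] = (exclude, include)
        return max(best[root])

    T = nx.balanced_tree(2, 5)               # complete binary tree, 63 nodes, root 0
    w = {v: (v % 7) + 1 for v in T}
    print(max_weight_independent_set_tree(T, w, root=0))
    ```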

  8. Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration

    NASA Astrophysics Data System (ADS)

    Kovtunenko, V. A.

    2006-10-01

    Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, which results in a constrained minimization problem describing equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing curvilinear (nonplanar) cracks to be considered. Utilizing primal-dual methods of shape sensitivity analysis we obtain the general formula for a shape derivative of the potential energy, which describes an energy-release rate for the curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.

  9. Optimal control problems with mixed control-phase variable equality and inequality constraints

    NASA Technical Reports Server (NTRS)

    Makowski, K.; Neustad, L. W.

    1974-01-01

    In this paper, necessary conditions are obtained for optimal control problems containing equality constraints defined in terms of functions of the control and phase variables. The control system is assumed to be characterized by an ordinary differential equation, and more conventional constraints, including phase inequality constraints, are also assumed to be present. Because the first-mentioned equality constraint must be satisfied for all t (the independent variable of the differential equation) belonging to an arbitrary (prescribed) measurable set, this problem gives rise to infinite-dimensional equality constraints. To obtain the necessary conditions, which are in the form of a maximum principle, an implicit-function-type theorem in Banach spaces is derived.

  10. Issues of organizational cybernetics and viability beyond Beer's viable systems model

    NASA Astrophysics Data System (ADS)

    Nechansky, Helmut

    2013-11-01

    The paper starts by summarizing the claims of Beer's viable systems model, which identifies five issues any viable organization has to deal with in an unequivocal hierarchical structure of five interrelated systems. Then the evidence is introduced for additional issues and related viable structures of organizations, which deviate from Beer's model. These issues are: (1) the establishment and (2) evolution of an organization; (3) systems for independent top-down control (like "Six Sigma"); (4) systems for independent bottom-up correction of performance problems (like "Kaizen"), both working outside a hierarchical structure; (5) pull production systems ("Just in Time") and (6) systems for checks and balances of top-level power (like boards and shareholder meetings). Based on that, an evolutionary approach to organizational cybernetics is outlined, addressing the establishment of organizations and possible courses of development, including recent developments in quality and production engineering, as well as problems of setting and changing goal values determining organizational policies.

  11. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
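
    A tiny numpy simulation of the attenuation effect discussed above, with made-up numbers: when the precipitation input carries independent measurement error, the least-squares slope of runoff on the measured precipitation is biased toward zero relative to the slope on the true input.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    true_rain = rng.gamma(shape=2.0, scale=10.0, size=n)      # "true" precipitation input
    runoff = 0.6 * true_rain + rng.normal(0.0, 2.0, size=n)   # true regression slope = 0.6

    # The gauge measurement carries independent error.
    measured_rain = true_rain + rng.normal(0.0, 8.0, size=n)

    slope_true = np.polyfit(true_rain, runoff, 1)[0]
    slope_noisy = np.polyfit(measured_rain, runoff, 1)[0]
    print(slope_true, slope_noisy)   # the slope fitted to the noisy input is attenuated toward zero
    ```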

  12. An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification

    PubMed Central

    Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos

    2015-01-01

    This paper studies the multi-label classification problem, in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo-likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
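
    A minimal sketch of the pseudo-likelihood objective for a pairwise model over binary labels, assuming an Ising-style parameterization; the variable names are hypothetical and the structure learning (proximal gradient) and parameter learning (L-BFGS) steps of the paper are not reproduced here.

    ```python
    import numpy as np

    def neg_pseudo_log_likelihood(W, P, X, Y):
        """Negative pseudo-log-likelihood of a pairwise (Ising-style) CRF.

        W : (L, d) unary weights, one row per label
        P : (L, L) symmetric pairwise weights with a zero diagonal
        X : (n, d) feature matrix;  Y : (n, L) binary label matrix
        Each label is modelled conditionally on the remaining labels:
            P(y_j = 1 | y_{-j}, x) = sigmoid(w_j . x + sum_{k != j} P[j, k] * y_k)
        """
        logits = X @ W.T + Y @ P.T                     # (n, L) conditional logits
        # log P(y_j | y_{-j}, x) = y_j * logit_j - log(1 + exp(logit_j))
        return -np.sum(Y * logits - np.logaddexp(0.0, logits))

    # Toy usage with random parameters and data, just to show the shapes involved.
    rng = np.random.default_rng(0)
    n, d, L = 50, 10, 4
    X = rng.normal(size=(n, d))
    Y = rng.integers(0, 2, size=(n, L)).astype(float)
    W = rng.normal(size=(L, d))
    P = rng.normal(size=(L, L))
    P = (P + P.T) / 2.0
    np.fill_diagonal(P, 0.0)
    print(neg_pseudo_log_likelihood(W, P, X, Y))
    ```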

  13. Principal component analysis-based unsupervised feature extraction applied to in silico drug discovery for posttraumatic stress disorder-mediated heart disease.

    PubMed

    Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki

    2015-04-30

    Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as such sample-classification-independent unsupervised FE methods: variational Bayes PCA (VBPCA), extended here to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that show aberrant expression between treatment and control samples, and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE was also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data and outperformed two conventional supervised FE methods on a real data set. Thus, the two methods appear equivalent for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
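
    A minimal sketch of the CPCA-style unsupervised FE idea: apply PCA to the feature-by-sample matrix without using class labels and rank features by the magnitude of their loadings on the leading components. The selection rule, component count, and names are assumptions for illustration, not the authors' exact procedure.

    ```python
    import numpy as np

    def pca_unsupervised_feature_ranking(X, n_components=2):
        """Rank features by their loadings on the leading principal components.

        X : (n_features, n_samples) expression-like matrix; no class labels used.
        Returns feature indices sorted by decreasing loading magnitude.
        """
        Xc = X - X.mean(axis=1, keepdims=True)               # centre each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        loadings = U[:, :n_components] * S[:n_components]    # per-feature loadings
        score = np.sqrt((loadings ** 2).sum(axis=1))         # overall magnitude
        return np.argsort(score)[::-1]

    # Example: 1000 features measured on 30 samples of synthetic data
    X = np.random.default_rng(1).normal(size=(1000, 30))
    top_features = pca_unsupervised_feature_ranking(X)[:20]
    print(top_features)
    ```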

  14. On the estimation of the domain of attraction for discrete-time switched and hybrid nonlinear systems

    NASA Astrophysics Data System (ADS)

    Kit Luk, Chuen; Chesi, Graziano

    2015-11-01

    This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.

  15. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
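
    A schematic illustration of the Tikhonov branch of such a method: discretize the Abel operator on offset grids (to avoid the integrable singularity) and solve the regularized normal equations with a second-difference smoothing operator. The grids, smoothing operator, and regularization parameter are assumptions; the compact-set constraints described in the abstract are not implemented here.

    ```python
    import numpy as np

    def abel_forward_matrix(r, y):
        """Discrete Abel operator for g(y) = 2 * int_y^R f(r) r / sqrt(r^2 - y^2) dr.
        f is sampled at radii r (cell midpoints); g is sampled at impact parameters y."""
        dr = r[1] - r[0]
        A = np.zeros((y.size, r.size))
        for i, yi in enumerate(y):
            mask = r > yi
            A[i, mask] = 2.0 * r[mask] * dr / np.sqrt(r[mask] ** 2 - yi ** 2)
        return A

    n = 80
    r = (np.arange(n) + 0.5) / n          # radial grid at cell midpoints in (0, 1)
    y = np.arange(n) / n                  # impact parameters, offset from r
    A = abel_forward_matrix(r, y)

    f_true = np.exp(-((r - 0.4) / 0.15) ** 2)                        # unknown radial profile
    g = A @ f_true + np.random.default_rng(2).normal(0, 0.01, n)     # noisy projected data

    # Tikhonov regularization with a second-difference smoothing operator L.
    L = np.diff(np.eye(n), 2, axis=0)
    lam = 1e-3                                                        # illustrative choice
    f_est = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ g)
    ```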

  16. A Solution to the Small Enrollment Problem in Aerospace Engineering--Self-Paced Materials Used in an Independent Studies Mode.

    ERIC Educational Resources Information Center

    Fowler, Wallace T.; Watkins, R. D.

    With the decline in enrollment in the early 1970's, many aerospace engineering departments had too few students to offer some required courses. At the University of Texas at Austin, a set of personalized system of instruction (PSI) materials for the aircraft performance, stability, and control course was developed. The paper includes a description…

  17. Binge drinking and sleep problems among young adults.

    PubMed

    Popovici, Ioana; French, Michael T

    2013-09-01

    As most of the literature exploring the relationships between alcohol use and sleep problems is descriptive and based on small samples, the present study seeks to provide new information on the topic by employing a large, nationally representative dataset with several waves of data and a broad set of measures for binge drinking and sleep problems. We use data from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative survey of adolescents and young adults. The analysis sample consists of all Wave 4 observations without missing values for the sleep problems variables (N=14,089, 53% females). We estimate gender-specific multivariate probit models with a rich set of socioeconomic, demographic, physical, and mental health variables to control for confounding factors. Our results confirm that alcohol use, and specifically binge drinking, is positively and significantly associated with various types of sleep problems. The detrimental effects on sleep increase in magnitude with frequency of binge drinking, suggesting a dose-response relationship. Moreover, binge drinking is associated with sleep problems independent of psychiatric conditions. The statistically strong association between sleep problems and binge drinking found in this study is a first step in understanding these relationships. Future research is needed to determine the causal links between alcohol misuse and sleep problems to inform appropriate clinical and policy responses. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. A decentralized square root information filter/smoother

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. The developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The employed SRIF orthogonal transformation methodology has been described by Bierman (1977).
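
    A minimal sketch of the core combining step, assuming a static linear estimation problem: each data set is reduced to a square-root information pair (R, z), and two such pairs are merged with a single orthogonal transformation (a QR factorization of the stacked pairs). Variable names and the toy data are illustrative, not the cited formulation.

    ```python
    import numpy as np

    def srif_from_data(H, y):
        """Square-root information pair (R, z) for y = H x + noise,
        obtained by QR-factorizing the augmented array [H | y]."""
        T = np.linalg.qr(np.hstack([H, y[:, None]]))[1]
        n = H.shape[1]
        return T[:n, :n], T[:n, n]

    def combine_srif(R1, z1, R2, z2):
        """Merge two independent SRIF estimates with one orthogonal transformation:
        stack the pairs and re-triangularize, so R^T R accumulates the information."""
        stacked = np.vstack([np.hstack([R1, z1[:, None]]),
                             np.hstack([R2, z2[:, None]])])
        T = np.linalg.qr(stacked)[1]
        n = R1.shape[1]
        return T[:n, :n], T[:n, n]

    rng = np.random.default_rng(3)
    x_true = np.array([1.0, -2.0])
    H1, H2 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))   # two independent data sets
    y1 = H1 @ x_true + 0.01 * rng.normal(size=4)
    y2 = H2 @ x_true + 0.01 * rng.normal(size=4)

    Rc, zc = combine_srif(*srif_from_data(H1, y1), *srif_from_data(H2, y2))
    print(np.linalg.solve(Rc, zc))     # matches the joint least-squares estimate
    ```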

  19. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
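
    A minimal comparison of least squares and ridge estimates on deliberately collinear regressors, illustrating why the biased estimator stabilizes the fit; the synthetic model, regressors, and ridge parameter are assumptions, not the Voyager pointing model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    az = rng.uniform(0, 2 * np.pi, n)
    # Two nearly collinear regressors, as happens with pointing-model terms
    x1 = np.sin(az)
    x2 = np.sin(az) + 1e-3 * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ np.array([0.5, 1.0, 1.0]) + 0.01 * rng.normal(size=n)

    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)            # unstable under multicollinearity
    k = 0.1                                                  # ridge (biasing) parameter
    beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
    print("OLS:  ", beta_ols)
    print("ridge:", beta_ridge)
    ```

    In practice the regressors would be centred and scaled before ridging and k would be chosen from a ridge trace; both steps are omitted here for brevity.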

  20. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  1. Virtual and concrete manipulatives: a comparison of approaches for solving mathematics problems for students with autism spectrum disorder.

    PubMed

    Bouck, Emily C; Satsangi, Rajiv; Doughty, Teresa Taber; Courtney, William T

    2014-01-01

    Students with autism spectrum disorder (ASD) are included in general education classes and expected to participate in general education content, such as mathematics. Yet, little research explores academically-based mathematics instruction for this population. This single subject alternating treatment design study explored the effectiveness of concrete (physical objects that can be manipulated) and virtual (3-D objects from the Internet that can be manipulated) manipulatives to teach single- and double-digit subtraction skills. Participants in this study included three elementary-aged students (ages ranging from 6 to 10) diagnosed with ASD. Students were selected from a clinic-based setting, where all participants received medically necessary intensive services provided one-to-one by trained therapists. Both forms of manipulatives successfully assisted students in accurately and independently solving subtraction problems. However, all three students demonstrated greater accuracy and faster independence with the virtual manipulatives as compared to the concrete manipulatives. Beyond correctly solving the subtraction problems, students were also able to generalize their learning of subtraction through concrete and virtual manipulatives to more real-world applications.

  2. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

    In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variable is selected from both the original data and perturbed data. There are few studies of effective methods for this kind of selection. The problem of selecting exposure variables is almost the same as the problem of extracting correlation rules without robustness. In [Brin 97] it is suggested that correlation rules can be extracted efficiently on binary data using the chi-squared statistic of a contingency table, provided it has a monotone property. However, the chi-squared value does not have the monotone property, so a variable set is easily judged to be non-independent as the dimension increases even when it is completely independent, and the method is therefore not usable for selecting robust exposure variables. We assume an anti-monotone property for independent variables in order to select robust independent variables, and we use the apriori algorithm for this purpose. The apriori algorithm is one of the algorithms that find association rules from market-basket data; it exploits the anti-monotone property of the support defined for association rules. Independence does not completely have an anti-monotone property on the AIC of the independent probability model, but the tendency toward anti-monotonicity is strong. Therefore, variables selected under the anti-monotone assumption on the AIC are robust. Our method judges whether a certain variable is an exposure variable for the independent variables by comparing AIC values. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.

  3. Time-dependent seismic tomography

    USGS Publications Warehouse

    Julian, B.R.; Foulger, G.R.

    2010-01-01

    Of methods for measuring temporal changes in seismic-wave speeds in the Earth, seismic tomography is among those that offer the highest spatial resolution. 3-D tomographic methods are commonly applied in this context by inverting seismic wave arrival time data sets from different epochs independently and assuming that differences in the derived structures represent real temporal variations. This assumption is dangerous because the results of independent inversions would differ even if the structure in the Earth did not change, due to observational errors and differences in the seismic ray distributions. The latter effect may be especially severe when data sets include earthquake swarms or aftershock sequences, and may produce the appearance of correlation between structural changes and seismicity when the wave speeds are actually temporally invariant. A better approach, which makes it possible to assess what changes are truly required by the data, is to invert multiple data sets simultaneously, minimizing the difference between models for different epochs as well as the rms arrival-time residuals. This problem leads, in the case of two epochs, to a system of normal equations whose order is twice as great as for a single epoch. The direct solution of this system would require twice as much memory and four times as much computational effort as would independent inversions. We present an algorithm, tomo4d, that takes advantage of the structure and sparseness of the system to obtain the solution with essentially no more effort than independent inversions require. No claim to original US government works. Journal compilation © 2010 RAS.
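
    A compact way to write the simultaneous two-epoch inversion described above; the quadratic damping term coupling the epoch models and the symbols are illustrative assumptions, not the tomo4d notation.

    ```latex
    \min_{m_1,\, m_2}\;
      \|A_1 m_1 - d_1\|^2 + \|A_2 m_2 - d_2\|^2 + \mu^2 \|m_1 - m_2\|^2 ,
    \qquad
    \begin{pmatrix}
      A_1^{\mathsf T} A_1 + \mu^2 I & -\mu^2 I \\
      -\mu^2 I & A_2^{\mathsf T} A_2 + \mu^2 I
    \end{pmatrix}
    \begin{pmatrix} m_1 \\ m_2 \end{pmatrix}
    =
    \begin{pmatrix} A_1^{\mathsf T} d_1 \\ A_2^{\mathsf T} d_2 \end{pmatrix}
    ```

    The normal equations on the right have twice the order of a single-epoch system, which is the structure and sparseness the algorithm exploits.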

  4. Experimental Measurement-Device-Independent Entanglement Detection

    NASA Astrophysics Data System (ADS)

    Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed

    2015-02-01

    Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. Determining whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no requirement to assume perfect implementations or to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols.

  5. Experimental Measurement-Device-Independent Entanglement Detection

    PubMed Central

    Nawareg, Mohamed; Muhammad, Sadiq; Amselem, Elias; Bourennane, Mohamed

    2015-01-01

    Entanglement is one of the most puzzling features of quantum theory and of great importance for the new field of quantum information. Determining whether a given state is entangled or not is one of the most challenging open problems of the field. Here we report on the experimental demonstration of measurement-device-independent (MDI) entanglement detection using the witness method for general two-qubit photon polarization systems. In the MDI setting, there is no requirement to assume perfect implementations or to trust the measurement devices. This experimental demonstration can be generalized for the investigation of properties of quantum systems and for the realization of cryptography and communication protocols. PMID:25649664

  6. Measurement-device-independent quantum key distribution for Scarani-Acin-Ribordy-Gisin 04 protocol

    PubMed Central

    Mizutani, Akihiro; Tamaki, Kiyoshi; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki

    2014-01-01

    Measurement-device-independent quantum key distribution (MDI QKD) was proposed to make BB84 completely free from any side-channel in detectors. As in prepare & measure QKD, the use of other protocols in the MDI setting would be advantageous in some practical situations. In this paper, we consider the SARG04 protocol in the MDI setting. The prepare & measure SARG04 protocol is proven to be able to generate a key from up to two-photon emission events. In the MDI setting we show that key generation is possible from events with single- or two-photon emission by one party and single-photon emission by the other party, but the two-photon emission event by both parties cannot contribute to key generation. In contrast to the prepare & measure SARG04 protocol, where the experimental setup is exactly the same as for BB84, the measurement setup for SARG04 in the MDI setting cannot be the same as that for BB84, since the measurement setup for BB84 in the MDI setting induces too many bit errors. To overcome this problem, we propose two alternative experimental setups and simulate the resulting key rates. Our study highlights the requirements that MDI QKD poses on the implementation of a variety of QKD protocols. PMID:24913431

  7. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
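
    The randomized trace estimation mentioned above is of the Hutchinson type: the trace of an operator is estimated from matrix-vector products with random probe vectors, which is what makes it usable when the covariance is only available through PDE solves. The sketch below is generic; the function names, probe distribution, and sample count are illustrative assumptions.

    ```python
    import numpy as np

    def randomized_trace(apply_A, dim, n_samples=50, rng=None):
        """Hutchinson estimator: tr(A) ~ (1/k) * sum_i z_i^T A z_i with Rademacher z_i.
        Only products v -> A v are required, never the explicit matrix A."""
        rng = np.random.default_rng(rng)
        total = 0.0
        for _ in range(n_samples):
            z = rng.choice([-1.0, 1.0], size=dim)
            total += z @ apply_A(z)
        return total / n_samples

    # Toy check against an explicit symmetric matrix
    M = np.random.default_rng(5).normal(size=(300, 300))
    A = M @ M.T
    print(randomized_trace(lambda v: A @ v, 300), np.trace(A))
    ```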

  8. Learning the facts in medical school is not enough: which factors predict successful application of procedural knowledge in a laboratory setting?

    PubMed

    Schmidmaier, Ralf; Eiber, Stephan; Ebersbach, Rene; Schiller, Miriam; Hege, Inga; Holzer, Matthias; Fischer, Martin R

    2013-02-22

    Medical knowledge encompasses both conceptual (facts or "what" information) and procedural knowledge ("how" and "why" information). Conceptual knowledge is known to be an essential prerequisite for clinical problem solving. Primarily, medical students learn from textbooks and often struggle with the process of applying their conceptual knowledge to clinical problems. Recent studies address the question of how to foster the acquisition of procedural knowledge and its application in medical education. However, little is known about the factors which predict performance in procedural knowledge tasks. Which additional factors of the learner predict performance in procedural knowledge? Domain specific conceptual knowledge (facts) in clinical nephrology was provided to 80 medical students (3rd to 5th year) using electronic flashcards in a laboratory setting. Learner characteristics were obtained by questionnaires. Procedural knowledge in clinical nephrology was assessed by key feature problems (KFP) and problem solving tasks (PST) reflecting strategic and conditional knowledge, respectively. Results in procedural knowledge tests (KFP and PST) correlated significantly with each other. In univariate analysis, performance in procedural knowledge (sum of KFP+PST) was significantly correlated with the results in (1) the conceptual knowledge test (CKT), (2) the intended future career as hospital based doctor, (3) the duration of clinical clerkships, and (4) the results in the written German National Medical Examination Part I on preclinical subjects (NME-I). After multiple regression analysis only clinical clerkship experience and NME-I performance remained independent influencing factors. Performance in procedural knowledge tests seems independent from the degree of domain specific conceptual knowledge above a certain level. Procedural knowledge may be fostered by clinical experience. More attention should be paid to the interplay of individual clinical clerkship experiences and structured teaching of procedural knowledge and its assessment in medical education curricula.

  9. A Taylor weak-statement algorithm for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.

  10. Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.

  11. A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.

    2005-01-01

    We present a method to solve the impulsive minimum fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to allow the method to be applicable to multiple flight regimes including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations as well as large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.

  12. Efficient greedy algorithms for economic manpower shift planning

    NASA Astrophysics Data System (ADS)

    Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.

    2015-01-01

    Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.

  13. Efficient Trajectory Propagation for Orbit Determination Problems

    NASA Technical Reports Server (NTRS)

    Roa, Javier; Pelaez, Jesus

    2015-01-01

    Regularized formulations of orbital motion apply a series of techniques to improve the numerical integration of the orbit. Despite their advantages and potential applications little attention has been paid to the propagation of the partial derivatives of the corresponding set of elements or coordinates, required in many orbit-determination scenarios and optimization problems. This paper fills this gap by presenting the general procedure for integrating the state-transition matrix of the system together with the nominal trajectory using regularized formulations and different sets of elements. The main difficulty comes from introducing an independent variable different from time, because the solution needs to be synchronized. The correction of the time delay is treated from a generic perspective not focused on any particular formulation. The synchronization using time-elements is also discussed. Numerical examples include strongly-perturbed orbits in the Pluto system, motivated by the recent flyby of the New Horizons spacecraft, together with a geocentric flyby of the NEAR spacecraft.

  14. Introducing Teamwork Challenges in Simulation Using Game Cards.

    PubMed

    Chang, Todd P; Kwan, Karen Y; Liberman, Danica; Song, Eric; Dao, Eugene H; Chung, Dayun; Morton, Inge; Festekjian, Ara

    2015-08-01

    Poor teamwork and communication during resuscitations are linked to patient safety problems and poorer outcomes. We present a novel simulation-based educational intervention using game cards to introduce challenges in teamwork. This intervention uses sets of game cards that designate roles, limitations, or communication challenges designed to introduce common communication or teamwork problems. Game cards are designed to be applicable for any simulation-based scenario and are independent from patient physiology. In our example, participants were pediatric emergency medicine fellows undergoing simulation training for orientation. We describe the use of card sets in different scenarios with increasing teamwork challenge and difficulty. Both postscenario and summative debriefings were facilitated to allow participants to reflect on their performance and discover ways to apply their strategies to real resuscitations. In this article, we present our experience with the novel use of game cards to modify simulation scenarios to improve communication and teamwork skills.

  15. Completable scheduling: An integrated approach to planning and scheduling

    NASA Technical Reports Server (NTRS)

    Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

    The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach which integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by allowing runtime information to be taken into consideration when making planning and scheduling decisions. Furthermore, through the criteria of achievability placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.

  16. Automatic feature design for optical character recognition using an evolutionary search procedure.

    PubMed

    Stentiford, F W

    1985-03-01

    An automatic evolutionary search is applied to the problem of feature extraction in an OCR application. A performance measure based on feature independence is used to generate features which do not appear to suffer from peaking effects [17]. Features are extracted from a training set of 30 600 machine printed 34 class alphanumeric characters derived from British mail. Classification results on the training set and a test set of 10 200 characters are reported for an increasing number of features. A 1.01 percent forced decision error rate is obtained on the test data using 316 features. The hardware implementation should be cheap and fast to operate. The performance compares favorably with current low cost OCR page readers.

  17. Solution of the Generalized Noah's Ark Problem.

    PubMed

    Billionnet, Alain

    2013-01-01

    The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point in moving from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases, the average guarantee varies from 0% to 1.20%.
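
    The linearization step can be illustrated numerically: each tangent to the concave logarithm lies above the curve, so the pointwise minimum of a finite set of tangents over-approximates log(x), which is why the relaxed program yields an upper bound. The breakpoints and demo values below are illustrative; the full MILP of the paper is not reproduced.

    ```python
    import numpy as np

    def log_tangent_envelope(x, breakpoints):
        """Lower envelope (pointwise minimum) of tangents to log at the breakpoints.
        Each tangent  t_b(x) = log(b) + (x - b) / b  lies above the concave log,
        so the minimum over tangents over-approximates log(x); in a maximization
        the log terms become linear constraints and the optimum is an upper bound."""
        b = np.asarray(breakpoints, dtype=float)
        tangents = np.log(b) + (np.asarray(x)[..., None] - b) / b
        return tangents.min(axis=-1)

    x = np.linspace(0.05, 1.0, 6)
    approx = log_tangent_envelope(x, breakpoints=[0.05, 0.1, 0.25, 0.5, 1.0])
    print(np.max(approx - np.log(x)))   # small, non-negative approximation gap
    ```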

  18. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  19. Estimating the number of motor units using random sums with independently thinned terms.

    PubMed

    Müller, Samuel; Conforto, Adriana Bastos; Z'graggen, Werner J; Kaelin-Lang, Alain

    2006-07-01

    The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, which is derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown and its practical value is demonstrated with bootstrap and approximative confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented and Monte Carlo-based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.

  20. The Annual Report of the National Shipbuilding Research Program (The Naval Shipbuilding Research Program)

    DTIC Science & Technology

    1986-08-29

    Abrasives Work Planning for Shipyard SP&C Training Overcoating of Zinc Primers Citric Acid Cleaning - Phase II - Waterborne Coatings Economics of...members workout organizational problems with minimum government involvement a set of strong, committed, and sometimes fiercely independent panels and...01 Manual of Welding Planning and Design Guidelines - Phase III PANEL SP-3 Ship Design Considerations: Adaptation of Japanese Pre - 7 9 79 81 82 83 83

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.

  2. Parenting and Independent Problem-Solving in Preschool Children With Food Allergy

    PubMed Central

    Power, Thomas G.; Hahn, Amy L.; Hoehn, Jessica L.; Thompson, Caitlin C.; Herbert, Linda J.; Law, Emily F.; Bollinger, Mary Elizabeth

    2015-01-01

    Objective To examine autonomy-promoting parenting and independent problem-solving in children with food allergy. Methods 66 children with food allergy, aged 3–6 years, and 67 age-matched healthy peers and their mothers were videotaped while completing easy and difficult puzzles. Coders recorded time to puzzle completion, children’s direct and indirect requests for help, and maternal help-giving behaviors. Results Compared with healthy peers, younger (3- to 4-year-old) children with food allergy made more indirect requests for help during the easy puzzle, and their mothers were more likely to provide unnecessary help (i.e., explain where to place a puzzle piece). Differences were not found for older children. Conclusions The results suggest that highly involved parenting practices that are medically necessary to manage food allergy may spill over into settings where high levels of involvement are not needed, and that young children with food allergy may be at increased risk for difficulties in autonomy development. PMID:25326001

  3. MT+, integrating magnetotellurics to determine earth structure, physical state, and processes

    USGS Publications Warehouse

    Bedrosian, P.A.

    2007-01-01

    As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.

  4. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.

  5. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of FNNs is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independent of the training sets. Moreover, the learning of parameters in its output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. As with most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset, and the maximal space consumption is independent of the size of the training dataset. The experiments on regression tasks confirm the above conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging, and techniques whereby the eddy currents interact with the biological system, such as the alteration of neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what the advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM, and is therefore a useful tool for accurate and fast TMS simulations.

  7. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP) is considered, in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is never less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can also be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms and the results show the efficiency of the proposed algorithm in terms of the colour set size and the running time of the algorithm.

  8. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.

  9. Power and instrument strength requirements for Mendelian randomization studies using multiple genetic variants.

    PubMed

    Pierce, Brandon L; Ahsan, Habibul; Vanderweele, Tyler J

    2011-06-01

    Mendelian Randomization (MR) studies assess the causality of an exposure-disease association using genetic determinants [i.e. instrumental variables (IVs)] of the exposure. Power and IV strength requirements for MR studies using multiple genetic variants have not been explored. We simulated cohort data sets consisting of a normally distributed disease trait, a normally distributed exposure that affects this trait, and a biallelic genetic variant that affects the exposure. We estimated power to detect an effect of exposure on disease for varying allele frequencies, effect sizes, and sample sizes (using two-stage least squares regression on 10,000 data sets; Stage 1 is a regression of exposure on the variant, and Stage 2 is a regression of disease on the fitted exposure). Similar analyses were conducted using multiple genetic variants (5, 10, 20) as independent or combined IVs. We assessed IV strength using the first-stage F statistic. Simulations of realistic scenarios indicate that MR studies will require large (n > 1000), often very large (n > 10,000), sample sizes. In many cases, so-called 'weak IV' problems arise when using multiple variants as independent IVs (even with as few as five), resulting in biased effect estimates. Combining genetic factors into fewer IVs results in modest power decreases, but alleviates weak IV problems. Ideal methods for combining genetic factors depend upon knowledge of the genetic architecture underlying the exposure. The feasibility of well-powered, unbiased MR studies will depend upon the amount of variance in the exposure that can be explained by known genetic factors and the 'strength' of the IV set derived from these genetic factors.
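
    A minimal version of one replicate of the simulation design described above (one biallelic variant, normal exposure and trait, two-stage least squares, first-stage F statistic); the allele frequency, effect sizes, and sample counts are placeholders rather than the paper's settings.

    ```python
    import numpy as np

    def simulate_mr(n, maf=0.3, beta_gx=0.2, beta_xy=0.1, rng=None):
        """One replicate: genotype G -> exposure X -> trait Y; returns the
        two-stage least squares estimate of beta_xy and the first-stage F statistic."""
        rng = np.random.default_rng(rng)
        g = rng.binomial(2, maf, size=n)                       # biallelic variant
        x = beta_gx * g + rng.normal(size=n)                   # exposure
        y = beta_xy * x + rng.normal(size=n)                   # disease trait
        # Stage 1: regress exposure on the variant
        G = np.column_stack([np.ones(n), g])
        x_hat = G @ np.linalg.lstsq(G, x, rcond=None)[0]
        rss1 = np.sum((x - x_hat) ** 2)
        tss1 = np.sum((x - x.mean()) ** 2)
        f_stat = (tss1 - rss1) / (rss1 / (n - 2))              # first-stage F on (1, n-2) df
        # Stage 2: regress trait on the fitted exposure
        Xh = np.column_stack([np.ones(n), x_hat])
        beta_2sls = np.linalg.lstsq(Xh, y, rcond=None)[0][1]
        return beta_2sls, f_stat

    results = [simulate_mr(5000, rng=i) for i in range(200)]
    print(np.mean([b for b, _ in results]), np.mean([f for _, f in results]))
    ```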

  10. Correlations in star networks: from Bell inequalities to network inequalities

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Olivier Renou, Marc; Gisin, Nicolas; Brunner, Nicolas

    2017-07-01

    The problem of characterizing classical and quantum correlations in networks is considered. Contrary to the usual Bell scenario, where distant observers share a physical system emitted by one common source, a network features several independent sources, each distributing a physical system to a subset of observers. In the quantum setting, the observers can perform joint measurements on initially independent systems, which may lead to strong correlations across the whole network. In this work, we introduce a technique to systematically map a Bell inequality to a family of Bell-type inequalities bounding classical correlations on networks in a star-configuration. Also, we show that whenever a given Bell inequality can be violated by some entangled state ρ, then all the corresponding network inequalities can be violated by considering many copies of ρ distributed in the star network. The relevance of these ideas is illustrated by applying our method to a specific multi-setting Bell inequality. We derive the corresponding network inequalities, and study their quantum violations.

  11. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses, there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need to explicitly train multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
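
    A minimal sketch of the augmentation idea: synthesize a multi-DOF sample as a linear combination of single-DOF sEMG feature vectors and append it to the training set. Using the per-DOF mean feature vector, equal weights, and these function and variable names are simplifying assumptions for illustration, not the published LET implementation.

    ```python
    import numpy as np

    def linearly_enhanced_training_set(X_single, y_single, combos, weights=None):
        """Augment a single-DOF training set with modelled multi-DOF activations.

        X_single : (n, d) sEMG feature vectors, each recorded during a single-DOF task
        y_single : (n, k) activation labels with exactly one non-zero DOF per row
        combos   : list of DOF-index tuples to synthesize, e.g. [(0, 1), (1, 2)]
        The synthetic sample for a combination is a weighted sum of the mean
        single-DOF signal of each involved DOF; the labels are combined likewise.
        """
        X_aug, y_aug = [X_single], [y_single]
        for combo in combos:
            w = weights or {dof: 1.0 for dof in combo}
            x_new = sum(w[dof] * X_single[y_single[:, dof] > 0].mean(axis=0) for dof in combo)
            y_new = np.zeros(y_single.shape[1])
            for dof in combo:
                y_new[dof] = w[dof]
            X_aug.append(x_new[None, :])
            y_aug.append(y_new[None, :])
        return np.vstack(X_aug), np.vstack(y_aug)
    ```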

  12. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

    We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to the conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on the deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement for the cardinality of both sets; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
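
    A minimal sketch of the innovation-support step described above: project the measurement vector and sensing matrix onto the orthogonal complement of the estimated common-support columns, then run OMP on the projected quantities. The function name, stopping rule (a fixed number of atoms), and the exclusion of already-used atoms are assumptions, not the exact DCSMP specification.

    ```python
    import numpy as np

    def innovation_support_omp(y, Phi, common_support, k_innov):
        """Estimate the innovation support after removing the common component.

        y : measurement vector;  Phi : (m, n) sensing matrix
        common_support : indices of the estimated common support
        k_innov : number of innovation atoms to select
        """
        Pc = Phi[:, list(common_support)]
        # Projector onto the orthogonal complement of span(Pc)
        proj = np.eye(Phi.shape[0]) - Pc @ np.linalg.pinv(Pc)
        y_p, Phi_p = proj @ y, proj @ Phi
        support, residual = [], y_p.copy()
        for _ in range(k_innov):
            scores = np.abs(Phi_p.T @ residual)
            scores[list(common_support) + support] = -np.inf   # skip already-used atoms
            j = int(np.argmax(scores))
            support.append(j)
            A = Phi_p[:, support]
            coef, *_ = np.linalg.lstsq(A, y_p, rcond=None)
            residual = y_p - A @ coef
        return support
    ```

    Because the residual lives entirely in the complement of the common-support columns, errors confined to that subspace do not affect which innovation atoms are selected, which mirrors the decoupling argument in the abstract.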

  13. Expert and novice categorization of introductory physics problems

    NASA Astrophysics Data System (ADS)

    Wolf, Steven Frederick

    In the 30 years since it was first published, Chi et al.'s seminal paper on expert and novice categorization of introductory problems has led to a plethora of follow-up studies within and outside of the area of physics [Chi et al. Cognitive Science 5, 121 -- 152 (1981)]. These studies frequently encompass "card-sorting" exercises whereby the participants group problems. The study firmly established the paradigm that novices categorize physics problems by "surface features" (e.g. "incline," "pendulum," "projectile motion,"... ), while experts use "deep structure" (e.g. "energy conservation," "Newton 2,"... ). While this technique certainly allows insights into problem solving approaches, simple descriptive statistics more often than not fail to find significant differences between experts and novices. In most experiments, the clean-cut outcome of the original study cannot be reproduced. Given the widespread implications of the original study, the frequent failure to reproduce its findings warrants a closer look. We developed a less subjective statistical analysis method for the card-sorting outcome and studied how the "successful" outcome of the experiment depends on the choice of the original card set. Thus, in a first step, we move beyond descriptive statistics and develop a novel microscopic approach that takes into account the individual identity of the cards and uses graph theory and models to visualize, analyze, and interpret problem categorization experiments. These graphs are compared macroscopically, using standard graph theoretic statistics, and microscopically, using a distance metric that we have developed. This macroscopic sorting behavior is described using our Cognitive Categorization Model. The microscopic comparison allows us to visualize our sorters using Principal Components Analysis and compare the expert sorters to the novice sorters as a group. In the second step, we ask the question: Which properties of problems are most important in problem sets that discriminate experts from novices in a measurable way? We describe a method to characterize problems along several dimensions, and then study the effectiveness of differently composed problem sets in differentiating experts from novices, using our analysis method. Both components of our study are based on an extensive experiment using a large problem set, which known physics experts and novices categorized according to the original experimental protocol. Both the size of the card set and the size of the sorter pool were larger than in comparable experiments. Based on our analysis method, we find that most of the variation in sorting outcome is not due to the sorter being an expert versus a novice, but rather due to an independent characteristic that we named "stacker" versus "spreader." The fact that the expert-novice distinction only accounts for a smaller amount of the variation may partly explain the frequent null results when conducting these experiments. In order to study how the outcome depends on the original problem set, our problem set needed to be large so that we could determine how well experts and novices could be discriminated by considering both small subsets using a Monte Carlo approach and larger subsets using Simulated Annealing. This computationally intense study relied on our objective analysis method, as the large combinatorics did not allow for manual analysis of the outcomes from the subsets.
We found that the number of questions required to accurately classify experts and novices could be surprisingly small so long as the problem set was carefully crafted to be composed of problems with particular pedagogical and contextual features. In order to discriminate experts from novices in a categorization task, it is important that the problem sets carefully consider three problem properties: the chapters that the problems are in (the problems need to come from a wide spectrum of chapters to allow for the original "deep structure" categorization), the processes required to solve the problems (the problems must require different solving strategies), and the difficulty of the problems (the problems must be "easy"). In other words, for the experiment to be "successful," the card set needs to be carefully "rigged" across three property dimensions.

  14. Do executive deficits and delay aversion make independent contributions to preschool attention-deficit/hyperactivity disorder symptoms?

    PubMed

    Sonuga-Barke, Edmund J S; Dalen, Lindy; Remington, Bob

    2003-11-01

    To test whether deficits in executive function and delay aversion make independent contributions to levels of attention-deficit/hyperactivity disorder (ADHD) symptoms exhibited by preschool children. One hundred fifty-six children between 3 and 5.5 years old (78 girls and 78 boys) selected from the community completed an age-appropriate battery of tests measuring working memory, set shifting, planning, delay of gratification, and preference for delayed rewards. Parents completed a clinical interview about their children's ADHD symptoms. Analysis of test performance revealed two factors: executive dysfunction and delay aversion. Multivariate analysis demonstrated that when other factors (i.e., age, IQ, and conduct problems) were controlled, executive dysfunction and delay aversion each made significant independent contributions to predictions of ADHD symptoms. Preschool ADHD symptoms are psychologically heterogeneous. Executive dysfunction and delay aversion may represent two distinct and early appearing neurodevelopmental bases for ADHD symptoms.

  15. Weighted partial least squares based on the error and variance of the recovery rate in calibration set.

    PubMed

    Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing

    2017-08-05

    Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares can extract latent variables from both the independent variables and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. Firstly, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic substances with severely overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the number of iterations and the sample size of the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Using Independent Components Analysis to diminish the response of groundwater in borehole strainmeter

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Yen; Hu, Jyr-Ching

    2017-04-01

    By design, borehole strainmeters record not only minor signals of tectonic movement but also broad environmental signals such as barometric pressure, rainfall and groundwater. Among these external factors, groundwater influences the borehole strainmeter observations the most; in practice it produces a much larger response than the target tectonic strain changes. We use a co-sited piezometer to record the pore pressure of groundwater in the rock formation in order to obtain the relationship between strain change and pore pressure. However, some problems remain unsolved. First, due to instrument limitations, we could not place the pore-pressure transducer in the same aquifer as the strainmeter, so the estimated response to pore-pressure change may not be fully correct. Furthermore, although pore-pressure transducers are installed at most observatories, problems with power and connectivity cause gaps and losses in the records. It is therefore necessary to find a better and more stable method to diminish the groundwater response in strainmeter data. Strain transducers with different orientations observe the groundwater response at different scales. If we can extract the groundwater signal from each independent strain transducer and estimate its original source, we can significantly raise the signal strength and lower the noise level. This is a kind of blind source separation (BSS) problem: BSS extracts or rebuilds signals that cannot be observed directly from mixtures of many sources, and Independent Component Analysis (ICA) is one widely adopted method. ICA identifies components of a complex signal that are statistically independent and non-Gaussian. We use FastICA to identify the groundwater-induced strain in the original strain data and try to diminish it to raise the signal strength. After preprocessing the strain data, we use ICA to separate it into several independent components. Among them, we found one that is highly correlated with the groundwater record, not only in the long-term trend but also in short-term fluctuations. Removing it minimizes the groundwater response in borehole strainmeter data effectively.
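
    A minimal sketch of this workflow using scikit-learn's FastICA (the array names and the correlation-based selection of the groundwater component are our assumptions, not the exact processing chain used in the study):

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_groundwater_response(strain, groundwater, n_components=None, seed=0):
            """strain: (n_samples, n_channels) detrended multi-orientation strain series.
            groundwater: (n_samples,) co-sited pore-pressure or well-level record."""
            ica = FastICA(n_components=n_components, random_state=seed)
            sources = ica.fit_transform(strain)            # independent components, one per column
            corr = [abs(np.corrcoef(s, groundwater)[0, 1]) for s in sources.T]
            k = int(np.argmax(corr))                       # component most correlated with groundwater
            sources[:, k] = 0.0                            # suppress it
            cleaned = ica.inverse_transform(sources)       # back to the original strain channels
            return cleaned, k, corr[k]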

  17. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.

  18. Developmentally dynamic genome: Evidence of genetic influences on increases and decreases in conduct problems from early childhood to adolescence.

    PubMed

    Pingault, Jean-Baptiste; Rijsdijk, Frühling; Zheng, Yao; Plomin, Robert; Viding, Essi

    2015-05-06

    The development of conduct problems in childhood and adolescence is associated with adverse long-term outcomes, including psychiatric morbidity. Although genes constitute a proven factor of stability in conduct problems, less is known regarding their role in conduct problems' developmental course (i.e. systematic age changes, for instance linear increases or decreases). Mothers rated conduct problems from age 4 to 16 years in 10,038 twin pairs from the Twins Early Development Study. Individual differences in the baseline level (.78; 95% CI: .68-.88) and the developmental course of conduct problems (.73; 95% CI: .60-.86) were under high and largely independent additive genetic influences. Shared environment made a small contribution to the baseline level but not to the developmental course of conduct problems. These results show that genetic influences not only contribute to behavioural stability but also explain systematic change in conduct problems. Different sets of genes may be associated with the developmental course versus the baseline level of conduct problems. The structure of genetic and environmental influences on the development of conduct problems suggests that repeated preventive interventions at different developmental stages might be necessary to achieve a long-term impact.

  19. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
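
    For illustration, here is a sketch of an influence-curve variance estimate for the empirical AUC on a single validation fold (a simplified, non-cross-validated version of the idea; the estimator developed in the paper differs in its handling of the cross-validation folds). For an observation with label y and score s, the influence contribution is y/p1*(F_neg(s) - AUC) + (1-y)/p0*(S_pos(s) - AUC), where F_neg is the fraction of negatives scored below s and S_pos the fraction of positives scored above s:

        import numpy as np

        def auc_and_ic_variance(scores, labels):
            scores, labels = np.asarray(scores, float), np.asarray(labels, int)
            pos, neg = scores[labels == 1], scores[labels == 0]
            p1, p0 = labels.mean(), 1 - labels.mean()
            diff = pos[:, None] - neg[None, :]
            auc = np.mean((diff > 0) + 0.5 * (diff == 0))          # empirical AUC, ties count 1/2
            f_neg = np.array([np.mean(neg < s) + 0.5 * np.mean(neg == s) for s in scores])
            s_pos = np.array([np.mean(pos > s) + 0.5 * np.mean(pos == s) for s in scores])
            ic = labels / p1 * (f_neg - auc) + (1 - labels) / p0 * (s_pos - auc)
            var = np.mean(ic ** 2) / len(scores)                   # influence-curve variance estimate
            return auc, var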

  20. Theory of the Lattice Boltzmann Equation: Symmetry properties of Discrete Velocity Sets

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Luo, Li-Shi

    2007-01-01

    In the lattice Boltzmann equation, continuous particle velocity space is replaced by a finite dimensional discrete set. The number of linearly independent velocity moments in a lattice Boltzmann model cannot exceed the number of discrete velocities. Thus, finite dimensionality introduces linear dependencies among the moments that do not exist in the exact continuous theory. Given a discrete velocity set, it is important to know to exactly what order moments are free of these dependencies. Elementary group theory is applied to the solution of this problem. It is found that by decomposing the velocity set into subsets that transform among themselves under an appropriate symmetry group, it becomes relatively straightforward to assess the behavior of moments in the theory. The construction of some standard two- and three-dimensional models is reviewed from this viewpoint, and procedures for constructing some new higher dimensional models are suggested.
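
    As a concrete example (our own illustration, not drawn from the paper), the familiar two-dimensional D2Q9 velocity set decomposes into three subsets that each transform among themselves under the symmetry group of the square:

    \[
    \{(0,0)\}, \qquad \{(\pm 1,0),\,(0,\pm 1)\}, \qquad \{(\pm 1,\pm 1)\},
    \]

    with 1, 4 and 4 members respectively; the behavior of the moments can then be assessed orbit by orbit rather than velocity by velocity.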

  1. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  2. Promoting the Multidimensional Character of Scientific Reasoning.

    PubMed

    Bradshaw, William S; Nelson, Jennifer; Adams, Byron J; Bell, John D

    2017-04-01

    This study reports part of a long-term program to help students improve scientific reasoning using higher-order cognitive tasks set in the discipline of cell biology. This skill was assessed using problems requiring the construction of valid conclusions drawn from authentic research data. We report here efforts to confirm the hypothesis that data interpretation is a complex, multifaceted exercise. Confirmation was obtained using a statistical treatment showing that various such problems rank students differently-each contains a unique set of cognitive challenges. Additional analyses of performance results have allowed us to demonstrate that individuals differ in their capacity to navigate five independent generic elements that constitute successful data interpretation: biological context, connection to course concepts, experimental protocols, data inference, and integration of isolated experimental observations into a coherent model. We offer these aspects of scientific thinking as a "data analysis skills inventory," along with usable sample problems that illustrate each element. Additionally, we show that this kind of reasoning is rigorous in that it is difficult for most novice students, who are unable to intuitively implement strategies for improving these skills. Instructors armed with knowledge of the specific challenges presented by different types of problems can provide specific helpful feedback during formative practice. The use of this instructional model is most likely to require changes in traditional classroom instruction.

  3. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using the generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  4. The meaning of "independence" for older people in different residential settings.

    PubMed

    Hillcoat-Nallétamby, Sarah

    2014-05-01

    Drawing on older people's understandings of "independence" and Collopy's work on autonomy, the article elaborates an interpretive framework of the concept in relation to 3 residential settings-the private dwelling-home, the extra-care, and the residential-care settings. Data include 91 qualitative interviews with frail, older people living in each setting, collected as part of a larger Welsh study. Thematic analysis techniques were employed to identify patterns in meanings of independence across settings and then interpreted using Collopy's conceptualizations of autonomy, as well as notions of space and interdependencies. Independence has multiple meanings for older people, but certain meanings are common to all settings: Accepting help at hand; doing things alone; having family, friends, and money as resources; and preserving physical and mental capacities. Concepts of delegated, executional, authentic, decisional, and consumer autonomy, as well as social interdependencies and spatial and social independence, do provide appropriate higher order interpretive constructs of these meanings across settings. A broader interpretive framework of "independence" should encompass concepts of relative independence, autonomy(ies), as well as spatial and social independence, and can provide more nuanced interpretations of structured dependency and institutionalization theories when applied to different residential settings.

  5. User's manual for two dimensional FDTD version TEA and TMA codes for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional electromagnetic scattering codes based on the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied codes are two versions of our current FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR, TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.

  6. Solution for a bipartite Euclidean traveling-salesman problem in one dimension

    NASA Astrophysics Data System (ADS)

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  7. Solution for a bipartite Euclidean traveling-salesman problem in one dimension.

    PubMed

    Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M

    2018-05-01

    The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.

  8. User's manual for three dimensional FDTD version C code for scattering from frequency-independent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.

  9. User's manual for three dimensional FDTD version A code for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain (FDTD) Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain technique. The supplied version of the code is one version of our current three dimensional FDTD code set. The manual provides a description of the code and the corresponding results for the default scattering problem. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version A code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONA.FOR), a section briefly discussing radar cross section (RCS) computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.

  10. Developmentally dynamic genome: Evidence of genetic influences on increases and decreases in conduct problems from early childhood to adolescence

    PubMed Central

    Pingault, Jean-Baptiste; Rijsdijk, Frühling; Zheng, Yao; Plomin, Robert; Viding, Essi

    2015-01-01

    The development of conduct problems in childhood and adolescence is associated with adverse long-term outcomes, including psychiatric morbidity. Although genes constitute a proven factor of stability in conduct problems, less is known regarding their role in conduct problems’ developmental course (i.e. systematic age changes, for instance linear increases or decreases). Mothers rated conduct problems from age 4 to 16 years in 10,038 twin pairs from the Twins Early Development Study. Individual differences in the baseline level (.78; 95% CI: .68-.88) and the developmental course of conduct problems (.73; 95% CI: .60-.86) were under high and largely independent additive genetic influences. Shared environment made a small contribution to the baseline level but not to the developmental course of conduct problems. These results show that genetic influences not only contribute to behavioural stability but also explain systematic change in conduct problems. Different sets of genes may be associated with the developmental course versus the baseline level of conduct problems. The structure of genetic and environmental influences on the development of conduct problems suggests that repeated preventive interventions at different developmental stages might be necessary to achieve a long-term impact. PMID:25944445

  11. PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG.

    PubMed

    Ball, Kenneth; Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay

    2016-01-01

    Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals.

  12. PWC-ICA: A Method for Stationary Ordered Blind Source Separation with Application to EEG

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Robbins, Kay

    2016-01-01

    Independent component analysis (ICA) is a class of algorithms widely applied to separate sources in EEG data. Most ICA approaches use optimization criteria derived from temporal statistical independence and are invariant with respect to the actual ordering of individual observations. We propose a method of mapping real signals into a complex vector space that takes into account the temporal order of signals and enforces certain mixing stationarity constraints. The resulting procedure, which we call Pairwise Complex Independent Component Analysis (PWC-ICA), performs the ICA in a complex setting and then reinterprets the results in the original observation space. We examine the performance of our candidate approach relative to several existing ICA algorithms for the blind source separation (BSS) problem on both real and simulated EEG data. On simulated data, PWC-ICA is often capable of achieving a better solution to the BSS problem than AMICA, Extended Infomax, or FastICA. On real data, the dipole interpretations of the BSS solutions discovered by PWC-ICA are physically plausible, are competitive with existing ICA approaches, and may represent sources undiscovered by other ICA methods. In conjunction with this paper, the authors have released a MATLAB toolbox that performs PWC-ICA on real, vector-valued signals. PMID:27340397

  13. Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) User's Guide

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2010-01-01

    This report is a User's Guide for the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES). ProDiMES is a standard benchmarking problem and a set of evaluation metrics to enable the comparison of candidate aircraft engine gas path diagnostic methods. This Matlab (The Mathworks, Inc.) based software tool enables users to independently develop and evaluate diagnostic methods. Additionally, a set of blind test case data is also distributed as part of the software. This will enable the side-by-side comparison of diagnostic approaches developed by multiple users. The User's Guide describes the various components of ProDiMES, and provides instructions for the installation and operation of the tool.

  14. Methods and circuitry for reconfigurable SEU/SET tolerance

    NASA Technical Reports Server (NTRS)

    Shuler, Jr., Robert L. (Inventor)

    2010-01-01

    A device is disclosed in one embodiment that has multiple identical sets of programmable functional elements, programmable routing resources, and majority voters that correct errors. The voters accept a mode input for a redundancy mode and a split mode. In the redundancy mode, the programmable functional elements are identical and are programmed identically so the voters produce an output corresponding to the majority of inputs that agree. In a split mode, each voter selects a particular programmable functional element output as the output of the voter. Therefore, in the split mode, the programmable functional elements can perform different functions, operate independently, and/or be connected together to process different parts of the same problem.

  15. Encopresis, soiling and constipation in children and adults with developmental disability.

    PubMed

    Matson, Johnny L; LoVullo, Santino V

    2009-01-01

    Children and adults with developmental disabilities are more likely to evince encopresis, soiling and constipation than the general population. This set of related behaviors can produce a great deal of stress and can be a major restriction in independent living. This paper provides a review of the current state of knowledge on the prevalence, etiology, assessment and treatment of this co-occurring set of disorders. These problems are more common in persons with developmental disabilities than in the general population. Furthermore, classical and operant treatment methods appear to be the best supported interventions for most cases. Strengths and weaknesses of the current research base are discussed along with potential avenues for future research.

  16. Weighted Description Logics Preference Formulas for Multiattribute Negotiation

    NASA Astrophysics Data System (ADS)

    Ragone, Azzurra; di Noia, Tommaso; Donini, Francesco M.; di Sciascio, Eugenio; Wellman, Michael P.

    We propose a framework to compute the utility of an agreement w.r.t. a preference set in a negotiation process. In particular, we refer to preferences expressed as weighted formulas in a decidable fragment of First-order Logic and agreements expressed as a formula. We ground our framework in Description Logics (DL) endowed with disjunction, to be compliant with Semantic Web technologies. A logic-based approach to preference representation makes it possible, when a background knowledge base is exploited, to relax the often unrealistic assumption of additive independence among attributes. We provide suitable definitions of the problem and present algorithms to compute utility in our setting. We also validate our approach through an experimental evaluation.

  17. Polygenic scores predict alcohol problems in an independent sample and show moderation by the environment.

    PubMed

    Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M

    2014-04-10

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.

  18. ASPIC: a novel method to predict the exon-intron structure of a gene that is optimally compatible to a set of transcript sequences.

    PubMed

    Bonizzoni, Paola; Rizzi, Raffaella; Pesole, Graziano

    2005-10-05

    Currently available methods to predict splice sites are mainly based on the independent and progressive alignment of transcript data (mostly ESTs) to the genomic sequence. Apart from often being computationally expensive, this approach is vulnerable to several problems--hence the need to develop novel strategies. We propose a method, based on a novel multiple genome-EST alignment algorithm, for the detection of splice sites. To avoid the limitations of splice site prediction (mainly over-predictions) due to independent single EST alignments to the genomic sequence, our approach performs a multiple alignment of transcript data to the genomic sequence based on the combined analysis of all available data. We recast the problem of predicting constitutive and alternative splicing as an optimization problem, where the optimal multiple transcript alignment minimizes the number of exons and hence of splice site observations. We have implemented a splice site predictor based on this algorithm in the software tool ASPIC (Alternative Splicing PredICtion). It is distinguished from other methods based on BLAST-like tools by the incorporation of entirely new ad hoc procedures for accurate and computationally efficient transcript alignment and adopts dynamic programming for the refinement of intron boundaries. ASPIC also provides the minimal set of non-mergeable transcript isoforms compatible with the detected splicing events. The ASPIC web resource is dynamically interconnected with the Ensembl and Unigene databases and also implements an upload facility. Extensive benchmarking shows that ASPIC outperforms other existing methods in the detection of novel splicing isoforms and in the minimization of over-predictions. ASPIC also requires a lower computation time for processing a single gene and an EST cluster. The ASPIC web resource is available at http://aspic.algo.disco.unimib.it/aspic-devel/.

  19. A geometric viewpoint on generalized hydrodynamics

    NASA Astrophysics Data System (ADS)

    Doyon, Benjamin; Spohn, Herbert; Yoshimura, Takato

    2018-01-01

    Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective ("dressed") velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.
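
    For reference, the conservation-law form of GHD alluded to above is commonly written (in our notation, as a pointer rather than a quotation from the paper) as

    \[
    \partial_t \rho_p(x,\theta,t) + \partial_x\!\left[v^{\mathrm{eff}}(x,\theta,t)\,\rho_p(x,\theta,t)\right] = 0,
    \]

    where \(\rho_p\) is the quasi-particle density of type/velocity \(\theta\) and the effective velocity \(v^{\mathrm{eff}}\) depends on the local state; the geometric change of variables described in the paper trades this state dependence of the velocity for a family of state-dependent metrics.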

  20. Facilitation of Goal-Setting and Follow-Up in an Internet Intervention for Health and Wellness

    NASA Astrophysics Data System (ADS)

    Kaipainen, Kirsikka; Mattila, Elina; Kinnunen, Marja-Liisa; Korhonen, Ilkka

    Chronic work-related stress and insufficient recovery from workload can gradually lead to problems with mental and physical health. Resources in healthcare are limited especially for preventive treatment, but low-cost support can be provided by Internet-based behavior change interventions. This paper describes the design of an Internet intervention which supports working-age people in managing and preventing stress-related health and wellness problems. The intervention is designed for early prevention and aims to motivate individuals to take responsibility for their own well-being. It allows them to choose the approach to take to address personally significant issues, while guiding them through the process. The first iteration of the intervention was evaluated with three user groups and subsequently improved based on the user experiences to be more persuasive, motivating and better suited for independent use. Goal setting and follow-up were especially enhanced, tunneled structure improved, and the threshold of use lowered.

  1. Coordinating complex decision support activities across distributed applications

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1994-01-01

    Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (API's), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks! for integrating and coordinating heterogeneous distributed applications. he top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level API's to implement the desired interactions between distributed applications.

  2. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
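
    For orientation (a generic form, our own addition rather than the authors' specific functional), a level-set evolution of a surface embedded as the zero level of \(\phi\) under a normal speed \(F\) reads

    \[
    \frac{\partial \phi}{\partial \tau} + F\,\lvert \nabla \phi \rvert = 0,
    \]

    which is what makes the formulation topology independent: motion boundaries may merge or split without any re-parametrization of the surface.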

  3. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
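
    A rough sketch of the Stride Search idea (our own construction for illustration, not the released implementation): sweep the sphere with circular search regions of fixed physical radius, so the longitudinal stride widens toward the poles instead of being a fixed number of grid points.

        import numpy as np

        EARTH_RADIUS_KM = 6371.0

        def stride_search_centers(radius_km, lat_min=-90.0, lat_max=90.0):
            """Yield (lat, lon) centers of circular search regions covering the sphere."""
            dlat = np.degrees(radius_km / EARTH_RADIUS_KM)       # latitudinal stride in degrees
            centers = []
            for lat in np.arange(lat_min + dlat / 2, lat_max, dlat):
                # longitudinal stride grows as the circles of latitude shrink toward the poles
                circ = 2 * np.pi * EARTH_RADIUS_KM * np.cos(np.radians(lat))
                n_lon = max(int(np.ceil(circ / radius_km)), 1)
                for lon in np.linspace(0.0, 360.0, n_lon, endpoint=False):
                    centers.append((lat, lon))
            return centers

        # each center would then be checked against the storm identification criteria
        # (e.g. vorticity maximum, warm core) using the data points within radius_km of it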

  4. The Context Dependency of the Self-Report Version of the Strength and Difficulties Questionnaire (SDQ): A Cross-Sectional Study between Two Administration Settings

    PubMed Central

    Hoofs, H.; Jansen, N. W. H.; Mohren, D. C. L.; Jansen, M. W. J.; Kant, I. J.

    2015-01-01

    Background The Strength and Difficulties Questionnaire (SDQ) is a screening instrument for psychosocial problems in children and adolescents, which is applied in “individual” and “collective” settings. Assessment in the individual setting is confidential for clinical applications, such as preventive child healthcare, while assessment in the collective setting is anonymous and applied in (epidemiological) research. Due to administration differences between the settings it remains unclear whether results and conclusions actually can be used interchangeably. This study therefore aims to investigate whether the SDQ is invariant across settings. Methods Two independent samples were retrieved (mean age = 14.07 years), one from an individual setting (N = 6,594) and one from a collective setting (N = 4,613). The SDQ was administered in the second year of secondary school in both settings. Samples come from the same socio-geographic population in the Netherlands. Results Confirmatory factor analysis showed that the SDQ was measurement invariant/equivalent across settings and gender. On average, children in the individual setting scored lower on total difficulties (mean difference = 2.05) and the psychosocial problems subscales compared to those in the collective setting. This was also reflected in the cut-off points for caseness, defined by the 90th percentiles, which were lower in the individual setting. Using cut-off points from the collective in the individual setting therefore resulted in a small number of cases, 2 to 3%, while ∼10% is expected. Conclusion The SDQ has the same connotation across the individual and collective setting. The observed structural differences regarding the mean scores, however, undermine the validity of the cross-use of absolute SDQ-scores between these settings. Applying cut-off scores from the collective setting in the individual setting could, therefore, result in invalid conclusions and potential misuse of the instrument. To correctly apply cut-off scores these should be retrieved from the applied setting. PMID:25886464

  5. Flight program language requirements. Volume 3: Appendices

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Government-sponsored study and development efforts were directed toward design and implementation of high level programming languages suitable for future aerospace applications. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of language requirements was synthesized from these activities.

  6. Proportional plus integral MIMO controller for regulation and tracking with anti-wind-up features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puleston, P.F.; Mantz, R.J.

    1993-11-01

    A proportional plus integral matrix control structure for MIMO systems is proposed. Based on a standard optimal control structure with integral action, it permits a greater degree of independence of the design and tuning of the regulating and tracking features, without considerably increasing the controller complexity. Fast recovery from load disturbances is achieved, while large overshoots associated with set-point changes and reset wind-up problems can be reduced. A simple effective procedure for practical tuning is introduced.
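
    A minimal single-loop sketch of the reset wind-up issue and a clamping remedy (illustrative only; the paper proposes a proportional plus integral matrix structure for MIMO systems, and all gains below are placeholders):

        def pi_step(error, integral, kp=1.0, ki=0.5, dt=0.01, u_min=-1.0, u_max=1.0):
            """One controller update; returns (saturated output, updated integral state)."""
            integral_new = integral + error * dt
            u = kp * error + ki * integral_new
            if u > u_max or u < u_min:
                # anti-wind-up: freeze the integrator while the actuator is saturated
                integral_new = integral
                u = max(min(kp * error + ki * integral_new, u_max), u_min)
            return u, integral_new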

  7. Probabilistic Analysis of Combinatorial Optimization Problems on Hypergraph Matchings

    DTIC Science & Technology

    2012-02-01

    per dimension” (recall that d is equal to the number of independent subsets of vertices V_k in the hypergraph H^d_{j,n}, and n denotes the number of ... disjoint solutions whose costs are iid random variables. First, recalling the interpretation of feasible MAP solutions as paths in the index graph G, we ... elements. On the other hand, recall that a (feasible) path in G can be described as a set of n vectors D = {(i_1^(1), ..., i_d^(1)), ..., (i^(n)

  8. Existence of ``free will'' as a problem of physics

    NASA Astrophysics Data System (ADS)

    Peres, Asher

    1986-06-01

    The proof of Bell's inequality is based on the assumption that distant observers can freely and independently choose their experiments. As Bell's inequality is experimentally violated, it appears that distant physical systems may behave as a single, nonlocal, indivisible entity. This apparent contradiction is resolved. It is shown that the “free will” assumption is, under usual circumstances, an excellent approximation. I have set before you life and death, blessing and cursing: therefore choose life.... — Deuteronomy XXX, 19

  9. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
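
    A minimal sketch of the factorization described above, in our own notation: with independent surveys d_1, ..., d_K, a primary layer m_1 (e.g. lithology) and a secondary layer m_2 (e.g. physical medium properties), the posterior takes the form

    \[
    p(m_1, m_2 \mid d_1,\dots,d_K) \;\propto\; \Bigg[\prod_{k=1}^{K} p(d_k \mid m_2)\Bigg]\, p(m_2 \mid m_1)\, p(m_1),
    \]

    so the data likelihoods factorize across surveys with independent uncertainties, while the prior factorizes along the hierarchy of model layers.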

  10. Learning the facts in medical school is not enough: which factors predict successful application of procedural knowledge in a laboratory setting?

    PubMed Central

    2013-01-01

    Background Medical knowledge encompasses both conceptual (facts or “what” information) and procedural knowledge (“how” and “why” information). Conceptual knowledge is known to be an essential prerequisite for clinical problem solving. Primarily, medical students learn from textbooks and often struggle with the process of applying their conceptual knowledge to clinical problems. Recent studies address the question of how to foster the acquisition of procedural knowledge and its application in medical education. However, little is known about the factors which predict performance in procedural knowledge tasks. Which additional factors of the learner predict performance in procedural knowledge? Methods Domain specific conceptual knowledge (facts) in clinical nephrology was provided to 80 medical students (3rd to 5th year) using electronic flashcards in a laboratory setting. Learner characteristics were obtained by questionnaires. Procedural knowledge in clinical nephrology was assessed by key feature problems (KFP) and problem solving tasks (PST) reflecting strategic and conditional knowledge, respectively. Results Results in procedural knowledge tests (KFP and PST) correlated significantly with each other. In univariate analysis, performance in procedural knowledge (sum of KFP+PST) was significantly correlated with the results in (1) the conceptual knowledge test (CKT), (2) the intended future career as hospital based doctor, (3) the duration of clinical clerkships, and (4) the results in the written German National Medical Examination Part I on preclinical subjects (NME-I). After multiple regression analysis only clinical clerkship experience and NME-I performance remained independent influencing factors. Conclusions Performance in procedural knowledge tests seems independent from the degree of domain specific conceptual knowledge above a certain level. Procedural knowledge may be fostered by clinical experience. More attention should be paid to the interplay of individual clinical clerkship experiences and structured teaching of procedural knowledge and its assessment in medical education curricula. PMID:23433202

  11. A two-step hierarchical hypothesis set testing framework, with applications to gene expression data on ordered categories

    PubMed Central

    2014-01-01

    Background In complex large-scale experiments, in addition to simultaneously considering a large number of features, multiple hypotheses are often being tested for each feature. This leads to a problem of multi-dimensional multiple testing. For example, in gene expression studies over ordered categories (such as time-course or dose-response experiments), interest is often in testing differential expression across several categories for each gene. In this paper, we consider a framework for testing multiple sets of hypotheses, which can be applied to a wide range of problems. Results We adopt the concept of the overall false discovery rate (OFDR) for controlling false discoveries on the hypothesis set level. Based on an existing procedure for identifying differentially expressed gene sets, we discuss a general two-step hierarchical hypothesis set testing procedure, which controls the overall false discovery rate under independence across hypothesis sets. In addition, we discuss the concept of the mixed-directional false discovery rate (mdFDR), and extend the general procedure to enable directional decisions for two-sided alternatives. We applied the framework to the case of microarray time-course/dose-response experiments, and proposed three procedures for testing differential expression and making multiple directional decisions for each gene. Simulation studies confirm the control of the OFDR and mdFDR by the proposed procedures under independence and positive correlations across genes. Simulation results also show that two of our new procedures achieve higher power than previous methods. Finally, the proposed methodology is applied to a microarray dose-response study, to identify 17 β-estradiol sensitive genes in breast cancer cells that are induced at low concentrations. Conclusions The framework we discuss provides a platform for multiple testing procedures covering situations involving two (or potentially more) sources of multiplicity. The framework is easy to use and adaptable to various practical settings that frequently occur in large-scale experiments. Procedures generated from the framework are shown to maintain control of the OFDR and mdFDR, quantities that are especially relevant in the case of multiple hypothesis set testing. The procedures work well in both simulations and real datasets, and are shown to have better power than existing methods. PMID:24731138
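
    A loose sketch of a two-step selection-then-testing scheme in the spirit described above (our own simplification; it is not the authors' exact procedure and carries no claim of OFDR or mdFDR control): screen hypothesis sets (e.g. genes) with a set-level p-value, then test the individual hypotheses only inside the selected sets.

        import numpy as np

        def benjamini_hochberg(pvals, alpha):
            p = np.asarray(pvals, float)
            order = np.argsort(p)
            thresh = alpha * np.arange(1, len(p) + 1) / len(p)
            passed = p[order] <= thresh
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            rejected = np.zeros(len(p), bool)
            rejected[order[:k]] = True
            return rejected

        def two_step(set_pvals, within_pvals, alpha=0.05):
            """set_pvals: (n_sets,) screening p-value per hypothesis set.
            within_pvals: list of arrays, p-values of the hypotheses inside each set."""
            selected = benjamini_hochberg(set_pvals, alpha)          # step 1: select hypothesis sets
            results = []
            for sel, p_in in zip(selected, within_pvals):            # step 2: test within selected sets
                results.append(benjamini_hochberg(p_in, alpha) if sel
                               else np.zeros(len(p_in), bool))
            return selected, results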

  12. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. Compared with PLS, the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
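    As a rough illustration of how PLS handles the collinear polynomial terms, the sketch below regresses a synthetic log-chlorophyll response on the correlated powers of a log band ratio using scikit-learn's PLSRegression. The data, coefficients, and the choice of two components are placeholders, not values from NOMAD or the operational OC algorithms.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for the band-ratio predictors: the powers x, x^2, x^3, x^4
# of the log blue-green band ratio are strongly correlated with each other.
log_ratio = rng.normal(0.0, 0.3, size=500)
X = np.column_stack([log_ratio ** k for k in range(1, 5)])

# Synthetic log-chlorophyll response (placeholder, not NOMAD data).
y = 0.3 - 2.5 * log_ratio + 1.2 * log_ratio ** 2 + rng.normal(0, 0.05, size=500)

# PLS projects the correlated polynomial terms onto a few mutually independent
# components before regressing, which sidesteps the multicollinearity of the
# raw fourth-order polynomial.
pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2 on the synthetic sample:", round(pls.score(X, y), 3))
```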

  13. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
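    A minimal sketch of the idea of bootstrapping whole training patches rather than individual pixels is given below, built from scikit-learn decision trees rather than the authors' modified random forest implementation; the per-patch bootstrap keeps every pixel of a patch either entirely in-bag or entirely out-of-bag, so the out-of-bag estimate is not inflated by spatially dependent neighbours.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def patch_level_forest(X, y, patch_ids, n_trees=100, seed=0):
    """Forest whose bootstrap unit is the training patch, not the pixel."""
    rng = np.random.default_rng(seed)
    patches = np.unique(patch_ids)
    classes = np.unique(y)
    votes = np.zeros((len(y), len(classes)))  # out-of-bag votes per pixel/class
    trees = []
    for _ in range(n_trees):
        # Bootstrap whole patches: a patch is either entirely in-bag or
        # entirely out-of-bag, never split between the two sets.
        boot = rng.choice(patches, size=len(patches), replace=True)
        in_bag = np.isin(patch_ids, boot)
        tree = DecisionTreeClassifier(max_features="sqrt",
                                      random_state=int(rng.integers(2**31 - 1)))
        tree.fit(X[in_bag], y[in_bag])
        trees.append(tree)
        oob_idx = np.flatnonzero(~in_bag)
        if oob_idx.size:
            pred = tree.predict(X[oob_idx])
            for ci, c in enumerate(classes):
                votes[oob_idx[pred == c], ci] += 1
    voted = votes.sum(axis=1) > 0
    oob_pred = classes[votes[voted].argmax(axis=1)]
    return trees, float((oob_pred == y[voted]).mean())

# Toy data: 20 patches of 25 "pixels" each, with a shared within-patch effect.
rng = np.random.default_rng(1)
patch_ids = np.repeat(np.arange(20), 25)
y = patch_ids % 2                                      # class decided per patch
patch_effect = rng.normal(0, 0.7, size=20)[patch_ids]  # shared within each patch
X = rng.normal((y + patch_effect)[:, None], 0.3, size=(len(y), 4))
forest, oob_acc = patch_level_forest(X, y, patch_ids)
print("patch-level OOB accuracy:", round(oob_acc, 3))
```

On data like this, a standard pixel-level bootstrap would typically report a noticeably higher, over-optimistic out-of-bag accuracy than the patch-level estimate above.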

  14. Harnessing the Bethe free energy†

    PubMed Central

    Bapst, Victor

    2016-01-01

    ABSTRACT A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k‐SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the “replica symmetric cavity method” yields the correct value of the partition function if the underlying model enjoys certain properties [Krzkala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a “regularity lemma” for probability measures on sets of the form Ωn for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178

  15. Self-management of chronic low back pain and osteoarthritis.

    PubMed

    May, Stephen

    2010-04-01

    Chronic low back pain and osteoarthritis are two musculoskeletal problems that are highly prevalent in the general population, are frequently episodic and persistent, and are associated with high costs to society, both direct and indirect. This epidemiological picture provides the background that justifies the use of self-management strategies in managing these problems. For this Review, relevant systematic reviews were included that related to effectiveness; other study designs were included that addressed other aspects of the topic. The accepted definition of self-management includes liaison between health professionals and individuals with these problems, as well as independent health-promotion activities. Independent self-management strategies, such as exercise and self-medication, are practiced by individuals in the general population. Consistent evidence shows that self-management programs for osteoarthritis are effective in addressing pain and function, but effect sizes are small and might be clinically negligible. Educational programs for patients with back pain are effective in an occupational setting and if combined with an exercise program. Exercise is an effective strategy in the management of both chronic low back pain and osteoarthritis, although it is unclear what the optimum exercise is. Exercise, supported by advice and education, should be at the core of self-management strategies for chronic low back pain and osteoarthritis.

  16. How do public health policies tackle alcohol-related harm: a review of 12 developed countries.

    PubMed

    Crombie, Iain K; Irvine, Linda; Elliott, Lawrence; Wallace, Hilary

    2007-01-01

    To identify how current public health policies of 12 developed countries assess alcohol-related problems, the goals and targets that are set and the strategic directives proposed. Policy documents on alcohol and on general public health were obtained through repeated searches of government websites. Documents were reviewed by two independent observers. All the countries studied state that alcohol causes substantial harm to individual health and family well-being, increases crime and social disruption, and results in economic loss through lost productivity. All are concerned about consumption of alcohol by young adults and by heavy and problem drinkers. Few aim to reduce total consumption. Only five of the countries set specific targets for changes in drinking behaviour. Countries vary in their commitment to intervene, particularly on taxation, drink-driving, the drinking environment and for high-risk groups. Australia and New Zealand stand out as having coordinated intervention programmes in most areas. Policies differ markedly in their organization, the goals and targets that are set, the strategic approaches proposed and areas identified for intervention. Most countries could improve their policies by following the recommendations in the World Health Organization's European Alcohol Action Plan.

  17. Housing conditions and mental health in a disadvantaged area in Scotland.

    PubMed Central

    Hopton, J L; Hunt, S M

    1996-01-01

    OBJECTIVE: To examine the mental health impact of different aspects of poor housing. DESIGN: This was a post hoc analysis of data from a household interview survey. SETTING: A public sector housing estate on the outskirts of Glasgow. SUBJECTS: These comprised 114 men and 333 women aged between 17 and 65 from 451 households. MEASURES: Dependent variable: scoring > or = 5 on the 30 item general health questionnaire (GHQ30). Independent variables: self reported data on household composition, whether ill health was a factor in the move to the current dwelling, length of time at address, household income, whether the respondent was employed, chronic illness, and 6 problems with the dwelling. RESULTS: Reporting a problem with dampness was significantly and independently associated with scores of > or = 5 on the GHQ30 after controlling for possible confounding variables. CONCLUSION: Initiatives to tackle housing dampness may be important in developing a strategy to improve mental health for the study area. More research on the mental health impact of different aspects of poor housing is required. PMID:8762355

  18. Parenting and independent problem-solving in preschool children with food allergy.

    PubMed

    Dahlquist, Lynnda M; Power, Thomas G; Hahn, Amy L; Hoehn, Jessica L; Thompson, Caitlin C; Herbert, Linda J; Law, Emily F; Bollinger, Mary Elizabeth

    2015-01-01

    To examine autonomy-promoting parenting and independent problem-solving in children with food allergy. 66 children with food allergy, aged 3-6 years, and 67 age-matched healthy peers and their mothers were videotaped while completing easy and difficult puzzles. Coders recorded time to puzzle completion, children's direct and indirect requests for help, and maternal help-giving behaviors. Compared with healthy peers, younger (3- to 4-year-old) children with food allergy made more indirect requests for help during the easy puzzle, and their mothers were more likely to provide unnecessary help (i.e., explain where to place a puzzle piece). Differences were not found for older children. The results suggest that highly involved parenting practices that are medically necessary to manage food allergy may spill over into settings where high levels of involvement are not needed, and that young children with food allergy may be at increased risk for difficulties in autonomy development. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Multiparameter Estimation in Networked Quantum Sensors

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-01

    We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  20. Sleep problems: an emerging global epidemic? Findings from the INDEPTH WHO-SAGE study among more than 40,000 older adults from 8 countries across Africa and Asia.

    PubMed

    Stranges, Saverio; Tigbe, William; Gómez-Olivé, Francesc Xavier; Thorogood, Margaret; Kandala, Ngianga-Bakwin

    2012-08-01

    To estimate the prevalence of sleep problems and the effect of potential correlates in low-income settings from Africa and Asia, where the evidence is lacking. Cross-sectional. Community-wide samples from 8 countries across Africa and Asia participating in the INDEPTH WHO-SAGE multicenter collaboration during 2006-2007. The participating sites included rural populations in Ghana, Tanzania, South Africa, India, Bangladesh, Vietnam, and Indonesia, and an urban area in Kenya. There were 24,434 women and 19,501 men age 50 yr and older. N/A. Two measures of sleep quality, over the past 30 days, were assessed alongside a number of sociodemographic variables, measures of quality of life, and comorbidities. Overall, 16.6% of participants reported severe/extreme nocturnal sleep problems, with a striking variation across the 8 populations, ranging from 3.9% (Purworejo, Indonesia and Nairobi, Kenya) to more than 40.0% (Matlab, Bangladesh). There was a consistent pattern of higher prevalence of sleep problems in women and older age groups. In bivariate analyses, lower education, not living in partnership, and poorer self-rated quality of life were consistently associated with higher prevalence of sleep problems (P < 0.001). In multivariate logistic regression analyses, limited physical functionality or greater disability and feelings of depression and anxiety were consistently strong, independent correlates of sleep problems, in both women and men, across the 8 sites (P < 0.001). A large number of older adults in low-income settings are currently experiencing sleep problems, which emphasizes the global dimension of this emerging public health issue. This study corroborates the multifaceted nature of sleep problems, which are strongly linked to poorer general well-being and quality of life, and psychiatric comorbidities.

  1. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
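    One plausible reading of the two-step procedure is sketched below with scikit-learn's FastICA and a simple quantile-mapping corrector: the observations are unmixed into independent signals, the model data are projected with the same unmixing, each signal is corrected as a univariate series and mixed back, and a grid-by-grid correction is applied afterwards. The synthetic arrays stand in for GCM output and observations, and the univariate quantile-mapping corrector is an assumption, not necessarily the corrector used in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def quantile_map(model, obs):
    """Map each model value onto the observed distribution by matching quantiles."""
    ranks = np.argsort(np.argsort(model)) / (len(model) - 1)
    return np.quantile(obs, ranks)

# Synthetic stand-ins: rows = time steps, columns = grid cells.
rng = np.random.default_rng(1)
obs = rng.multivariate_normal([10, 12, 11], [[4, 3, 2], [3, 5, 3], [2, 3, 4]], size=2000)
gcm = rng.multivariate_normal([12, 15, 12], [[6, 1, 0], [1, 7, 1], [0, 1, 5]], size=2000)

# Step 1 (spatial dependence): unmix the observations into statistically
# independent signals, project the model data with the same unmixing matrix,
# correct each signal as a univariate series, then mix back to grid space.
ica = FastICA(n_components=3, random_state=0)
obs_sources = ica.fit_transform(obs)
gcm_sources = ica.transform(gcm)
corrected_sources = np.column_stack([
    quantile_map(gcm_sources[:, k], obs_sources[:, k])
    for k in range(obs_sources.shape[1])
])
gcm_spatial = ica.inverse_transform(corrected_sources)

# Step 2 (grid scale): ordinary univariate quantile mapping per grid cell.
gcm_corrected = np.column_stack([
    quantile_map(gcm_spatial[:, j], obs[:, j]) for j in range(obs.shape[1])
])
print("corrected spatial correlation:\n", np.corrcoef(gcm_corrected, rowvar=False).round(2))
```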

  2. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  3. Steps to Independence for People with Learning Disabilities.

    ERIC Educational Resources Information Center

    Brown, Dale

    The booklet is designed to help learning disabled (LD) adults become economically independent and fulfill their potential. Introductory chapters define LD and specify such types of LD as auditory perceptual problems, catastrophic responses, directional problems, disinhibition, perceptual problems, and short term memory problems. Psychological…

  4. An effective rumor-containing strategy

    NASA Astrophysics Data System (ADS)

    Pan, Cheng; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan

    2018-06-01

    False rumors can lead to huge economic losses and/or social instability. Hence, mitigating the impact of bogus rumors is of primary importance. This paper focuses on the problem of how to suppress a false rumor by use of the truth. Based on a set of rational hypotheses and a novel rumor-truth mixed spreading model, the effectiveness and cost of a rumor-containing strategy are quantified, respectively. On this basis, the original problem is modeled as a constrained optimization problem (the RC model), in which the independent variable and the objective function represent a rumor-containing strategy and the effectiveness of a rumor-containing strategy, respectively. The goal of the optimization problem is to find the most effective rumor-containing strategy subject to a limited rumor-containing budget. Some optimal rumor-containing strategies are given by solving their respective RC models. The influence of different factors on the highest cost effectiveness of an RC model is illuminated through computer experiments. The results obtained are instructive for developing effective rumor-containing strategies.

  5. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; an assignment optimizing the constrained throughput can be found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.

  6. A new algorithm to create balanced teams promoting more diversity

    NASA Astrophysics Data System (ADS)

    Dias, Teresa Galvão; Borges, José

    2017-11-01

    The problem of assigning students to teams can be described as maximising the diversity of their profiles within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem and it is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose an alternative algorithm in which the within-group heterogeneity is measured by the attributes' variance instead of by the sum of distances between group members. The proposed algorithm is evaluated by means of two real data sets and the results suggest that it induces better solutions according to two independent evaluation criteria, the Davies-Bouldin index and the number of dominated teams. In conclusion, the results show that it is more adequate to use the attributes' variance to measure the heterogeneity of profiles within the teams and the homogeneity among teams.
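    The sketch below illustrates the variance-based objective described above with a simple random-swap local search: within-team heterogeneity is the mean attribute variance inside each team, and similarity among teams is measured by the spread of the team mean profiles. The search strategy and the scalarised objective are placeholders for illustration, not the authors' algorithm.

```python
import numpy as np

def team_scores(X, assign, n_teams):
    """Within-team heterogeneity = mean attribute variance inside each team;
    between-team spread = variance of the team mean profiles (lower = more alike)."""
    within = np.mean([X[assign == t].var(axis=0).mean() for t in range(n_teams)])
    means = np.vstack([X[assign == t].mean(axis=0) for t in range(n_teams)])
    return within, means.var(axis=0).mean()

def balanced_teams(X, n_teams, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    assign = np.arange(len(X)) % n_teams          # equal-sized teams to start with
    rng.shuffle(assign)

    def objective(a):
        within, between = team_scores(X, a, n_teams)
        return within - between                   # diverse teams that resemble each other

    best = objective(assign)
    for _ in range(iters):
        i, j = rng.integers(len(X), size=2)
        if assign[i] == assign[j]:
            continue
        assign[i], assign[j] = assign[j], assign[i]      # try swapping two students
        score = objective(assign)
        if score >= best:
            best = score
        else:
            assign[i], assign[j] = assign[j], assign[i]  # undo harmful swaps
    return assign

# Example: 24 students described by 3 attributes, split into 4 teams of 6.
rng = np.random.default_rng(3)
profiles = rng.normal(size=(24, 3))
teams = balanced_teams(profiles, n_teams=4)
print("team sizes:", np.bincount(teams))
```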

  7. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, for e.g., facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or can be temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. 
This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations over time. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions and especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interactions. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in an environment with CANoe/MATLAB co-simulation. Eleven faults are injected with the failures originating in actuator hardware, sensor, controller hardware and software components. Diagnostic matrix is established to represent the relationship between the faults and the test outcomes (also known as fault signatures) via simulations. The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.

  8. Effects of extending the one-more-than technique with the support of a mobile purchasing assistance system.

    PubMed

    Hsu, Guo-Liang; Tang, Jung-Chang; Hwang, Wu-Yuin

    2014-08-01

    The one-more-than technique is an effective strategy for individuals with intellectual disabilities (ID) to use when making purchases. However, the heavy cognitive demands of money counting skills potentially limit how individuals with ID shop. This study employed a multiple-probe design across participants and settings, via the assistance of a mobile purchasing assistance system (MPAS), to assess the effectiveness of the one-more-than technique on independent purchases for items with prices beyond the participants' money counting skills. Results indicated that the techniques with the MPAS could effectively convert participants' initial money counting problems into useful advantages for successfully promoting the independent purchasing skills of three secondary school students with ID. Also noteworthy is the fact that mobile technologies could be a permanent prompt for those with ID to make purchases in their daily lives. The treatment effects could be maintained for eight weeks and generalized across three community settings. Implications for practice and future studies are provided. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets do not correspond in size or in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641

  10. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  11. A new non-iterative reconstruction method for the electrical impedance tomography problem

    NASA Astrophysics Data System (ADS)

    Ferreira, A. D.; Novotny, A. A.

    2017-03-01

    The electrical impedance tomography (EIT) problem consists in determining the distribution of the electrical conductivity of a medium subject to a set of current fluxes, from measurements of the corresponding electrical potentials on its boundary. EIT is probably the most studied inverse problem since the fundamental works by Calderón from the 1980s. It has many relevant applications in medicine (detection of tumors), geophysics (localization of mineral deposits) and engineering (detection of corrosion in structures). In this work, we are interested in reconstructing a number of anomalies with different electrical conductivity from the background. Since the EIT problem is written in the form of an overdetermined boundary value problem, the idea is to rewrite it as a topology optimization problem. In particular, a shape functional measuring the misfit between the boundary measurements and the electrical potentials obtained from the model is minimized with respect to a set of ball-shaped anomalies by using the concept of topological derivatives. It means that the objective functional is expanded and then truncated up to the second order term, leading to a quadratic and strictly convex form with respect to the parameters under consideration. Thus, a trivial optimization step leads to a non-iterative second order reconstruction algorithm. As a result, the reconstruction process becomes very robust with respect to noisy data and independent of any initial guess. Finally, in order to show the effectiveness of the devised reconstruction algorithm, some numerical experiments into two spatial dimensions are presented, taking into account total and partial boundary measurements.
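    The "trivial optimization step" mentioned above amounts to minimising a strictly convex quadratic: once the misfit functional has been expanded to second order in the anomaly parameters, the minimiser is obtained by solving a single linear system. The gradient and Hessian in this sketch are arbitrary stand-ins, not quantities computed from an EIT model.

```python
import numpy as np

# Second-order expansion of the misfit functional in the anomaly parameters a:
#   J(a) ≈ J0 + g·a + 0.5 * a' H a,
# with H symmetric positive definite. The minimiser solves H a* = -g in one
# step: no iterations and no initial guess are needed.
g = np.array([-1.2, 0.4, -0.3])            # illustrative gradient
H = np.array([[4.0, 0.5, 0.1],             # illustrative Hessian
              [0.5, 3.0, 0.2],
              [0.1, 0.2, 2.5]])
a_star = np.linalg.solve(H, -g)
print("non-iterative reconstruction parameters:", a_star.round(3))
```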

  12. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.

  13. Promoting the Multidimensional Character of Scientific Reasoning †

    PubMed Central

    Bradshaw, William S.; Nelson, Jennifer; Adams, Byron J.; Bell, John D.

    2017-01-01

    This study reports part of a long-term program to help students improve scientific reasoning using higher-order cognitive tasks set in the discipline of cell biology. This skill was assessed using problems requiring the construction of valid conclusions drawn from authentic research data. We report here efforts to confirm the hypothesis that data interpretation is a complex, multifaceted exercise. Confirmation was obtained using a statistical treatment showing that various such problems rank students differently—each contains a unique set of cognitive challenges. Additional analyses of performance results have allowed us to demonstrate that individuals differ in their capacity to navigate five independent generic elements that constitute successful data interpretation: biological context, connection to course concepts, experimental protocols, data inference, and integration of isolated experimental observations into a coherent model. We offer these aspects of scientific thinking as a “data analysis skills inventory,” along with usable sample problems that illustrate each element. Additionally, we show that this kind of reasoning is rigorous in that it is difficult for most novice students, who are unable to intuitively implement strategies for improving these skills. Instructors armed with knowledge of the specific challenges presented by different types of problems can provide specific helpful feedback during formative practice. The use of this instructional model is most likely to require changes in traditional classroom instruction. PMID:28512524

  14. Combining independent decisions increases diagnostic accuracy of reading lumbosacral radiographs and magnetic resonance imaging.

    PubMed

    Kurvers, Ralf H J M; de Zoete, Annemarie; Bachman, Shelby L; Algra, Paul R; Ostelo, Raymond

    2018-01-01

    Diagnosing the causes of low back pain is a challenging task, prone to errors. A novel approach to increase diagnostic accuracy in medical decision making is collective intelligence, which refers to the ability of groups to outperform individual decision makers in solving problems. We investigated whether combining the independent ratings of chiropractors, chiropractic radiologists and medical radiologists can improve diagnostic accuracy when interpreting diagnostic images of the lumbosacral spine. Evaluations were obtained from two previously published studies: study 1 consisted of 13 raters independently rating 300 lumbosacral radiographs; study 2 consisted of 14 raters independently rating 100 lumbosacral magnetic resonance images. In both studies, raters evaluated the presence of "abnormalities", which are indicators of a serious health risk and warrant immediate further examination. We combined independent decisions of raters using a majority rule which takes as final diagnosis the decision of the majority of the group. We compared the performance of the majority rule to the performance of single raters. Our results show that with increasing group size (i.e., increasing the number of independent decisions) both sensitivity and specificity increased in both data-sets, with groups consistently outperforming single raters. These results were found for radiographs and MR image reading alike. Our findings suggest that combining independent ratings can improve the accuracy of lumbosacral diagnostic image reading.
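    A small Monte Carlo sketch of why the majority rule helps is given below, under the simplifying assumptions of conditionally independent raters with identical individual sensitivity and specificity; the numbers are chosen arbitrarily and are not taken from the two studies.

```python
import numpy as np

def majority_performance(sens, spec, group_size, n_cases=100_000, prevalence=0.3, seed=0):
    """Sensitivity/specificity of a strict majority vote over `group_size`
    conditionally independent raters with identical individual accuracy."""
    rng = np.random.default_rng(seed)
    truth = rng.random(n_cases) < prevalence                 # abnormality present?
    p_positive = np.where(truth, sens, 1.0 - spec)           # per-rater P(rate positive)
    votes = rng.random((group_size, n_cases)) < p_positive   # each rater votes
    decision = votes.sum(axis=0) > group_size / 2            # strict majority
    return decision[truth].mean(), (~decision[~truth]).mean()

for k in (1, 3, 5, 7):
    s, p = majority_performance(sens=0.75, spec=0.80, group_size=k)
    print(f"group of {k}: sensitivity = {s:.3f}, specificity = {p:.3f}")
```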

  15. The control of ventilation during exercise: a lesson in critical thinking.

    PubMed

    Bruce, Richard M

    2017-12-01

    Learning the basic competencies of critical thinking is very important in the education of any young scientist, and teachers must be prepared to help students develop a valuable set of analytic tools. In my experience, this is best achieved by encouraging students to study areas with little scientific consensus, such as the control mechanisms of the exercise ventilatory response, as it can allow greater objectivity when evaluating evidence, while also giving students the freedom to think independently and problem solve. In this article, I discuss teaching strategies by which physiology, biomedical science, and sport science students can simultaneously develop their understanding of respiratory control mechanisms and learn to critically analyze evidence thoroughly. This can be best achieved by utilizing both teacher-led and student-led learning environments, the latter of which encourages the development of learner autonomy and independent problem solving. In this article, I also aim to demonstrate a systematic approach to critical assessment that students can be taught, adapt, and apply independently. Among other things, this strategy involves: 1) defining the precise phenomenon in question; 2) understanding what investigations must demonstrate to explain the phenomenon and its underlying mechanisms; 3) evaluating the explanations/mechanisms of the phenomenon and the evidence for them; and 4) forming strategies to produce strong evidence, if none exists. Copyright © 2017 the American Physiological Society.

  16. The M Word: Multicollinearity in Multiple Regression.

    ERIC Educational Resources Information Center

    Morrow-Howell, Nancy

    1994-01-01

    Notes that existence of substantial correlation between two or more independent variables creates problems of multicollinearity in multiple regression. Discusses multicollinearity problem in social work research in which independent variables are usually intercorrelated. Clarifies problems created by multicollinearity, explains detection of…

  17. Provenance Challenges for Earth Science Dataset Publication

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2011-01-01

    Modern science is increasingly dependent on computational analysis of very large data sets. Organizing, referencing, publishing those data has become a complex problem. Published research that depends on such data often fails to cite the data in sufficient detail to allow an independent scientist to reproduce the original experiments and analyses. This paper explores some of the challenges related to data identification, equivalence and reproducibility in the domain of data intensive scientific processing. It will use the example of Earth Science satellite data, but the challenges also apply to other domains.

  18. Tri-state oriented parallel processing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tenenbaum, J.; Wallach, Y.

    1982-08-01

    An alternating sequential/parallel system, MOPPS, was introduced a few years ago; although it solved a number of real-time problems satisfactorily, it is modified here. The new system, TOPPS, is described and compared with MOPPS, and two applications are chosen to show that it is superior. The advantage of having a third basic mode, the ring mode, is illustrated when solving sets of linear equations with band matrices. The advantage of having independent I/O for the slaves is illustrated for biomedical signal analysis. 11 references.

  19. Group analysis for natural convection from a vertical plate

    NASA Astrophysics Data System (ADS)

    Rashed, A. S.; Kassem, M. M.

    2008-12-01

    The steady laminar natural convection of a fluid having a chemical reaction of order n past a semi-infinite vertical plate is considered. The solution of the problem by means of the one-parameter group method reduces the number of independent variables by one, leading to a system of nonlinear ordinary differential equations. Two different similarity transformations are found. In each case the set of differential equations is solved numerically using the Runge-Kutta and shooting methods. For each transformation, different Schmidt numbers and chemical reaction orders are tested.

  20. The research and implementation of a unified identity authentication in e-government network

    NASA Astrophysics Data System (ADS)

    Feng, Zhou

    A current problem in e-government networks is that information system applications are developed independently by various departments, each with its own specific authentication and access control mechanisms. To build a comprehensive information system that supports sharing and exchanging information, a sound and secure unified e-government authentication system is needed first. Drawing on the practical development of an e-government network, the paper discusses in detail how to achieve data synchronization between the unified authentication system and the related application systems.

  1. Hysteroscopic sterilization success in outpatient vs office setting is not affected by patient or procedural characteristics.

    PubMed

    Anderson, Ted L; Yunker, Amanda C; Scheib, Stacey A; Callahan, Tamara L

    2013-01-01

    To determine factors associated with hysteroscopic sterilization success and whether it differs between the operating room and office settings. Retrospective cohort analysis (Canadian Task Force classification II-2). Major university medical center. Six hundred thirty-eight women who underwent hysteroscopic sterilization between July 1, 2005, and June 30, 2011. Data collected included age, body mass index, previous office procedures, previous cesarean section, and presence of myomas or retroverted uterus. Place of surgery, experience of surgeon, insurance type, bilateral device placement, compliance with hysterosalpingography, and confirmation of occlusion were also recorded. Bivariate analysis of patient characteristics between groups was performed using χ² and independent t tests, and identified confounders and associated variables. Multivariate analysis was performed using logistic regression to assess for association and to adjust for confounders. Procedures were performed in the operating room (57%) or in the office (43%). There was no association between success in bilateral device placement or occlusion and any patient characteristic, regardless of surgery setting. Private insurance, patient age, and performance of procedures in the office setting were positively associated with likelihood of compliance with hysterosalpingography. Successful device placement and tubal occlusion are independent of patient age, body mass index, or setting of the procedure. Association between insurance type and completing hysterosalpingography illustrates an important public health problem. Patients who fail to undergo hysterosalpingography to confirm tubal occlusion may unknowingly be at risk of pregnancy and increased risk of ectopic pregnancy. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  2. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    PubMed

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
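    For orientation, the classical (non-sparse) Rayleigh quotient maximization that QUADRO generalizes can be written as a generalized eigenvalue problem, as in the sketch below; the scatter matrices and data are synthetic, and QUADRO's elliptical modelling, robust estimation, and sparsity penalty are deliberately omitted.

```python
import numpy as np
from scipy.linalg import eigh

# Two synthetic classes; the Rayleigh quotient of a direction w is
#   R(w) = (w' B w) / (w' W w),
# with B the between-class scatter and W the within-class scatter.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(300, 10))
X1 = rng.normal(0.5, 1.5, size=(300, 10))

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
d = (mu0 - mu1)[:, None]
B = d @ d.T                                              # between-class scatter
W = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter

# The maximizer of R(w) is the leading generalized eigenvector of (B, W).
vals, vecs = eigh(B, W)
w_star = vecs[:, -1]
print("maximal Rayleigh quotient:", round(vals[-1], 3))
```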

  3. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  4. An approximate stationary solution for multi-allele neutral diffusion with low mutation rates.

    PubMed

    Burden, Conrad J; Tang, Yurong

    2016-12-01

    We address the problem of determining the stationary distribution of the multi-allelic, neutral-evolution Wright-Fisher model in the diffusion limit. A full solution to this problem for an arbitrary K×K mutation rate matrix involves solving for the stationary solution of a forward Kolmogorov equation over a (K-1)-dimensional simplex, and remains intractable. In most practical situations mutation rates are slow on the scale of the diffusion limit and the solution is heavily concentrated on the corners and edges of the simplex. In this paper we present a practical approximate solution for slow mutation rates in the form of a set of line densities along the edges of the simplex. The method of solution relies on parameterising the general non-reversible rate matrix as the sum of a reversible part and a set of (K-1)(K-2)/2 independent terms corresponding to fluxes of probability along closed paths around faces of the simplex. The solution is potentially a first step in estimating non-reversible evolutionary rate matrices from observed allele frequency spectra. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. High performance computing aspects of a dimension independent semi-Lagrangian discontinuous Galerkin code

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas

    2016-05-01

    The recently developed semi-Lagrangian discontinuous Galerkin approach is used to discretize hyperbolic partial differential equations (usually first order equations). Since these methods are conservative, local in space, and able to limit numerical diffusion, they are considered a promising alternative to more traditional semi-Lagrangian schemes (which are usually based on polynomial or spline interpolation). In this paper, we consider a parallel implementation of a semi-Lagrangian discontinuous Galerkin method for distributed memory systems (so-called clusters). Both strong and weak scaling studies are performed on the Vienna Scientific Cluster 2 (VSC-2). In the case of weak scaling we observe a parallel efficiency above 0.8 for both two and four dimensional problems and up to 8192 cores. Strong scaling results show good scalability to at least 512 cores (we consider problems that can be run on a single processor in reasonable time). In addition, we study the scaling of a two dimensional Vlasov-Poisson solver that is implemented using the framework provided. All of the simulations are conducted in the context of worst case communication overhead; i.e., in a setting where the CFL (Courant-Friedrichs-Lewy) number increases linearly with the problem size. The framework introduced in this paper facilitates a dimension independent implementation of scientific codes (based on C++ templates) using both an MPI and a hybrid approach to parallelization. We describe the essential ingredients of our implementation.

  6. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  7. Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex

    2004-01-01

    There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large (order of millions - tens of millions and more documents) document collections and therefore, most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document and individual requirements considered as query input to IR methods are not necessarily independent from each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II; (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool; (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
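    The baseline IR step that the toolkit builds on can be sketched as follows: vectorize the high-level and low-level requirement texts with TF-IDF and rank candidate trace links by cosine similarity. The requirement strings below are invented placeholders, and the Phase II enhancements (exploiting dependencies between requirements and analyst feedback) are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented placeholder requirements; a real tracing run would read the
# high-level specification and the low-level design/requirements documents.
high_level = [
    "The system shall log all telemetry frames received from the spacecraft.",
    "The operator shall be able to replay archived telemetry.",
]
low_level = [
    "Store each incoming telemetry frame with a timestamp in the archive database.",
    "Provide a replay mode that streams archived frames to the display.",
    "Render the spacecraft attitude on the 3D visualization panel.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(high_level + low_level)
hi, lo = tfidf[: len(high_level)], tfidf[len(high_level):]

# Candidate trace links: rank low-level items by cosine similarity for each
# high-level requirement; the IV&V analyst still vets the final traceability matrix.
similarity = cosine_similarity(hi, lo)
for i, row in enumerate(similarity):
    ranked = row.argsort()[::-1]
    print(f"HL-{i + 1}:", [(f"LL-{j + 1}", round(float(row[j]), 2)) for j in ranked])
```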

  8. Implementing standard setting into the Conjoint MAFP/FRACGP Part 1 examination - Process and issues.

    PubMed

    Chan, S C; Mohd Amin, S; Lee, T W

    2016-01-01

    The College of General Practitioners of Malaysia and the Royal Australian College of General Practitioners held the first Conjoint Member of the College of General Practitioners (MCGP)/Fellow of Royal Australian College of General Practitioners (FRACGP) examination in 1982, later renamed the Conjoint MAFP/FRACGP examinations. The examination assesses competency for safe independent general practice and as family medicine specialists in Malaysia. Therefore, a defensible standard set pass mark is imperative to separate the competent from the incompetent. This paper discusses the process and issues encountered in implementing standard setting to the Conjoint Part 1 examination. Critical to success in standard setting were judges' understanding of the process of the modified Angoff method, defining the borderline candidate's characteristics and the composition of judges. These were overcome by repeated hands-on training, provision of detailed guidelines and careful selection of judges. In December 2013, 16 judges successfully standard set the Part 1 Conjoint examinations, with high inter-rater reliability: Cronbach's alpha coefficient 0.926 (Applied Knowledge Test), 0.921 (Key Feature Problems).

  9. Problem based learning: the effect of real time data on the website to student independence

    NASA Astrophysics Data System (ADS)

    Setyowidodo, I.; Pramesti, Y. S.; Handayani, A. D.

    2018-05-01

    Science learning is intended to develop as an integrative science rather than as separate disciplines, yet national character education has not yet succeeded in producing more creative and independent Indonesian learners. Problem Based Learning based on real-time data from the website is a learning method that focuses on developing higher-order thinking skills in problem-oriented situations by integrating technology into learning. The essence of this study is the presentation of authentic problems using real-time data available on the website. The purpose of this research is to develop student independence through Problem Based Learning based on real-time website data. The study is development research, with implementation using a purposive sampling technique. The results show an increase in student self-reliance: 47% of students fell into the very high category and 53% into the high category. This learning method can therefore be considered effective in improving students' learning independence in problem-oriented situations.

  10. Attribute-based classification for zero-shot visual object categorization.

    PubMed

    Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan

    2014-03-01

    We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
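
    A minimal sketch of the attribute-based idea (in the style of direct attribute prediction), with synthetic data: per-attribute classifiers are trained on seen classes only, and an unseen class is chosen by how well the predicted attributes match its signature. The signatures, feature model and scoring below are hypothetical simplifications, not the paper's exact probabilistic formulation.

        # Hedged sketch of attribute-based zero-shot classification.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_attr = 5

        # Hypothetical class-attribute signatures (rows: classes, cols: attributes).
        seen_sig = np.array([[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]])    # training classes
        unseen_sig = np.array([[1, 1, 0, 0, 1], [0, 0, 1, 1, 1]])  # target classes

        # Synthetic training images: features correlated with the class signature.
        X_train = np.vstack([sig + 0.3 * rng.normal(size=(100, n_attr))
                             for sig in seen_sig])
        A_train = np.vstack([np.tile(sig, (100, 1)) for sig in seen_sig])

        attr_clfs = [LogisticRegression().fit(X_train, A_train[:, a])
                     for a in range(n_attr)]

        def predict_unseen(x):
            # Probability of each attribute being present, then best-matching signature.
            p = np.array([clf.predict_proba(x.reshape(1, -1))[0, 1] for clf in attr_clfs])
            scores = unseen_sig @ np.log(p + 1e-9) + (1 - unseen_sig) @ np.log(1 - p + 1e-9)
            return scores.argmax()

        x_test = unseen_sig[1] + 0.3 * rng.normal(size=n_attr)
        print("predicted unseen class index:", predict_unseen(x_test))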

  11. Practical Problems with Medication Use that Older People Experience: A Qualitative Study

    PubMed Central

    Notenboom, Kim; Beers, Erna; van Riet-Nales, Diana A; Egberts, Toine C G; Leufkens, Hubert G M; Jansen, Paul A F; Bouvy, Marcel L

    2014-01-01

    Objectives To identify the practical problems that older people experience with the daily use of their medicines and their management strategies to address these problems and to determine the potential clinical relevance thereof. Design Qualitative study with semistructured face-to-face interviews. Setting A community pharmacy and a geriatric outpatient ward. Participants Community-dwelling people aged 70 and older (N = 59). Measurements Participants were interviewed at home. Two researchers coded the reported problems and management strategies independently according to a coding scheme. An expert panel classified the potential clinical relevance of every identified practical problem and associated management strategy using a 3-point scale. Results Two hundred eleven practical problems and 184 management strategies were identified. Ninety-five percent of the participants experienced one or more practical problems with the use of their medicines: problems reading and understanding the instructions for use, handling the outer packaging, handling the immediate packaging, completing preparation before use, and taking the medicine. For 10 participants, at least one of their problems, in combination with the applied management strategy, had potential clinical consequences and 11 cases (5% of the problems) had the potential to cause moderate or severe clinical deterioration. Conclusion Older people experience a number of practical problems using their medicines, and their strategies to manage these problems are sometimes suboptimal. These problems can lead to incorrect medication use with clinically relevant consequences. The findings pose a challenge for healthcare professionals, drug developers, and regulators to diminish these problems. PMID:25516030

  12. Independent Correlates of Reported Gambling Problems amongst Indigenous Australians

    ERIC Educational Resources Information Center

    Stevens, Matthew; Young, Martin

    2010-01-01

    To identify independent correlates of reported gambling problems amongst the Indigenous population of Australia. A cross-sectional design was applied to a nationally representative sample of the Indigenous population. Estimates of reported gambling problems are presented by remoteness and jurisdiction. Multivariable logistic regression was used to…

  13. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  14. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  15. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.
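
    As a generic illustration of the kind of task the toolbox addresses (this plain-Python sketch is not UQTools itself, which is MATLAB-based), probabilistic parametric uncertainty can be propagated through a system model and a failure probability approximated by sampling; the model, distributions and requirement threshold below are hypothetical.

        # Hedged sketch: Monte Carlo estimate of a failure probability under
        # parametric uncertainty for a hypothetical system model.
        import numpy as np

        rng = np.random.default_rng(1)

        def measure_of_interest(k, c):
            # Hypothetical system response as a function of uncertain
            # stiffness-like parameter k and damping-like parameter c.
            return 1.0 / np.sqrt((1.0 - k) ** 2 + (c * k) ** 2)

        # Probabilistic parametric uncertainty (hypothetical distributions).
        k = rng.normal(loc=0.8, scale=0.05, size=100_000)
        c = rng.uniform(low=0.05, high=0.15, size=100_000)

        response = measure_of_interest(k, c)
        failure_prob = np.mean(response > 5.0)   # requirement: response must stay <= 5
        print(f"estimated failure probability: {failure_prob:.4f}")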

  16. Consolidated View on Space Software Engineering Problems - An Empirical Study

    NASA Astrophysics Data System (ADS)

    Silva, N.; Vieira, M.; Ricci, D.; Cotroneo, D.

    2015-09-01

    Independent software verification and validation (ISVV) has been a key process for engineering quality assessment for decades, and is considered in several international standards. The “European Space Agency (ESA) ISVV Guide” is used for the European Space market to drive the ISVV tasks and plans, and to select applicable tasks and techniques. Software artefacts have room for improvement, as shown by the number of issues found during ISVV tasks. This article presents the analysis of the results of a large set of ISVV issues originating from three different ESA missions, amounting to more than 1000 issues. The study presents the main types, triggers and impacts related to the ISVV issues found and sets the path for a global software engineering improvement based on the most common deficiencies identified for space projects.

  17. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

    It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce the high fidelity excore responses. Under this milestone VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for the Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multiassembly problems, and quarter-core problems. VERAView has also been extended to visualize these vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  18. Implementing nurse prescribing: a case study in diabetes.

    PubMed

    Stenner, Karen; Carey, Nicola; Courtenay, Molly

    2010-03-01

    This paper is a report of a study exploring the views of nurses and team members on the implementation of nurse prescribing in diabetes services. Nurse prescribing is adopted as a means of improving service efficiency, particularly where demand outstrips resources. Although factors that support nurse prescribing have been identified, it is not known how these function within specific contexts. This is important as its uptake and use varies according to mode of prescribing and area of practice. A case study was undertaken in nine practice settings across England where nurses prescribed medicines for patients with diabetes. Thematic analysis was conducted on qualitative data from 31 semi-structured interviews undertaken between 2007 and 2008. Participants were qualified nurse prescribers, administrative staff, physicians and non-nurse prescribers. Nurses prescribed more often following the expansion of nurse independent prescribing rights in 2006. Initial implementation problems had been resolved and few current problems were reported. As nurses' roles were well-established, no major alterations to service provision were required to implement nurse prescribing. Access to formal and informal resources for support and training were available. Participants were accepting and supportive of this initiative to improve the efficiency of diabetes services. The main factors that promoted implementation of nurse prescribing in this setting were the ability to prescribe independently, acceptance of the prescribing role, good working relationships between doctors and nurses, and sound organizational and interpersonal support. The history of established nursing roles in diabetes care, and increasing service demand, meant that these diabetes services were primed to assimilate nurse prescribing.

  19. Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data

    PubMed Central

    Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping

    2013-01-01

    Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal), have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
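
    A simplified sketch of the second method's score-fusion idea on synthetic data: each available source gets its own base classifier, the resulting prediction-score columns are completed (here with simple mean imputation as a stand-in for the paper's estimation of missing scores), and a fusion model is fit on the completed matrix. The data, classifiers and imputation step are illustrative assumptions.

        # Hedged sketch of per-source base classifiers plus score-level fusion.
        import numpy as np
        from sklearn.impute import SimpleImputer
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        y = rng.integers(0, 2, size=n)

        # Two hypothetical modalities; the second is missing for about half the subjects.
        X_mri = y[:, None] + rng.normal(size=(n, 10))
        X_pet = y[:, None] + rng.normal(size=(n, 8))
        has_pet = rng.random(n) < 0.5

        scores = np.full((n, 2), np.nan)
        scores[:, 0] = LogisticRegression().fit(X_mri, y).predict_proba(X_mri)[:, 1]
        pet_clf = LogisticRegression().fit(X_pet[has_pet], y[has_pet])
        scores[has_pet, 1] = pet_clf.predict_proba(X_pet[has_pet])[:, 1]

        # Complete the score matrix, then fit the fusion model on the scores.
        scores_full = SimpleImputer(strategy="mean").fit_transform(scores)
        fusion = LogisticRegression().fit(scores_full, y)
        print("fusion training accuracy:", fusion.score(scores_full, y))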

  20. Mathematical theory of a relaxed design problem in structural optimization

    NASA Technical Reports Server (NTRS)

    Kikuchi, Noboru; Suzuki, Katsuyuki

    1990-01-01

    Various attempts have been made to construct a rigorous mathematical theory of optimization for the size, shape, and topology (i.e. layout) of an elastic structure. If these are represented by a finite number of parametric functions, as Armand described, it is possible to construct an existence theory for the optimum design using a compactness argument in a finite dimensional design space or a closed admissible set of a finite dimensional design space. However, if the admissible design set is a subset of a non-reflexive Banach space such as L∞(Omega), construction of the existence theory of the optimum design suddenly becomes difficult and requires extending (i.e. generalizing) the design problem to a much wider class of designs that is compatible with the mechanics of structures in the sense of the variational principle. Starting from the study by Cheng and Olhoff, Lurie, Cherkaev, and Fedorov introduced a new concept of convergence of design variables in a generalized sense and constructed the 'G-Closure' theory of an extended (relaxed) optimum design problem. A similar, though largely independent, attempt can also be found in Kohn and Strang, in which the shape and topology optimization problem is relaxed to allow the use of perforated composites rather than restricting it to the usual solid structures. An identical idea is also stated in Murat and Tartar using the notion of homogenization theory. That is, by introducing the possibility of micro-scale perforation together with the theory of homogenization, the optimum design problem is relaxed so that its mathematical theory can be constructed. It is also noted that this type of relaxed design problem is perfectly matched to the variational principle in structural mechanics.

  1. Comprehensive clinical assessment in community setting: applicability of the MDS-HC.

    PubMed

    Morris, J N; Fries, B E; Steel, K; Ikegami, N; Bernabei, R; Carpenter, G I; Gilgen, R; Hirdes, J P; Topinková, E

    1997-08-01

    To describe the results of an international trial of the home care version of the MDS assessment and problem identification system (the MDS-HC), including reliability estimates, a comparison of MDS-HC reliabilities with reliabilities of the same items in the MDS 2.0 nursing home assessment instrument, and an examination of the types of problems found in home care clients using the MDS-HC. Independent, dual assessment of clients of home-care agencies by trained clinicians using a draft of the MDS-HC, with additional descriptive data regarding problem profiles for home care clients. Reliability data from dual assessments of 241 randomly selected clients of home care agencies in five countries, all of whom volunteered to test the MDS-HC. Also included are an expanded sample of 780 home care assessments from these countries and 187 dually assessed residents from 21 nursing homes in the United States. The array of MDS-HC assessment items included measures in the following areas: personal items, cognitive patterns, communication/hearing, vision, mood and behavior, social functioning, informal support services, physical functioning, continence, disease diagnoses, health conditions and preventive health measures, nutrition/hydration, dental status, skin condition, environmental assessment, service utilization, and medications. Forty-seven percent of the functional, health status, social environment, and service items in the MDS-HC were taken from the MDS 2.0 for nursing homes. For this item set, it is estimated that the average weighted Kappa is .74 for the MDS-HC and .75 for the MDS 2.0. Similarly, high reliability values were found for items newly introduced in the MDS-HC (weighted Kappa = .70). Descriptive findings also characterize the problems of home care clients, with subanalyses within cognitive performance levels. Findings indicate that the core set of items in the MDS 2.0 work equally well in community and nursing home settings. New items are highly reliable. In tandem, these instruments can be used within the international community, assisting and planning care for older adults within a broad spectrum of service settings, including nursing homes and home care programs. With this community-based, second-generation problem and care plan-driven assessment instrument, disability assessment can be performed consistently across the world.
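
    As a small illustration of the agreement statistic reported above, the sketch below computes a linearly weighted kappa between two assessors for one ordinal item, using hypothetical ratings and scikit-learn's implementation as a stand-in for the study's exact weighting scheme.

        # Hedged sketch: inter-assessor agreement for one ordinal item via
        # linearly weighted kappa (hypothetical ratings, not study data).
        from sklearn.metrics import cohen_kappa_score

        assessor_1 = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]   # e.g., hypothetical ADL codes
        assessor_2 = [0, 1, 2, 3, 3, 1, 1, 2, 3, 1]

        kappa = cohen_kappa_score(assessor_1, assessor_2, weights="linear")
        print(f"weighted kappa = {kappa:.2f}")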

  2. Assessment Position Affects Problem-Solving Behaviors in a Child With Motor Impairments.

    PubMed

    OʼGrady, Michael G; Dusing, Stacey C

    2016-01-01

    The purpose of this report was to examine problem-solving behaviors of a child with significant motor impairments in positions she could maintain independently, in supine and prone positions, as well as a position that required support, sitting. The child was a 22-month-old girl who could not sit independently and had limited independent mobility. Her problem-solving behaviors were assessed using the Early Problem Solving Indicator, while she was placed in supine or prone position, and again in manually supported sitting position. In manually supported sitting position, the subject demonstrated a higher frequency of problem-solving behaviors and her most developmentally advanced problem-solving behavior. Because a child's position may affect cognitive test results, position should be documented at the time of testing.

  3. 6 Essential Questions for Problem Solving

    ERIC Educational Resources Information Center

    Kress, Nancy Emerson

    2017-01-01

    One of the primary expectations that the author has for her students is for them to develop greater independence when solving complex and unique mathematical problems. The story of how the author supports her students as they gain confidence and independence with complex and unique problem-solving tasks, while honoring their expectations with…

  4. Multi-modal data fusion using source separation: Two effective models based on ICA and IVA and their properties

    PubMed Central

    Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.

    2015-01-01

    Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second one is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
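
    A minimal sketch of the feature-concatenation workflow behind the Joint ICA model, using synthetic data and scikit-learn's FastICA as a generic ICA stand-in; the paper's algorithms, preprocessing and the question of which dimension is treated as statistically independent are not reproduced here.

        # Hedged sketch: feature-concatenate two modalities subject-wise and
        # decompose once, so each estimated component spans both modalities.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_subjects, d1, d2, n_comp = 60, 40, 30, 3

        # Hypothetical shared subject-level loadings mixed into two modalities.
        loadings = rng.normal(size=(n_subjects, n_comp))
        modality_1 = loadings @ rng.normal(size=(n_comp, d1)) + 0.1 * rng.normal(size=(n_subjects, d1))
        modality_2 = loadings @ rng.normal(size=(n_comp, d2)) + 0.1 * rng.normal(size=(n_subjects, d2))

        joint = np.hstack([modality_1, modality_2])        # subjects x (d1 + d2)
        ica = FastICA(n_components=n_comp, random_state=0)
        subject_scores = ica.fit_transform(joint)          # per-subject component scores
        joint_maps = ica.mixing_.T                          # component patterns over features
        maps_1, maps_2 = joint_maps[:, :d1], joint_maps[:, d1:]
        print(subject_scores.shape, maps_1.shape, maps_2.shape)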

  5. Investigating the effect of mental set on insight problem solving.

    PubMed

    Ollinger, Michael; Jones, Gary; Knoblich, Günther

    2008-01-01

    Mental set is the tendency to solve certain problems in a fixed way based on previous solutions to similar problems. The moment of insight occurs when a problem cannot be solved using solution methods suggested by prior experience and the problem solver suddenly realizes that the solution requires different solution methods. Mental set and insight have often been linked together and yet no attempt thus far has systematically examined the interplay between the two. Three experiments are presented that examine the extent to which sets of noninsight and insight problems affect the subsequent solutions of insight test problems. The results indicate a subtle interplay between mental set and insight: when the set involves noninsight problems, no mental set effects are shown for the insight test problems, yet when the set involves insight problems, both facilitation and inhibition can be seen depending on the type of insight problem presented in the set. A two-process model is detailed to explain these findings that combines the representational change mechanism with that of proceduralization.

  6. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    NASA Astrophysics Data System (ADS)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving a Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is usually chosen to solve this kind of problem in the assignment step, after which a Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship. The customer-to-customer relationship should also be considered, since this relationship exists in the routing step. Both relationships are handled here using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second fuzzy goal on a satisfactory level of the overall independence value. A case study is used to describe the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model that had previously been used in this company. The proposed model can reduce the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp values. Furthermore, the flexibility to adjust goals and an acceptable satisfactory level for the decision maker can also be increased, while the optimal solution can still be obtained.

  7. An evaluation of independent consumer assistance centers on problem resolution and user satisfaction: the consumer perspective.

    PubMed

    Nascimento, Lori Miller; Cousineau, Michael R

    2005-04-01

    Individuals who wish to receive independent assistance to resolve access to care health problems have limited options. The Health Consumer Alliance (HCA) is an independent, coordinated effort of nine legal services organizations that provide free assistance to low-income health consumers in 10 California counties. The need for the HCA stems from the vast number of health consumers with unanswered questions and unresolved problems relating to access to care issues, among both insured and uninsured populations. However, little is known about the effectiveness of independent consumer assistance centers. This paper examines the effectiveness of a network of independent consumer assistance programs in resolving consumer problems and consumers' level of satisfaction with services received. As the project evaluators, we conducted telephone surveys with 1,291 users of the HCA to assess if this independent program resolved consumer problems, and to measure the level of satisfaction among HCA users. Specifically, we asked questions about the HCA's influence on problem resolution, consumer satisfaction, health insurance status and use of preventive care services. From 1997 to 2001, more than 46,000 consumers contacted the seven health consumer centers (HCCs). According to our sample of respondents, results show that the HCCs are an important resource for low-income Californians trying to access health care. After contacting the HCCs, 62 percent of the participants report that their problems were resolved. In addition, 87 percent of the participants said the HCCs were helpful and 95 percent said they would be likely to contact the HCC again if necessary.

  8. Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard

    2004-09-01

    We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise to signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as affine camera model or homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms to show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.

  9. Life satisfaction and self-reported problems after spinal cord injury: measurement of underlying dimensions.

    PubMed

    Krause, James S; Reed, Karla S

    2009-08-01

    Evaluate the utility of the current 7-scale structure of the Life Situation Questionnaire-Revised (LSQ-R) using confirmatory factor analysis (CFA) and explore the factor structure of each set of items. Adults (N = 1,543) with traumatic spinal cord injury (SCI) were administered the 20 satisfaction and 30 problems items from the LSQ-R. CFA suggests that the existing 7-scale structure across the 50 items was within the acceptable range (root-mean-square error of approximation [RMSEA] = 0.078), although it fell just outside of this range for women. Factor analysis revealed 3 satisfaction factors and 6 problems factors. The overall fit of the problems items (RMSEA = 0.070) was superior to that of the satisfaction items (RMSEA = 0.080). RMSEA fell just outside of the acceptable range for Whites and men on the satisfaction scales. All scales had acceptable internal consistency. Results suggest the original scoring of the LSQ-R remains viable, although individual results should be reviewed for special populations. Factor analysis of subsets of items allows satisfaction and problems items to be used independently, depending on the study purpose. (c) 2009 APA

  10. Quantum Measurement, Correlation, and Contextuality

    NASA Astrophysics Data System (ADS)

    Ozawa, Masanao

    2011-03-01

    The problem has long been discussed as to whether non-commuting observables are simultaneously measurable, since Heisenberg introduced the uncertainty principle in 1927. The problem was settled state-independently: Two observables are simultaneously measurable in every state if and only if the corresponding operators commute. However, the problem has remained open for the state-dependent formulation. Saying that two observables are nowhere commuting if there exist no common eigenstates, the problem at stake is whether nowhere commuting observables can be simultaneously measurable in a certain state. There have been two historical arguments claiming the case: (i) In an eigenstate of an observable A one can determine both the values of A and of any other observable B. (ii) In an EPR state one can determine both the values of Q ⊗ 1 and P ⊗ 1. In this talk, we give a necessary and sufficient condition for two observables to be simultaneously measurable in a given state, show that the above two cases actually satisfy the required mathematical conditions, and give a classification of all the possible simultaneous measurements of nowhere commuting observables for the Hilbert space with dimension 2. Related problems on quantum contextuality will also be discussed using a linguistic method based on quantum logic and quantum set theory.

  11. On designing for quality

    NASA Technical Reports Server (NTRS)

    Vajingortin, L. D.; Roisman, W. P.

    1991-01-01

    The problem of ensuring the required quality of products and/or technological processes is often made more difficult by the fact that there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex, vital articles. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique for assigning tolerances for primary factors with regard to economic, technological, and other criteria, the technique being based on the solution of the main problem; and the well-reasoned assignment of nominal values for primary factors, which serve as the basis for creating tolerances. Each of the above tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem of quality assurance, in its mathematically formalized aspect, is called the multiple inverse problem.

  12. Effects of cluster location and cluster distribution on performance on the traveling salesman problem.

    PubMed

    MacGregor, James N

    2015-10-01

    Research on human performance in solving traveling salesman problems typically uses point sets as stimuli, and most models have proposed a processing stage at which stimulus dots are clustered. However, few empirical studies have investigated the effects of clustering on performance. In one recent study, researchers compared the effects of clustered, random, and regular stimuli, and concluded that clustering facilitates performance (Dry, Preiss, & Wagemans, 2012). Another study suggested that these results may have been influenced by the location rather than the degree of clustering (MacGregor, 2013). Two experiments are reported that mark an attempt to disentangle these factors. The first experiment tested several combinations of degree of clustering and cluster location, and revealed mixed evidence that clustering influences performance. In a second experiment, both factors were varied independently, showing that they interact. The results are discussed in terms of the importance of clustering effects, in particular, and perceptual factors, in general, during performance of the traveling salesman problem.

  13. Classification of brain MRI with big data and deep 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim

    2018-02-01

    Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimer's disease (AD). We use more than 20,000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, only with standard preprocessing, but without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets, and the independent CADDementia benchmark.
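
    A minimal PyTorch sketch of a small 3D convolutional classifier of the kind described, with three output classes for HC/MCI/AD; the layer sizes and the 64-cubed single-channel input are hypothetical and much smaller than the network actually trained in the study.

        # Hedged sketch of a small 3D CNN for volumetric classification.
        import torch
        import torch.nn as nn

        class Small3DCNN(nn.Module):
            def __init__(self, n_classes=3):        # HC / MCI / AD
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                )
                self.classifier = nn.Linear(32 * 8 * 8 * 8, n_classes)

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(start_dim=1))

        model = Small3DCNN()
        volume = torch.randn(2, 1, 64, 64, 64)      # batch of 2 hypothetical volumes
        print(model(volume).shape)                   # torch.Size([2, 3])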

  14. Exploring biorthonormal transformations of pair-correlation functions in atomic structure variational calculations

    NASA Astrophysics Data System (ADS)

    Verdebout, S.; Jönsson, P.; Gaigalas, G.; Godefroid, M.; Froese Fischer, C.

    2010-04-01

    Multiconfiguration expansions frequently target valence correlation and correlation between valence electrons and the outermost core electrons. Correlation within the core is often neglected. A large orbital basis is needed to saturate both the valence and core-valence correlation effects. This in turn leads to huge numbers of configuration state functions (CSFs), many of which are unimportant. To avoid the problems inherent to the use of a single common orthonormal orbital basis for all correlation effects in the multiconfiguration Hartree-Fock (MCHF) method, we propose to optimize independent MCHF pair-correlation functions (PCFs), bringing their own orthonormal one-electron basis. Each PCF is generated by allowing single- and double-excitations from a multireference (MR) function. This computational scheme has the advantage of using targeted and optimally localized orbital sets for each PCF. These pair-correlation functions are coupled together and with each component of the MR space through a low dimension generalized eigenvalue problem. Nonorthogonal orbital sets being involved, the interaction and overlap matrices are built using biorthonormal transformation of the coupled basis sets followed by a counter-transformation of the PCF expansions. Applied to the ground state of beryllium, the new method gives total energies that are lower than the ones from traditional complete active space (CAS)-MCHF calculations using large orbital active sets. It is fair to say that we now have the possibility to account for, in a balanced way, correlation deep down in the atomic core in variational calculations.
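
    The final coupling step described above amounts to a small generalized eigenvalue problem H c = E S c with a non-orthonormal (overlapping) basis. A toy illustration with invented 3x3 matrices, using SciPy's symmetric-definite solver (this is the generic linear-algebra step only, not the atomic-structure code):

        # Hedged sketch: solve H c = E S c for a small non-orthonormal basis.
        import numpy as np
        from scipy.linalg import eigh

        H = np.array([[-14.60, -0.12, -0.05],
                      [ -0.12, -14.55, -0.08],
                      [ -0.05,  -0.08, -14.40]])   # interaction matrix (symmetric)
        S = np.array([[1.00, 0.10, 0.02],
                      [0.10, 1.00, 0.05],
                      [0.02, 0.05, 1.00]])          # overlap matrix (positive definite)

        energies, coeffs = eigh(H, S)               # generalized symmetric solver
        print("lowest total energy:", energies[0])
        print("mixing coefficients:", coeffs[:, 0])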

  15. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
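
    As a schematic illustration of the kinds of pipelines compared (not the study's data or settings), the sketch below contrasts a filter-based pipeline (univariate selection followed by a linear SVM) with an embedded sparse method (L1-penalised logistic regression) on one split-half resample of synthetic high-dimensional data.

        # Hedged sketch: filter-based vs. embedded feature selection on one split.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        filter_svm = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
        embedded = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)

        for name, clf in [("filter + SVM", filter_svm), ("embedded L1", embedded)]:
            acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
            print(f"{name}: test accuracy = {acc:.2f}")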

  16. Engineering the future with America's high school students

    NASA Technical Reports Server (NTRS)

    Farrance, M. A.; Jenner, J. W.

    1993-01-01

    The number of students enrolled in engineering is declining while the need for engineers is increasing. One contributing factor is that most high school students have little or no knowledge about what engineering is, or what engineers do. To teach young students about engineering, engineers need good tools. This paper presents a course of study developed and used by the authors in a junior college course for high school students. Students learned about engineering through independent student projects, in-class problem solving, and use of career information resources. Selected activities from the course can be adapted to teach students about engineering in other settings. Among the most successful techniques were the student research paper assignments, working out a solution to an engineering problem as a class exercise, and the use of technical materials to illustrate engineering concepts and demonstrate 'tools of the trade'.

  17. A rapid method of toilet training the institutionalized retarded1

    PubMed Central

    Azrin, N. H.; Foxx, R. M.

    1971-01-01

    Incontinence is a major unsolved problem in the institutional care of the profoundly retarded. A reinforcement and social analysis of incontinence was used to develop a procedure that would rapidly toilet train retardates and motivate them to remain continent during the day in their ward setting. Nine profoundly retarded adults were given intensive training (median of four days per patient), the distinctive features of which were artificially increasing the frequency of urinations, positive reinforcement of correct toileting but a delay for “accidents”, use of new automatic apparatus for signalling elimination, shaping of independent toileting, cleanliness training, and staff reinforcement procedures. Incontinence was reduced immediately by about 90% and eventually decreased to near-zero. These results indicate the present procedure is an effective, rapid, enduring, and administratively feasible solution to the problem of incontinence of the institutionalized retarded. PMID:16795291

  18. Environmental urban runoff monitoring

    NASA Astrophysics Data System (ADS)

    Yu, Byunggu; Behera, Pradeep K.; Kim, Seon Ho; Ramirez Rochac, Juan F.; Branham, Travis

    2010-04-01

    Urban stormwater runoff has been a critical and chronic problem in the quantity and quality of receiving waters, resulting in a major environmental concern. To address this problem engineers and professionals have developed a number of solutions which include various monitoring and modeling techniques. The most fundamental issue in these solutions is accurate monitoring of the quantity and quality of the runoff from both combined and separated sewer systems. This study proposes a new water quantity monitoring system, based on recent developments in sensor technology. Rather than using a single independent sensor, we harness an intelligent sensor platform that integrates various sensors, a wireless communication module, data storage, a battery, and processing power such that more comprehensive, efficient, and scalable data acquisition becomes possible. Our experimental results show the feasibility and applicability of such a sensor platform in the laboratory test setting.

  19. User's manual for three dimensional FDTD version C code for scattering from frequency-independent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three-dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain (FDTD) technique. The supplied version of the code is one version of our current three-dimensional FDTD code set. The manual given here provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing radar cross section computations, a section discussing some scattering results, a new problem checklist, references, and figure titles.
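
    For orientation, a minimal one-dimensional illustration of the leapfrog update at the heart of the FDTD technique (normalised free-space units, hypothetical Gaussian source); the Version C code itself implements the full three-dimensional scheme with material and boundary treatments not shown here.

        # Hedged sketch: 1D FDTD leapfrog update in normalised units.
        import numpy as np

        nz, steps = 200, 400
        ez, hy = np.zeros(nz), np.zeros(nz)

        for n in range(steps):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])              # H update (Courant number 0.5)
            ez[1:]  += 0.5 * (hy[1:] - hy[:-1])              # E update
            ez[nz // 2] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

        print("peak |Ez| =", np.abs(ez).max())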

  20. Finite-time and fixed-time synchronization analysis of inertial memristive neural networks with time-varying delays.

    PubMed

    Wei, Ruoyu; Cao, Jinde; Alsaedi, Ahmed

    2018-02-01

    This paper investigates the finite-time synchronization and fixed-time synchronization problems of inertial memristive neural networks with time-varying delays. By utilizing the Filippov discontinuous theory and Lyapunov stability theory, several sufficient conditions are derived to ensure finite-time synchronization of inertial memristive neural networks. Then, for the purpose of making the settling time independent of the initial condition, we consider the fixed-time synchronization. A novel criterion guaranteeing the fixed-time synchronization of inertial memristive neural networks is derived. Finally, three examples are provided to demonstrate the effectiveness of our main results.
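
    For context, one commonly used fixed-time stability bound of the Polyakov type is stated below in general form; it is an assumption that the paper's criterion belongs to this family, and its exact conditions differ. If a Lyapunov function $V$ of the synchronization error satisfies $\dot{V}(t) \le -\alpha V(t)^{p} - \beta V(t)^{q}$ with $\alpha, \beta > 0$, $0 < p < 1$ and $q > 1$, then $V$ reaches zero within a settling time bounded by $T \le \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)}$, a bound that does not depend on the initial condition $V(0)$; in finite-time synchronization, by contrast, the settling-time estimate grows with the initial error.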

  1. Phase-synchronisation in continuous flow models of production networks

    NASA Astrophysics Data System (ADS)

    Scholz-Reiter, Bernd; Tervo, Jan Topi; Freitag, Michael

    2006-04-01

    To improve their position in the market, many companies concentrate on their core competences and hence cooperate with suppliers and distributors. Thus, strong linkages develop between many independent companies, and production and logistics networks emerge. These networks are characterised by permanently increasing complexity, and are nowadays forced to adapt to dynamically changing markets. This complicates enterprise-spanning production planning and control enormously. Therefore, a continuous flow model for production networks will be derived with regard to these special logistic problems. Furthermore, phase-synchronisation effects will be presented and their dependencies on the set of network parameters will be investigated.

  2. On the detection of pornographic digital images

    NASA Astrophysics Data System (ADS)

    Schettini, Raimondo; Brambilla, Carla; Cusano, Claudio; Ciocca, Gianluigi

    2003-06-01

    The paper addresses the problem of distinguishing between pornographic and non-pornographic photographs, for the design of semantic filters for the web. Both decision forests of trees built according to the CART (Classification And Regression Trees) methodology and Support Vector Machines (SVMs) have been used to perform the classification. The photographs are described by a set of low-level features that can be automatically computed simply from the gray-level and color representations of the image. The database used in our experiments contained 1500 photographs, 750 of which were labeled as pornographic on the basis of the independent judgement of several viewers.
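
    A schematic sketch of the general workflow (simple low-level color statistics fed to an SVM), using synthetic stand-in images rather than the paper's feature set or photograph database:

        # Hedged sketch: low-level color features plus an SVM classifier.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def low_level_features(img):
            # img: H x W x 3 array in [0, 1]; per-channel means and standard
            # deviations plus overall gray-level statistics.
            gray = img.mean(axis=2)
            return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1)),
                                   [gray.mean(), gray.std()]])

        # Two synthetic image groups with different color statistics.
        imgs_a = rng.random((50, 32, 32, 3)) * [0.9, 0.6, 0.5]
        imgs_b = rng.random((50, 32, 32, 3)) * [0.5, 0.7, 0.9]
        X = np.array([low_level_features(im) for im in np.concatenate([imgs_a, imgs_b])])
        y = np.array([1] * 50 + [0] * 50)

        print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())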

  3. Formal Verification of Large Software Systems

    NASA Technical Reports Server (NTRS)

    Yin, Xiang; Knight, John

    2010-01-01

    We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.

  4. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.

  5. Interaction Analysis of Longevity Interventions Using Survival Curves

    PubMed Central

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-01

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622

  6. Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes

    PubMed Central

    Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur

    2015-01-01

    The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module, such that it guarantees complete preservation of facts about a given set of individuals, and thus can be reasoned independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be improved significantly. PMID:26848490

  7. Independence of the uniformity principle from Church's thesis in intuitionistic set theory

    NASA Astrophysics Data System (ADS)

    Khakhanyan, V. Kh

    2013-12-01

    We prove the independence of the strong uniformity principle from Church's thesis with choice in intuitionistic set theory with the axiom of extensionality extended by Markov's principle and the double complement for sets.

  8. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
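
    A generic sketch of the Newton-Raphson iteration applied to each local subproblem (a toy two-equation mismatch system, not an actual power-flow model or the decomposition-coordination code):

        # Hedged sketch: generic Newton-Raphson iteration, solving f(x) = 0 by
        # repeatedly linearizing about the current estimate.
        import numpy as np

        def newton_raphson(f, jac, x0, tol=1e-8, max_iter=20):
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                step = np.linalg.solve(jac(x), -f(x))   # solve J(x) dx = -f(x)
                x = x + step
                if np.linalg.norm(step) < tol:
                    break
            return x

        # Toy mismatch equations standing in for a local subproblem (hypothetical):
        f = lambda x: np.array([x[0]**2 + x[1] - 1.1, x[0] - x[1]**2 - 0.2])
        jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])
        print(newton_raphson(f, jac, x0=[1.0, 1.0]))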

  9. Patient satisfaction with E-Oral Health care in rural and remote settings: a systematic review protocol.

    PubMed

    Emami, Elham; Kadoch, Naomi; Homayounfar, Sara; Harnagea, Hermina; Dupont, Patrice; Giraudeau, Nicolas; Mariño, Rodrigo

    2017-08-29

    Individuals living in rural and remote settings face oral health problems and access-to-care barriers due to the shortage of oral health care providers in these areas, geographic remoteness, lack of appropriate infrastructure and lower socio-economic status. E-Oral Health technology could mitigate these barriers by providing the delivery of some aspects of health care and exchange of information across geographic distances. This review will systematically evaluate the literature on patient satisfaction with received E-Oral Health care in rural and remote communities. This systematic review will include interventional and observational studies in which E-Oral Health technology is used as an intervention in rural and remote communities of any country worldwide. Conventional oral health care will be used as a comparator when provided. Patient satisfaction with received E-Oral Health care will be considered as a primary outcome for this review. Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE and Global Health will be searched using a comprehensive search strategy. Two review authors will independently screen results to identify potentially eligible studies and independently extract the data from the included studies. A third author will resolve any discrepancies between reviewers. Two independent researchers will assess the risk of bias and the Grading of Recommendations Assessment, Development, and Evaluation. The potential implications and benefits of E-Oral Health care can inform policymakers and health care professionals to take advantage of this technology to address health care challenges in these areas. PROSPERO CRD42016039942 .

  10. A longitudinal comparative study of the physical and mental health problems of affected residents of the firework disaster Enschede, The Netherlands.

    PubMed

    Grievink, L; van der Velden, P G; Stellato, R K; Dusseldorp, A; Gersons, B P R; Kleber, R J; Lebret, E

    2007-05-01

    After the firework disaster in Enschede, The Netherlands, on 13 May 2000, a longitudinal health study was carried out. Study questions were: (1) did the health status change over this period; and (2) how is the health status 18 months after the disaster compared with controls? A longitudinal comparative study with two surveys at 3 weeks and 18 months after the disaster. A control group for the affected residents was included in the second survey. Respondents filled in a set of validated questionnaires measuring their physical and mental health problems. The prevalence of physical and emotional role limitations, severe sleeping problems, feelings of depression and anxiety, as well as intrusion and avoidance decreased from 3 weeks to 18 months after the disaster for the affected residents. Independent of background characteristics and other life events, residents had 1.5 to three times more health problems than the control group; for example, physical role limitations (odds ratio [OR]=1.5, 95% confidence interval [CI] 1.2-2.0) and anxiety (OR=3.1, 95% CI 2.4-4.2). Although health problems decreased compared with 3 weeks after the disaster, 18 months after the disaster, the affected residents had more health problems than the people from the control group.

  11. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks.

    PubMed

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney J

    2018-05-01

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the planar four-color graph coloring, maximum independent set, and Sudoku CSPs on this substrate and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.

  12. Generation and use of observational data patterns in the evaluation of data quality for AmeriFlux and FLUXNET

    NASA Astrophysics Data System (ADS)

    Pastorello, G.; Agarwal, D.; Poindexter, C.; Papale, D.; Trotta, C.; Ribeca, A.; Canfora, E.; Faybishenko, B.; Gunter, D.; Chu, H.

    2015-12-01

    The flux-measuring sites that are part of AmeriFlux are operated and maintained in a fairly independent fashion, both in terms of scientific goals and operational practices. This is also the case for most sites from other networks in FLUXNET. This independence leads to a degree of heterogeneity in the data sets collected at the sites, which is also reflected in data quality levels. The generation of derived data products and data synthesis efforts, two of the main goals of these networks, are directly affected by the heterogeneity in data quality. In a collaborative effort between AmeriFlux and ICOS, a series of quality checks are being conducted for the data sets before any network-level data processing and product generation take place. From these checks, a set of common data issues was identified, and these issues are being cataloged and classified into data quality patterns. These patterns are now being used as a basis for implementing automation for certain data quality checks, speeding up the process of applying the checks and evaluating the data. Currently, most data checks are performed individually on each data set, requiring visual inspection and inputs from a data curator. This manual process makes it difficult to scale the quality checks, creating a bottleneck for the data processing. One goal of the automated checks is to free up data curators' time so they can focus on new or less common issues. As new issues are identified, they can also be cataloged and classified, extending the coverage of existing patterns or potentially generating new patterns, helping both improve existing automated checks and create new ones. This approach is helping make data quality evaluation faster, more systematic, and reproducible. Furthermore, these patterns are also helping with documenting common causes and solutions for data problems. This can help tower teams with diagnosing problems in data collection and processing, and also in correcting historical data sets. In this presentation, using AmeriFlux flux and micrometeorological data, we discuss our approach to creating observational data patterns, and how we are using them to implement new automated checks. We also detail examples of these observational data patterns, illustrating how they are being used.
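
    As an illustration of the kind of automated check that such cataloged patterns can drive, the sketch below flags out-of-range values and flat-lined (stuck-sensor) stretches in a half-hourly series. The variable, plausible range, and window length are hypothetical choices, not AmeriFlux/FLUXNET specifications.

    ```python
    import numpy as np

    def flag_quality_issues(values, valid_range=(-100.0, 100.0), flatline_window=8):
        """Return a boolean mask of suspect samples in a flux time series.

        Two example data quality patterns are encoded here:
          * out-of-range values (physically implausible magnitudes), and
          * flat-lined stretches, where the same value repeats over
            `flatline_window` consecutive records.
        """
        values = np.asarray(values, dtype=float)
        flags = (values < valid_range[0]) | (values > valid_range[1])
        # Flag runs of identical consecutive values (a common stuck-sensor pattern).
        for start in range(len(values) - flatline_window + 1):
            window = values[start:start + flatline_window]
            if np.all(window == window[0]):
                flags[start:start + flatline_window] = True
        return flags

    # Hypothetical half-hourly record with one spike and one flat stretch.
    series = [2.1, 2.3, 250.0, 2.0, 1.9, 1.9, 1.9, 1.9, 1.9, 1.9, 1.9, 1.9, 2.2]
    print(flag_quality_issues(series).astype(int))
    ```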

  13. Prison Health Care Governance: Guaranteeing Clinical Independence

    PubMed Central

    Pont, Jörg; Enggist, Stefan; Stöver, Heino; Williams, Brie; Greifinger, Robert

    2018-01-01

    Clinical independence is an essential component of good health care and health care professionalism, particularly in correctional settings (jails, prisons, and other places of detention), where the relationship between patients and caregivers is not based on free choice and where the punitive correctional setting can challenge optimal medical care. Independence for the delivery of health care services is defined by international standards as a critical element for quality health care in correctional settings, yet many correctional facilities do not meet these standards because of a lack of awareness, persisting legal regulations, contradictory terms of employment for health professionals, or current health care governance structures. We present recommendations for the implementation of independent health care in correctional settings. PMID:29470125

  14. Gender Differences in Solving Mathematics Problems among Two-Year College Students in a Developmental Algebra Class and Related Factors.

    ERIC Educational Resources Information Center

    Schonberger, Ann K.

    A study was conducted at the University of Maine at Orono (UMO) to examine gender differences with respect to mathematical problem-solving ability, visual spatial ability, abstract reasoning ability, field independence/dependence, independent learning style, and developmental problem-solving ability (i.e., formal reasoning ability). Subjects…

  15. Inversion of geophysical potential field data using the finite element method

    NASA Astrophysics Data System (ADS)

    Lamichhane, Bishnu P.; Gross, Lutz

    2017-12-01

    The inversion of geophysical potential field data can be formulated as an optimization problem with a constraint in the form of a partial differential equation (PDE). It is common practice, if possible, to provide an analytical solution for the forward problem and to reduce the problem to a finite dimensional optimization problem. In an alternative approach, the optimization is applied to the continuous problem, and the resulting problem, which is defined by a set of coupled PDEs, is subsequently solved using a standard PDE discretization method, such as the finite element method (FEM). In this paper, we show that under very mild conditions on the data misfit functional and the forward problem in the three-dimensional space, the continuous optimization problem and its FEM discretization are well-posed, including the existence and uniqueness of respective solutions. We provide error estimates for the FEM solution. A main result of the paper is that the FEM spaces used for the forward problem and the Lagrange multiplier need to be identical but can be chosen independently from the FEM space used to represent the unknown physical property. We demonstrate the convergence of the solution approximations in a numerical example. The second numerical example, which investigates the selection of FEM spaces, shows that from the perspective of computational efficiency one should use a 2 to 4 times finer mesh for the forward problem in comparison to the mesh of the physical property.
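
    As a hedged sketch of the general structure of such a PDE-constrained inversion (the notation is generic and does not reproduce the paper's exact functionals):

    ```latex
    \min_{m,\,u}\; J(m,u) \;=\; \tfrac{1}{2}\bigl\| d_{\mathrm{obs}} - S\,u \bigr\|^{2}
                      \;+\; \tfrac{\alpha}{2}\,\mathcal{R}(m)
    \qquad \text{subject to} \qquad A(m)\,u = q ,
    ```

    where u is the potential field predicted by the forward PDE A(m)u = q, m is the unknown physical property, S is an observation operator, and R(m) is a regularization term. Introducing a Lagrange multiplier for the PDE constraint yields the coupled system of PDEs that is then discretized with the FEM, which is where the choice of (possibly different) FEM spaces for u, the multiplier, and m enters.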

  16. Effects of self-graphing and goal setting on the math fact fluency of students with disabilities.

    PubMed

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure.

  17. Coordinate transformation by minimizing correlations between parameters

    NASA Technical Reports Server (NTRS)

    Kumar, M.

    1972-01-01

    The purpose of this investigation was to determine the transformation parameters (three rotations, three translations and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. The above objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters from a solution in which the rotations and the scale factor are entered as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
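
    A small numerical sketch of the translation-invariant step mentioned above: chord distances between the common points are unaffected by the translation (and by the rotation), so the scale factor can be estimated from their ratios before the remaining parameters are solved for. The rotation matrix and point sets below are made up for illustration; the estimation of the rotations from direction cosines and the subsequent weighted-constraint adjustment are not reproduced.

    ```python
    import numpy as np

    def estimate_scale_from_chords(pts_a, pts_b):
        """Estimate the scale factor from ratios of chord distances between
        common points; chords are independent of the translation parameters."""
        ratios = []
        for i in range(len(pts_a)):
            for j in range(i + 1, len(pts_a)):
                ratios.append(np.linalg.norm(pts_b[i] - pts_b[j]) /
                              np.linalg.norm(pts_a[i] - pts_a[j]))
        return float(np.mean(ratios))

    def apply_similarity(points, scale, rotation, translation):
        """Seven-parameter (similarity) transform: x_b = t + s * R @ x_a."""
        return translation + scale * points @ rotation.T

    # Hypothetical example: frame B is frame A rotated about z, scaled, shifted.
    theta = np.deg2rad(5.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    pts_a = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 80.0, 50.0]])
    pts_b = apply_similarity(pts_a, 1.000005, R, np.array([10.0, -5.0, 3.0]))
    print(estimate_scale_from_chords(pts_a, pts_b))  # ~1.000005
    ```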

  18. A Geometrical-Statistical Approach to Outlier Removal for TDOA Measurements

    NASA Astrophysics Data System (ADS)

    Compagnoni, Marco; Pini, Alessia; Canclini, Antonio; Bestagini, Paolo; Antonacci, Fabio; Tubaro, Stefano; Sarti, Augusto

    2017-08-01

    The curse of outlier measurements in estimation problems is a well-known issue in a variety of fields. Therefore, outlier removal procedures, which enable the identification of spurious measurements within a set, have been developed for many different scenarios and applications. In this paper, we propose a statistically motivated outlier removal algorithm for time differences of arrival (TDOAs), or equivalently range differences (RD), acquired at sensor arrays. The method exploits the TDOA-space formalism and requires knowledge only of the relative sensor positions. As the proposed method is completely independent of the application for which measurements are used, it can be reliably used to identify outliers within a set of TDOA/RD measurements in different fields (e.g. acoustic source localization, sensor synchronization, radar, remote sensing, etc.). The proposed outlier removal algorithm is validated by means of synthetic simulations and real experiments.

  19. Electromagnetic beam diffraction by a finite lamellar structure: an aperiodic coupled-wave method.

    PubMed

    Guizal, Brahim; Barchiesi, Dominique; Felbacq, Didier

    2003-12-01

    We have developed a new formulation of the coupled-wave method (CWM) to handle aperiodic lamellar structures, and it will be referred to as the aperiodic coupled-wave method (ACWM). The space is still divided into three regions, but the fields are written by use of their Fourier integrals instead of the Fourier series. In the modulated region the relative permittivity is represented by its Fourier transform, and then a set of integro-differential equations is derived. Discretizing the last system leads to a set of ordinary differential equations that is reduced to an eigenvalue problem, as is usually done in the CWM. To assess the method, we compare our results with three independent formalisms: the Rayleigh perturbation method for small samples, the volume integral method, and the finite-element method.

  20. What you should know about land-cover data

    USGS Publications Warehouse

    Gallant, Alisa L.

    2009-01-01

    Wildlife biologists are using land-characteristics data sets for a variety of applications. Many kinds of landscape variables have been characterized and the resultant data sets or maps are readily accessible. Often, too little consideration is given to the accuracy or traits of these data sets, most likely because biologists do not know how such data are compiled and rendered, or the potential pitfalls that can be encountered when applying these data. To increase understanding of the nature of land-characteristics data sets, I introduce aspects of source information and data-handling methodology that include the following: ambiguity of land characteristics; temporal considerations and the dynamic nature of the landscape; type of source data versus landscape features of interest; data resolution, scale, and geographic extent; data entry and positional problems; rare landscape features; and interpreter variation. I also include guidance for determining the quality of land-characteristics data sets through metadata or published documentation, visual clues, and independent information. The quality or suitability of the data sets for wildlife applications may be improved with thematic or spatial generalization, avoidance of transitional areas on maps, and merging of multiple data sources. Knowledge of the underlying challenges in compiling such data sets will help wildlife biologists to better assess the strengths and limitations and determine how best to use these data.

  1. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

    PubMed Central

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    2016-01-01

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results. PMID:26778864
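
    For orientation, in the linear case the objective reduces to the classical generalized Rayleigh quotient (shown below in generic notation; the quadratic, sparse version studied in the paper replaces the linear projection w'x with a quadratic f(x) and uses elliptical-model moment estimates):

    ```latex
    \max_{\mathbf{w}\neq 0}\;
    \frac{\mathbf{w}^{\top} A\,\mathbf{w}}{\mathbf{w}^{\top} B\,\mathbf{w}} ,
    ```

    where A and B play the roles of between-class and within-class scatter; for this linear form the maximizer coincides with Fisher's discriminant direction, which is the sense in which Rayleigh quotient optimization and classification coincide in the linear setting.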

  2. Comment on “Rethinking first-principles electron transport theories with projection operators: The problems caused by partitioning the basis set” [J. Chem. Phys. 139, 114104 (2013)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Mads, E-mail: mads.brandbyge@nanotech.dtu.dk

    2014-05-07

    In a recent paper Reuter and Harrison [J. Chem. Phys. 139, 114104 (2013)] question the widely used mean-field electron transport theories, which employ nonorthogonal localized basis sets. They claim these can violate an “implicit decoupling assumption,” leading to wrong results for the current, different from what would be obtained by using an orthogonal basis, and dividing surfaces defined in real-space. We argue that this assumption is not required to be fulfilled to get exact results. We show how the current/transmission calculated by the standard Green's function method is independent of whether or not the chosen basis set is nonorthogonal, and that the current for a given basis set is consistent with divisions in real space. The ambiguity known from charge population analysis for nonorthogonal bases does not carry over to calculations of charge flux.

  3. Introduction: demography and cultural macroevolution.

    PubMed

    Steele, James; Shennan, Stephen

    2009-04-01

    The papers in this special issue of Human Biology, which derive from a conference sponsored by the Arts and Humanities Research Council (AHRC) Center for the Evolution of Cultural Diversity, lay some of the foundations for an empirical macroevolutionary analysis of cultural dynamics. Our premise here is that cultural dynamics, including the stability of traditions and the rate of origination of new variants, are influenced by independently occurring demographic processes (population size, structure, and distribution as these vary over time as a result of changes in rates of fertility, mortality, and migration). The contributors focus on three sets of problems relevant to empirical studies of cultural macroevolution: large-scale reconstruction of past population dynamics from archaeological and genetic data; juxtaposition of models and evidence of cultural dynamics using large-scale archaeological and historical data sets; and juxtaposition of models and evidence of cultural dynamics from large-scale linguistic data sets. In this introduction we outline some of the theoretical and methodological issues and briefly summarize the individual contributions.

  4. Peer interactions of normal and attention-deficit-disordered boys during free-play, cooperative task, and simulated classroom situations.

    PubMed

    Cunningham, C E; Siegel, L S

    1987-06-01

    Groups of 30 ADD-H boys and 90 normal boys were divided into 30 mixed dyads composed of a normal and an ADD-H boy, and 30 normal dyads composed of 2 normal boys. Dyads were videotaped interacting in 15-minute free-play, 15-minute cooperative task, and 15-minute simulated classroom settings. Mixed dyads engaged in more controlling interaction than normal dyads in both free-play and simulated classroom settings. In the simulated classroom, mixed dyads completed fewer math problems and were less compliant with the commands of peers. ADD-H children spent less simulated classroom time on task and scored lower on drawing tasks than normal peers. Older dyads proved less controlling, more compliant with peer commands, more inclined to play and work independently, less active, and more likely to remain on task during the cooperative task and simulated classroom settings. Results suggest that the ADD-H child prompts a more controlling, less cooperative pattern of responses from normal peers.

  5. Effects of Flipped Learning Using Online Materials in a Surgical Nursing Practicum: A Pilot Stratified Group-Randomized Trial

    PubMed Central

    Lee, Myung Kyung

    2018-01-01

    Objectives: This study examined the effect of flipped learning in comparison to traditional learning in a surgical nursing practicum. Methods: The subjects of this study were 102 nursing students in their third year of university who were scheduled to complete a clinical nursing practicum in an operating room or surgical unit. Participants were randomly assigned to either a flipped learning group (n = 51) or a traditional learning group (n = 51) for the 1-week, 45-hour clinical nursing practicum. The flipped-learning group completed independent e-learning lessons on surgical nursing and received a brief orientation prior to the commencement of the practicum, while the traditional-learning group received a face-to-face orientation and on-site instruction. After the completion of the practicum, both groups completed a case study and a conference. The student's self-efficacy, self-leadership, and problem-solving skills in clinical practice were measured both before and after the one-week surgical nursing practicum. Results: Participants' independent goal setting and evaluation of beliefs and assumptions for the subscales of self-leadership and problem-solving skills were compared for the flipped learning group and the traditional learning group. The results showed greater improvement on these indicators for the flipped learning group in comparison to the traditional learning group. Conclusions: The flipped learning method might offer more effective e-learning opportunities in terms of self-leadership and problem-solving than the traditional learning method in surgical nursing practicums. PMID:29503755

  6. Effects of Flipped Learning Using Online Materials in a Surgical Nursing Practicum: A Pilot Stratified Group-Randomized Trial.

    PubMed

    Lee, Myung Kyung; Park, Bu Kyung

    2018-01-01

    This study examined the effect of flipped learning in comparison to traditional learning in a surgical nursing practicum. The subjects of this study were 102 nursing students in their third year of university who were scheduled to complete a clinical nursing practicum in an operating room or surgical unit. Participants were randomly assigned to either a flipped learning group (n = 51) or a traditional learning group (n = 51) for the 1-week, 45-hour clinical nursing practicum. The flipped-learning group completed independent e-learning lessons on surgical nursing and received a brief orientation prior to the commencement of the practicum, while the traditional-learning group received a face-to-face orientation and on-site instruction. After the completion of the practicum, both groups completed a case study and a conference. The student's self-efficacy, self-leadership, and problem-solving skills in clinical practice were measured both before and after the one-week surgical nursing practicum. Participants' independent goal setting and evaluation of beliefs and assumptions for the subscales of self-leadership and problem-solving skills were compared for the flipped learning group and the traditional learning group. The results showed greater improvement on these indicators for the flipped learning group in comparison to the traditional learning group. The flipped learning method might offer more effective e-learning opportunities in terms of self-leadership and problem-solving than the traditional learning method in surgical nursing practicums.

  7. Role of Parent and Peer Relationships and Individual Characteristics in Middle School Children's Behavioral Outcomes in the Face of Community Violence

    PubMed Central

    Salzinger, Suzanne; Rosario, Margaret; Feldman, Richard S.; Ng-Mak, Daisy S.

    2010-01-01

    This study examines processes linking inner-city community violence exposure to subsequent internalizing and externalizing problems. Hypothesized risk and protective factors from three ecological domains -- children's parent and peer relationships and individual characteristics -- were examined for mediating, moderating or independent roles in predicting problem behavior among 667 children over three years of middle school. Mediation was not found. However, parent and peer variables moderated the association between exposure and internalizing problems. Under high exposure, normally protective factors (e.g., attachment to parents) were less effective in mitigating exposure's effects than under low exposure; attachment to friends was more effective. Individual competence was independently associated with decreased internalizing problems. Variables from all domains, and exposure, were independently associated with externalizing problems. Protective factors (e.g., parent attachment) predicted decreased problems; risk factors (e.g., friends' delinquency) predicted increased problems. Results indicate community violence reduction as essential in averting inner-city adolescents' poor behavioral outcomes. PMID:21643493

  8. Hash Bit Selection for Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Junfeng He; Shih-Fu Chang

    2017-11-01

    To overcome the barrier of storage and computation when dealing with gigantic-scale data sets, compact hashing has been studied extensively to approximate the nearest neighbor search. Despite the recent advances, critical design issues remain open in how to select the right features, hashing algorithms, and/or parameter settings. In this paper, we address these by posing an optimal hash bit selection problem, in which an optimal subset of hash bits are selected from a pool of candidate bits generated by different features, algorithms, or parameters. Inspired by the optimization criteria used in existing hashing algorithms, we adopt the bit reliability and their complementarity as the selection criteria that can be carefully tailored for hashing performance in different tasks. Then, the bit selection solution is discovered by finding the best tradeoff between search accuracy and time using a modified dynamic programming method. To further reduce the computational complexity, we employ the pairwise relationship among hash bits to approximate the high-order independence property, and formulate it as an efficient quadratic programming method that is theoretically equivalent to the normalized dominant set problem in a vertex- and edge-weighted graph. Extensive large-scale experiments have been conducted under several important application scenarios of hash techniques, where our bit selection framework can achieve superior performance over both the naive selection methods and the state-of-the-art hashing algorithms, with significant accuracy gains ranging from 10% to 50%, relatively.

  9. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
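
    The core feasibility question can be written compactly (generic notation, a sketch rather than the paper's formulation): a hard inequality constraint g(d, p) <= 0 on a design d must hold for every parameter realization p in the uncertainty set Delta, which is equivalent to a worst-case condition,

    ```latex
    g(d,p) \le 0 \;\; \forall\, p \in \Delta
    \qquad\Longleftrightarrow\qquad
    \max_{p \in \Delta}\, g(d,p) \;\le\; 0 ,
    ```

    with Delta either a hyper-sphere (norm-bounded perturbation about the nominal parameter) or a hyper-rectangle (independently bounded components); a maximizer of the left-hand side identifies a critical combination of uncertain parameters.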

  10. Second-order asymptotics for quantum hypothesis testing in settings beyond i.i.d. - quantum lattice systems and more

    NASA Astrophysics Data System (ADS)

    Datta, Nilanjana; Pautrat, Yan; Rouzé, Cambyse

    2016-06-01

    Quantum Stein's lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ⊗n or σ⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability αn of erroneously inferring the state to be σ, the probability βn of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
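
    In one common convention, the i.i.d. second-order expansion referred to here reads (a sketch for orientation; the paper's statements for non-i.i.d. settings are more general):

    ```latex
    -\log \beta_n(\varepsilon)
    \;=\; n\,D(\rho\|\sigma)
    \;+\; \sqrt{n\,V(\rho\|\sigma)}\;\Phi^{-1}(\varepsilon)
    \;+\; O(\log n) ,
    ```

    where D(ρ‖σ) is the quantum relative entropy, V(ρ‖σ) the relative entropy variance, Φ⁻¹ the inverse standard normal distribution function, and ε the allowed probability of the first kind of error (the bound on α_n).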

  11. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  12. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  13. Using concept maps and goal-setting to support the development of self-regulated learning in a problem-based learning curriculum.

    PubMed

    Thomas, Lisa; Bennett, Sue; Lockyer, Lori

    2016-09-01

    Problem-based learning (PBL) in medical education focuses on preparing independent learners for continuing, self-directed, professional development beyond the classroom. Skills in self-regulated learning (SRL) are important for success in PBL and ongoing professional practice. However, the development of SRL skills is often left to chance. This study presents the investigated outcomes for students when support for the development of SRL was embedded in a PBL medical curriculum. This investigation involved design, delivery and testing of SRL support, embedded into the first phase of a four-year, graduate-entry MBBS degree. The intervention included concept mapping and goal-setting activities through iterative processes of planning, monitoring and reflecting on learning. A mixed-methods approach was used to collect data from seven students to develop case studies of engagement with, and outcomes from, the SRL support. The findings indicate that students who actively engaged with support for SRL demonstrated increases in cognitive and metacognitive functioning. Students also reported a greater sense of confidence in and control over their approaches to learning in PBL. This study advances understanding about how the development of SRL can be integrated into PBL.

  14. Design and Analysis Techniques for Concurrent Blackboard Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mcmanus, John William

    1992-01-01

    Blackboard systems are a natural progression of knowledge-based systems into a more powerful problem solving technique. They provide a way for several highly specialized knowledge sources to cooperate to solve large, complex problems. Blackboard systems incorporate the concepts developed by rule-based and expert systems programmers and include the ability to add conventionally coded knowledge sources. The small and specialized knowledge sources are easier to develop and test, and can be hosted on hardware specifically suited to the task that they are solving. The Formal Model for Blackboard Systems was developed to provide a consistent method for describing a blackboard system. A set of blackboard system design tools has been developed and validated for implementing systems that are expressed using the Formal Model. The tools are used to test and refine a proposed blackboard system design before the design is implemented. My research has shown that the level of independence and specialization of the knowledge sources directly affects the performance of blackboard systems. Using the design, simulation, and analysis tools, I developed a concurrent object-oriented blackboard system that is faster, more efficient, and more powerful than existing systems. The use of the design and analysis tools provided the highly specialized and independent knowledge sources required for my concurrent blackboard system to achieve its design goals.
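
    A minimal, illustrative blackboard skeleton (a sketch only; it is not the dissertation's Formal Model nor its concurrent implementation): independent knowledge sources watch a shared blackboard and contribute partial results whenever their trigger conditions hold.

    ```python
    class Blackboard:
        """Shared data store that knowledge sources read from and write to."""
        def __init__(self):
            self.data = {}

    class KnowledgeSource:
        """A specialized solver with a trigger condition and an action."""
        def __init__(self, name, condition, action):
            self.name, self.condition, self.action = name, condition, action

        def try_contribute(self, bb):
            if self.condition(bb.data):
                self.action(bb.data)
                return True
            return False

    def control_loop(bb, sources, max_cycles=10):
        """Simple control shell: let any triggered knowledge source contribute,
        and stop when no source has anything left to add (quiescence)."""
        for _ in range(max_cycles):
            if not any(ks.try_contribute(bb) for ks in sources):
                break

    # Toy usage with two hypothetical knowledge sources cooperating on a result.
    bb = Blackboard()
    bb.data["raw"] = [3, 1, 2]
    sources = [
        KnowledgeSource("sorter",
                        lambda d: "raw" in d and "sorted" not in d,
                        lambda d: d.update(sorted=sorted(d["raw"]))),
        KnowledgeSource("summarizer",
                        lambda d: "sorted" in d and "summary" not in d,
                        lambda d: d.update(summary=sum(d["sorted"]))),
    ]
    control_loop(bb, sources)
    print(bb.data)  # {'raw': [3, 1, 2], 'sorted': [1, 2, 3], 'summary': 6}
    ```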

  15. Influence of Cognitive Functioning on Age-Related Performance Declines in Visuospatial Sequence Learning.

    PubMed

    Krüger, Melanie; Hinder, Mark R; Puri, Rohan; Summers, Jeffery J

    2017-01-01

    Objectives: The aim of this study was to investigate how age-related performance differences in a visuospatial sequence learning task relate to age-related declines in cognitive functioning. Method: Cognitive functioning of 18 younger and 18 older participants was assessed using a standardized test battery. Participants then undertook a perceptual visuospatial sequence learning task. Various relationships between sequence learning and participants' cognitive functioning were examined through correlation and factor analysis. Results: Older participants exhibited significantly lower performance than their younger counterparts in the sequence learning task as well as in multiple cognitive functions. Factor analysis revealed two independent subsets of cognitive functions associated with performance in the sequence learning task, related to either the processing and storage of sequence information (first subset) or problem solving (second subset). Age-related declines were only found for the first subset of cognitive functions, which also explained a significant degree of the performance differences in the sequence learning task between age-groups. Discussion: The results suggest that age-related performance differences in perceptual visuospatial sequence learning can be explained by declines in the ability to process and store sequence information in older adults, while a set of cognitive functions related to problem solving mediates performance differences independent of age.

  16. Postgraduate training for general practice in the United Kingdom.

    PubMed

    Eisenberg, J M

    1979-04-01

    Although the role of general practice is well established in the United Kingdom's National Health Service, formal postgraduate training for primary care practice is a recent development. Trainees may enter three-year programs of coordinated inpatient and outpatient training or may select a series of independent posts. Programs have been developed to train general practitioners as teachers, and innovative courses have been established. Nevertheless, there is a curious emphasis on inpatient experiences, especially since British general practitioners seldom treat patients in the hospital. In their outpatient experiences trainees are provided with little variety in their instructors, practice settings, and medical problems. The demands on this already strained system will soon be increased due to recent legislation requiring postgraduate training for all new general practitioners. With a better understanding of training for primary care in the National Health Service, those planning American primary care training may avoid the problems and incorporate the attributes of British training for general practice.

  17. Reliability analysis of multicellular system architectures for low-cost satellites

    NASA Astrophysics Data System (ADS)

    Erlank, A. O.; Bridges, C. P.

    2018-06-01

    Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
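
    For reference, the survival probability of a single k-out-of-n cell built from independent, identical components follows the standard binomial form; the sketch below is generic and is not the paper's full lifetime model (which also accounts for the overheads and cross-strapping discussed above).

    ```python
    from math import comb

    def k_out_of_n_reliability(k, n, p):
        """Probability that at least k of n independent components, each with
        reliability p, are still working -- the survival condition of one cell."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Example: a 2-out-of-3 cell built from components of reliability 0.9.
    cell = k_out_of_n_reliability(2, 3, 0.9)
    print(cell)        # ~0.972
    # Hypothetical system of 4 such cells that must all survive:
    print(cell ** 4)
    ```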

  18. Observations of intermediate degree solar oscillations - 1989 April-June

    NASA Technical Reports Server (NTRS)

    Bachmann, Kurt T.; Schou, Jesper; Brown, Timothy M.

    1993-01-01

    Frequencies, splittings, and line widths from 85 d of full disk Doppler observations of solar p-modes taken between April 4 and June 30, 1989 are presented. Comparison of the present mode parameters with published Big Bear Solar Observatory (BBSO) results yields good agreement in general and is thus a confirmation of their work using an independent instrument and set of analysis routines. Average differences in p-mode frequencies measured by the two experiments in spring-summer 1989 are explained as a result of differences in the exact periods of data collection during a time of rapidly changing solar activity. It is shown that the present a(1) splitting coefficients for p-modes with nu/L less than 45 micro-Hz suffer from a significant systematic error. Evidence is presented to the effect that a detector distortion or alignment problem, not a problem with the power spectra analysis, is the most likely explanation of this a(1) anomaly.

  19. On Bifurcating Time-Periodic Flow of a Navier-Stokes Liquid Past a Cylinder

    NASA Astrophysics Data System (ADS)

    Galdi, Giovanni P.

    2016-10-01

    We provide general sufficient conditions for the existence and uniqueness of branching out of a time-periodic family of solutions from steady-state solutions to the two-dimensional Navier-Stokes equations in the exterior of a cylinder. By separating the time-independent averaged component of the velocity field from its oscillatory one, we show that the problem can be formulated as a coupled elliptic-parabolic nonlinear system in appropriate and distinct function spaces, with the property that the relevant linearized operators become Fredholm of index 0. In this functional setting, the notorious difficulty of 0 being in the essential spectrum entirely disappears and, in fact, it is even meaningless. Our approach is different and, we believe, more natural and simpler than those proposed by previous authors discussing similar questions. Moreover, the latter all fail, when applied to the problem studied here.

  20. A systematic process for persuasive mobile healthcare applications

    NASA Astrophysics Data System (ADS)

    Qasim, Mustafa Moosa; Ahmad, Mazida; Omar, Mazni; Zulkifli, Abdul Nasir; Bakar, Juliana Aida Abu

    2017-10-01

    In recent years there has been an increased focus on the persuasive design of mobile applications in the healthcare domain. However, most studies have not followed systematic processes when analyzing and designing persuasive technology applications, and they have also failed to provide some of the relevant information needed to design such applications. There is, in addition, a need for more guidance on how persuasive guidelines can be implemented, which means that a way of transforming persuasive components into software requirements and functionalities is needed. Therefore, this paper proposes a general systematic process that can be used independently of the problem domain to analyze the customers' significant requirements. One such domain is the obesity problem among Malaysian children, for which the most significant treatment is parental involvement. To this end, this paper applies the systematic process to the monitoring of children's obesity status by their parents.

  1. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of an unknown probability distribution and small samples. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Cases of computer simulation investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of stability analysis is thereby validated.

  2. Origins and Early History of Underwater Neutral Buoyancy Simulation of Weightlessness for EVA Procedures Development and Training. Part 2; Winnowing and Regrowth

    NASA Technical Reports Server (NTRS)

    Charles, John B.

    2013-01-01

    The technique of neutral buoyancy during water immersion was applied to a variety of questions pertaining to human performance factors in the early years of the space age. It was independently initiated by numerous aerospace contractors at nearly the same time, but specific applications depended on the problems that the developers were trying to solve. Those problems dealt primarily with human restraint and maneuverability and were often generic across extravehicular activity (EVA) and intravehicular activity (IVA) worksites. The same groups often also considered fractional gravity as well as weightless settings and experimented with ballasting to achieve lunar and Mars-equivalent loads as part of their on-going research and development. Dr. John Charles reviewed the association of those tasks with contemporary perceptions of the direction of NASA's future space exploration activities and with Air Force assessments of the military value of man in space.

  3. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
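
    A simplified sketch of the range-search idea (not Mortari's exact k-vector construction): sort the static data once, fit a line mapping index to value, and use it to jump directly to the approximate positions of the query bounds, so that the per-query cost is essentially independent of the database size for well-behaved data.

    ```python
    import numpy as np

    class SimpleKVector:
        """Illustrative value-range search over a static sorted array.

        A least-squares line through the (index, sorted value) pairs is used
        to jump close to the query bounds; a short local scan then makes the
        answer exact. This mimics the spirit of the k-vector technique
        without reproducing its exact construction.
        """
        def __init__(self, data):
            self.sorted = np.sort(np.asarray(data, dtype=float))
            idx = np.arange(len(self.sorted))
            self.slope, self.intercept = np.polyfit(idx, self.sorted, deg=1)

        def _guess(self, value):
            i = int(round((value - self.intercept) / self.slope))
            return min(max(i, 0), len(self.sorted) - 1)

        def range_search(self, lo, hi):
            """Return all stored values v with lo <= v <= hi."""
            i = self._guess(lo)
            while i > 0 and self.sorted[i - 1] >= lo:
                i -= 1
            while i < len(self.sorted) and self.sorted[i] < lo:
                i += 1
            j = i
            while j < len(self.sorted) and self.sorted[j] <= hi:
                j += 1
            return self.sorted[i:j]

    rng = np.random.default_rng(0)
    kv = SimpleKVector(rng.uniform(0.0, 1.0, size=10_000))
    print(len(kv.range_search(0.25, 0.26)))  # roughly 100 hits for uniform data
    ```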

  4. Motion and force control for multiple cooperative manipulators

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz, Kenneth

    1989-01-01

    The motion and force control of multiple robot arms manipulating a commonly held object is addressed. A general control paradigm that decouples the motion and force control problems is introduced. For motion control, there are three natural choices: (1) joint torques, (2) arm-tip force vectors, and (3) the acceleration of a generalized coordinate. Choice (1) allows a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system; (2) and (3) require the full model information but produce simpler problems. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, the allocation of the desired end-effector control force to the joint actuators can be optimized; otherwise the internal force can be controlled about some set point. It is shown that effective force regulation can be achieved even if little model information is available.

  5. Development of a problem solving evaluation instrument; untangling of specific problem solving assets

    NASA Astrophysics Data System (ADS)

    Adams, Wendy Kristine

    The purpose of my research was to produce a problem solving evaluation tool for physics. To do this it was necessary to gain a thorough understanding of how students solve problems. Although physics educators highly value problem solving and have put extensive effort into understanding successful problem solving, there is currently no efficient way to evaluate problem solving skill. Attempts have been made in the past; however, knowledge of the principles required to solve the subject problem is so critical that it completely overshadows any other skills students may use when solving a problem. The work presented here is unique because the evaluation tool removes the requirement that the student already have a grasp of physics concepts. It is also unique because I selected a wide range of people and a wide range of tasks for evaluation. This is an important design feature that helps patterns emerge more clearly. This dissertation includes an extensive literature review of problem solving in physics, math, education and cognitive science, as well as descriptions of studies involving student use of interactive computer simulations, the design and validation of a survey of beliefs about physics, and finally the design of the problem solving evaluation tool. I have successfully developed and validated a problem solving evaluation tool that identifies 44 separate assets (skills) necessary for solving problems. Rigorous validation studies, including work with an independent interviewer, show that the assets identified by this content-free evaluation tool are the same assets that students use to solve problems in mechanics and quantum mechanics. Understanding this set of component assets will help teachers and researchers address problem solving within the classroom.

  6. Bayes multiple decision functions.

    PubMed

    Wu, Wensong; Peña, Edsel A

    2013-01-01

    This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. Such problems arise in many practical areas, such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technology and the goal is to decide which among many genes are relevant with respect to some phenotype of interest; in the engineering and reliability sciences; in astronomy; in education; and in business. A Bayesian decision-theoretic approach to this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the quality of decisions. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow a dependent data structure with the associations obtained through a class of frailty-induced Archimedean copulas. In particular, a non-Gaussian dependent data structure, which is typical with failure-time data, can be entertained. The numerical implementation of the determination of the Bayes optimal action is facilitated through sequential Monte Carlo techniques. The theory developed could also be extended to the problems of multiple hypotheses testing, multiple classification and prediction, and high-dimensional variable selection. The proposed procedure is illustrated for the simple versus simple hypotheses setting and for the composite hypotheses setting through simulation studies. The procedure is also applied to a subset of a microarray data set from a colon cancer study.

  7. Gender moderates the effects of independence and dependence desires during the social support process.

    PubMed

    Nagumey, Alexander J; Reich, John W; Newsom, Jason

    2004-03-01

    This investigation examined the roles of gender and desires for independence and dependence in the support process. We assessed 118 older adults who reported needing help with at least 1 activity of daily living as a result of illness or health problems. Men with a high desire to be independent responded negatively to receiving support from their social network. Women's outcomes were generally unaffected by their independence and dependence desires. These results indicate that gender and desires for independence and dependence should be taken into account when examining the social support process, especially in men with health problems.

  8. Individualized Math Problems in Whole Numbers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this set require computations involving whole numbers.…

  9. Body Parts Dependent Joint Regressors for Human Pose Estimation in Still Images.

    PubMed

    Dantone, Matthias; Gall, Juergen; Leistner, Christian; Van Gool, Luc

    2014-11-01

    In this work, we address the problem of estimating 2d human pose from still images. Articulated body pose estimation is challenging due to the large variation in body poses and appearances of the different body parts. Recent methods that rely on the pictorial structure framework have shown to be very successful in solving this task. They model the body part appearances using discriminatively trained, independent part templates and the spatial relations of the body parts using a tree model. Within such a framework, we address the problem of obtaining better part templates which are able to handle a very high variation in appearance. To this end, we introduce parts dependent body joint regressors which are random forests that operate over two layers. While the first layer acts as an independent body part classifier, the second layer takes the estimated class distributions of the first one into account and is thereby able to predict joint locations by modeling the interdependence and co-occurrence of the parts. This helps to overcome typical ambiguities of tree structures, such as self-similarities of legs and arms. In addition, we introduce a novel data set termed FashionPose that contains over 7,000 images with a challenging variation of body part appearances due to a large variation of dressing styles. In the experiments, we demonstrate that the proposed parts dependent joint regressors outperform independent classifiers or regressors. The method also performs better or similar to the state-of-the-art in terms of accuracy, while running with a couple of frames per second.

  10. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions of the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm, so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.

  11. Agreement between parents and teachers on behavioral/emotional problems in Japanese school children using the child behavior checklist.

    PubMed

    Satake, Hiroyuki; Yoshida, Keiko; Yamashita, Hiroshi; Kinukawa, Naoko; Takagishi, Tatsuya

    2003-01-01

    We investigated the agreement between Japanese parents' and teachers' ratings concerning their children's behavioral/emotional problems. Mothers (n = 276) and teachers (n = 19) assessed each child (n = 316; 6 to 12 years old) using the Japanese parent and teacher versions of the Child Behavior Checklist. Parent-teacher agreement was examined through three indices: mean scores, correlations, and D scores (generalized distance between item profiles). Mean scores rated by parents were significantly higher than those by teachers. The differences of parents' ratings according to sex of the child or parents' occupational level, and those of teachers' ratings according to sex of the child, were consistent with previous Western studies. Parent-teacher correlations were in the low to middle range (0.16-0.36). We obtained significant sets of independent variables accounting for the variance of D scores, but the effect size of these variables was small. These results indicated that, as seen in Western studies, Japanese parents and teachers assess a child's problems differently and that the child's demographics affect their evaluation. For further research, parent and teacher characteristics which may influence their perspective of the child's problems could be examined.

  12. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
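
    The following sketch only illustrates the underlying combinatorial definition (a hitting set intersects every conflict set) by brute-force enumeration; the paper's contribution is to avoid such enumeration by mapping the problem to Boolean satisfiability and integer programming.

```python
# Minimal sketch of the hitting-set definition used in model-based diagnosis:
# a hitting set intersects every "conflict set". Brute-force enumeration is
# for illustration only and does not scale.
from itertools import combinations

def minimum_hitting_sets(conflict_sets):
    universe = sorted(set().union(*conflict_sets))
    for size in range(1, len(universe) + 1):
        hits = [set(c) for c in combinations(universe, size)
                if all(set(c) & s for s in conflict_sets)]
        if hits:
            return hits  # all hitting sets of minimum cardinality

conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
print(minimum_hitting_sets(conflicts))  # e.g. [{'A', 'B'}, {'A', 'C'}, {'B', 'C'}]
```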

  13. Multiagent distributed watershed management

    NASA Astrophysics Data System (ADS)

    Giuliani, M.; Castelletti, A.; Amigoni, F.; Cai, X.

    2012-04-01

    Deregulation and democratization of water along with increasing environmental awareness are challenging integrated water resources planning and management worldwide. The traditional centralized approach to water management, as described in much of the water resources literature, is often unfeasible in most modern social and institutional contexts. Thus it should be reconsidered from a more realistic and distributed perspective, in order to account for the presence of multiple and often independent Decision Makers (DMs) and many conflicting stakeholders. Game theory based approaches are often used to study these situations of conflict (Madani, 2010), but they are limited to a descriptive perspective. Multiagent systems (see Wooldridge, 2009), instead, seem to be a more suitable paradigm because they naturally allow representing a set of self-interested agents (DMs and/or stakeholders) acting in a distributed decision process at the agent level, resulting in a promising compromise between the ideal centralized solution and the actual uncoordinated practices. Casting a water management problem in a multiagent framework makes it possible to exploit the techniques and methods that are already available in this field for solving distributed optimization problems. In particular, in Distributed Constraint Satisfaction Problems (DCSP, see Yokoo et al., 2000), each agent controls some variables according to its own utility function but has to satisfy inter-agent constraints; while in Distributed Constraint Optimization Problems (DCOP, see Modi et al., 2005), the problem is generalized by introducing a global objective function to be optimized, which requires a coordination mechanism between the agents. In this work, we apply a DCSP-DCOP based approach to model a steady state hypothetical watershed management problem (Yang et al., 2009), involving several active human agents (i.e. agents who make decisions) and reactive ecological agents (i.e. agents representing environmental interests). Different scenarios of distributed management are simulated, i.e. a situation where all the agents act independently, a situation in which global coordination takes place, and in-between solutions. The solutions are compared with the ones presented in Yang et al. (2009), aiming to present more general multiagent approaches to solve distributed management problems.

  14. Techniques for recognizing identity of several response functions from the data of visual inspection

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.

    1996-08-01

    The purpose of this paper is to present some efficient techniques for recognizing from the observed data whether several response functions are identical to each other. For example, in an industrial setting the problem may be to determine whether the production coefficients established in a small-scale pilot study apply to each of several large-scale production facilities. The techniques proposed here combine sensor information from automated visual inspection of manufactured products which is carried out by means of pixel-by-pixel comparison of the sensed image of the product to be inspected with some reference pattern (or image). Let (a1, ..., am) be p-dimensional parameters associated with m response models of the same type. This study is concerned with the simultaneous comparison of a1, ..., am. A generalized maximum likelihood ratio (GMLR) test is derived for testing equality of these parameters, where each of the parameters represents a corresponding vector of regression coefficients. The GMLR test reduces to an equivalent test based on a statistic that has an F distribution. The main advantage of the test lies in its relative simplicity and the ease with which it can be applied. Another interesting test for the same problem is an application of Fisher's method of combining independent test statistics which can be considered as a parallel procedure to the GMLR test. The combination of independent test statistics does not appear to have been used very much in applied statistics. There does, however, seem to be potential data analytic value in techniques for combining distributional assessments in relation to statistically independent samples which are of joint experimental relevance. In addition, a new iterated test for the problem defined above is presented. A rejection of the null hypothesis by this test provides some reason why all the parameters are not equal. A numerical example is discussed in the context of the proposed procedures for hypothesis testing.
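
    As a concrete illustration of Fisher's method of combining independent test statistics mentioned above, the sketch below combines made-up p-values into a single chi-square statistic; it is not the GMLR procedure itself.

```python
# Hedged sketch of Fisher's method for combining independent tests via their
# p-values. The p-values below are invented for illustration.
import numpy as np
from scipy import stats

p_values = np.array([0.08, 0.20, 0.03])          # independent per-sample tests (toy)
chi2_stat = -2.0 * np.sum(np.log(p_values))      # Fisher's combining statistic
combined_p = stats.chi2.sf(chi2_stat, df=2 * len(p_values))
print(chi2_stat, combined_p)

# scipy also provides this combination directly:
print(stats.combine_pvalues(p_values, method="fisher"))
```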

  15. Using lod-score differences to determine mode of inheritance: a simple, robust method even in the presence of heterogeneity and reduced penetrance.

    PubMed

    Greenberg, D A; Berger, B

    1994-10-01

    Determining the mode of inheritance is often difficult under the best of circumstances, but when segregation analysis is used, the problems of ambiguous ascertainment procedures, reduced penetrance, heterogeneity, and misdiagnosis make mode-of-inheritance determinations even more unreliable. The mode of inheritance can also be determined using a linkage-based method (maximized maximum lod score or mod score) and association-based methods, which can overcome many of these problems. In this work, we determined how much information is necessary to reliably determine the mode of inheritance from linkage data when heterogeneity and reduced penetrance are present in the data set. We generated data sets under both dominant and recessive inheritance with reduced penetrance and with varying fractions of linked and unlinked families. We then analyzed those data sets, assuming reduced penetrance, both dominant and recessive inheritance, and no heterogeneity. We investigated the reliability of two methods for determining the mode of inheritance from the linkage data. The first method examined the difference (delta) between the maximum lod scores calculated under the two mode-of-inheritance assumptions. We found that if delta was > 1.5, then the higher of the two maximum lod scores reflected the correct mode of inheritance with high reliability and that a delta of 2.5 appeared to practically guarantee a correct mode-of-inheritance inference. Furthermore, this reliability appeared to be virtually independent of alpha, the fraction of linked families in the data set, although the reliability decreased slightly as alpha fell below .50.(ABSTRACT TRUNCATED AT 250 WORDS)

  16. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  17. Half a century of research on Garner interference and the separability-integrality distinction.

    PubMed

    Algom, Daniel; Fitousi, Daniel

    2016-12-01

    Research in the allied domains of selective attention and perceptual independence has made great advances over the past 5 decades ensuing from the foundational ideas and research conceived by Wendell R. Garner. In particular, Garner's speeded classification paradigm has received considerable attention in psychology. The paradigm is widely used to inform research and theory in various domains of cognitive science. It was Garner who provided the consensual definition of the separable-integral partition of stimulus dimensions, delineating a set of converging operations sustaining the distinction. This distinction is a pillar of today's cognitive science. We review the key ideas, definitions, and findings along 2 paths of the evolution of Garnerian research: selective attention, with a focus on Garner interference and its relation to the Stroop effect, and divided attention, with focus on perceptual independence gauged by multivariate models of perception. The review tracks developments in a roughly chronological order. Our review is also integrative as we follow the evolution of a set of nascent ideas into the vast multifaceted enterprise that they comprise today. Finally, the review is also critical as we highlight problems, inconsistencies, and deviations from original intent in the various studies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.

  19. ISE: An Integrated Search Environment. The manual

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan

    1992-01-01

    Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines that support solving search problems; these core routines handle the control of searches and maintain search-related statistics. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. ISE is designed to be user-friendly and information-rich. In this manual, the organization of ISE is described, and several experiments carried out with ISE are also described.

  20. An intelligent CNC machine control system architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.J.; Loucks, C.S.

    1996-10-01

    Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.

  1. Nano-metrology and terrain modelling - convergent practice in surface characterisation

    USGS Publications Warehouse

    Pike, R.J.

    2000-01-01

    The quantification of magnetic-tape and disk topography has a macro-scale counterpart in the Earth sciences - terrain modelling, the numerical representation of relief and pattern of the ground surface. The two practices arose independently and continue to function separately. This methodological paper introduces terrain modelling, discusses its similarities to and differences from industrial surface metrology, and raises the possibility of a unified discipline of quantitative surface characterisation. A brief discussion of an Earth-science problem, subdividing a heterogeneous terrain surface from a set of sample measurements, exemplifies a multivariate statistical procedure that may transfer to tribological applications of 3-D metrological height data.

  2. Obstructive sleep apnea (OSA): a complication of acute infectious mononucleosis infection in a child.

    PubMed

    Cheng, Jeffrey

    2014-03-01

    Independently, obstructive sleep apnea (OSA) and infectious mononucleosis are not uncommon in the pediatric population, but acute onset of OSA, as a respiratory complication in the setting of acute EBV infection is extremely uncommon. Previous reports of this clinical entity are sparse and from nearly two decades ago. Urgent adenotonsillectomy was commonly advocated. This complication may be managed medically with systemic corticosteroids and non-invasive continuous positive airway pressure (CPAP), and a case is presented to highlight an updated management approach to this rarely encountered clinical problem in children. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Verification of EPA's " Preliminary remediation goals for radionuclides" (PRG) electronic calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stagich, B. H.

    The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides information on establishing PRGs for radionuclides at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites with radioactive contamination (Verification Study Charge, Background). These risk-based PRGs set concentration limits using carcinogenic toxicity values under specific exposure conditions (PRG User’s Guide, Section 1). The purpose of this verification study is to ascertain that the computer code has no inherent numerical problems in obtaining solutions and to ensure that the equations are programmed correctly.

  4. Dimension independence in exterior algebra.

    PubMed Central

    Hawrylycz, M

    1995-01-01

    The identities between homogeneous expressions in rank 1 vectors and rank n - 1 covectors in a Grassmann-Cayley algebra of rank n, in which one set occurs multilinearly, are shown to represent a set of dimension-independent identities. The theorem yields an infinite set of nontrivial geometric identities from a given identity. PMID:11607520

  5. Is There Evidence of Failing to Fail in Our Schools of Nursing?

    PubMed

    Docherty, Angie; Dieckmann, Nathan

    2015-01-01

    To assess evidence for "failing to fail" in undergraduate nursing programs. Literature on grading practices largely focuses on clinical or academic grading. Reviewing both as distinct entities may miss a more systemic grading problem. A cross-sectional survey targeted 235 faculty within university and community colleges in a western state. Chi-square tests of independence explored the relation between institutional and faculty variables. The response rate was 34 percent. Results suggest failing to fail may be evident across the sector in both clinical and academic settings: 43 percent of respondents had awarded higher grades than merited; 17.7 percent had passed written examinations they felt should fail; 66 percent believed they had worked with students who should not have passed their previous placement. Failing to fail cuts across instructional settings. Further exploration is imperative if schools are to better engender a climate for rigorously measuring student attainment.

  6. Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-05-01

    We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels ensuring that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other, our formulation allows for a more direct use to solve a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.

  7. The NIFTy way of Bayesian signal inference

    NASA Astrophysics Data System (ADS)

    Selig, Marco

    2014-12-01

    We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTy is applicable to, and has already been applied in, 1D, 2D, 3D, and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

  8. Mixed variational formulations of finite element analysis of elastoacoustic/slosh fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Ohayon, Roger

    1991-01-01

    A general three-field variational principle is obtained for the motion of an acoustic fluid enclosed in a rigid or flexible container by the method of canonical decomposition applied to a modified form of the wave equation in the displacement potential. The general principle is specialized to a mixed two-field principle that contains the fluid displacement potential and pressure as independent fields. This principle contains a free parameter alpha. Semidiscrete finite-element equations of motion based on this principle are displayed and applied to the transient response and free-vibrations of the coupled fluid-structure problem. It is shown that a particular setting of alpha yields a rich set of formulations that can be customized to fit physical and computational requirements. The variational principle is then extended to handle slosh motions in a uniform gravity field, and used to derive semidiscrete equations of motion that account for such effects.

  9. Effects of Self-Graphing and Goal Setting on the Math Fact Fluency of Students with Disabilities

    PubMed Central

    Figarola, Patricia M; Gunter, Philip L; Reffel, Julia M; Worth, Susan R; Hummel, John; Gerber, Brian L

    2008-01-01

    We evaluated the impact of goal setting and students' participation in graphing their own performance data on the rate of math fact calculations. Participants were 3 students with mild disabilities in the first and second grades; 2 of the 3 students were also identified with Attention-Deficit/Hyperactivity Disorder (ADHD). They were taught to use Microsoft Excel® software to graph their rate of correct calculations when completing timed, independent practice sheets consisting of single-digit mathematics problems. Two students' rates of correct calculations nearly always met or exceeded the aim line established for their correct calculations. Additional interventions were required for the third student. Results are discussed in terms of implications and future directions for increasing the use of evaluation components in classrooms for students at risk for behavior disorders and academic failure. PMID:22477686

  10. Independent component analysis decomposition of hospital emergency department throughput measures

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Henry

    2016-05-01

    We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
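
    A minimal sketch of the ICA-versus-PCA comparison described above, using scikit-learn on simulated throughput-like data (the actual hospital measures are not reproduced here):

```python
# Hedged sketch: decomposing a (hospitals x throughput-measures) matrix with
# ICA and PCA. The data are simulated mixtures of latent sources, standing in
# for the five emergency-department throughput measures.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(1)
n_hospitals, n_measures, n_sources = 3086, 5, 3

sources = rng.laplace(size=(n_hospitals, n_sources))   # non-Gaussian latent sources
mixing = rng.normal(size=(n_sources, n_measures))
X = sources @ mixing                                    # observed throughput measures

ica = FastICA(n_components=n_sources, random_state=0)
pca = PCA(n_components=n_sources)
S_ica = ica.fit_transform(X)   # estimated independent components per hospital
S_pca = pca.fit_transform(X)   # principal-component scores for comparison
print(S_ica.shape, S_pca.shape)
```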

  11. Psychedelics not linked to mental health problems or suicidal behavior: a population study.

    PubMed

    Johansen, Pål-Ørjan; Krebs, Teri Suzanne

    2015-03-01

    A recent large population study of 130,000 adults in the United States failed to find evidence for a link between psychedelic use (lysergic acid diethylamide, psilocybin or mescaline) and mental health problems. Using a new data set consisting of 135,095 randomly selected United States adults, including 19,299 psychedelic users, we examine the associations between psychedelic use and mental health. After adjusting for sociodemographics, other drug use and childhood depression, we found no significant associations between lifetime use of psychedelics and increased likelihood of past year serious psychological distress, mental health treatment, suicidal thoughts, suicidal plans and suicide attempt, depression and anxiety. We failed to find evidence that psychedelic use is an independent risk factor for mental health problems. Psychedelics are not known to harm the brain or other body organs or to cause addiction or compulsive use; serious adverse events involving psychedelics are extremely rare. Overall, it is difficult to see how prohibition of psychedelics can be justified as a public health measure. © The Author(s) 2015.

  12. Conceptions of schizophrenia as a problem of nerves: a cross-cultural comparison of Mexican-Americans and Anglo-Americans.

    PubMed

    Jenkins, J H

    1988-01-01

    This paper explores indigenous conceptions of psychosis within family settings. The cultural categories nervios and 'nerves', as applied by Mexican-American and Anglo-American relatives to family members diagnosed with schizophrenia, are examined. While Mexican-Americans tended to consider nervios an appropriate interpretation of the problem, Anglo-Americans explicitly dismissed the parallel English term 'nerves'. Anglo-American relatives were likely to consider the problem as 'mental' in nature, often with specific reference to psychiatric diagnostic labels such as 'schizophrenia'. Although variations in conceptions appear related to both ethnicity and socioeconomic status, significant cultural differences were observed independent of socioeconomic status. These results raise questions concerning contemporary anthropological views that psychosis is conceptualized in substantially similar ways cross-culturally, and underscore the need for more contextualized understanding of the meaning and application of indigenous concepts of mental disorder. The paper concludes with a discussion of psychocultural meanings associated with ethnopsychiatric labels for schizophrenia and their importance for the social and moral status of patients and their kin.

  13. A fully implicit finite element method for bidomain models of cardiac electromechanics

    PubMed Central

    Dal, Hüsnü; Göktepe, Serdar; Kaliske, Michael; Kuhl, Ellen

    2012-01-01

    We propose a novel, monolithic, and unconditionally stable finite element algorithm for the bidomain-based approach to cardiac electromechanics. We introduce the transmembrane potential, the extracellular potential, and the displacement field as independent variables, and extend the common two-field bidomain formulation of electrophysiology to a three-field formulation of electromechanics. The intrinsic coupling arises from both excitation-induced contraction of cardiac cells and the deformation-induced generation of intra-cellular currents. The coupled reaction-diffusion equations of the electrical problem and the momentum balance of the mechanical problem are recast into their weak forms through a conventional isoparametric Galerkin approach. As a novel aspect, we propose a monolithic approach to solve the governing equations of excitation-contraction coupling in a fully coupled, implicit sense. We demonstrate the consistent linearization of the resulting set of non-linear residual equations. To assess the algorithmic performance, we illustrate characteristic features by means of representative three-dimensional initial-boundary value problems. The proposed algorithm may open new avenues to patient specific therapy design by circumventing stability and convergence issues inherent to conventional staggered solution schemes. PMID:23175588

  14. Treatments for the challenging behaviours of adults with intellectual disabilities.

    PubMed

    Matson, Johnny L; Neal, Daniene; Kozlowski, Alison M

    2012-10-01

    To provide an overview and critical assessment of common problems and best evidence practice in treatments for the challenging behaviours (CBs) of adults with intellectual disabilities (IDs). Commonly observed problems that present obstacles to successful treatment plans are discussed, followed by an analysis of available research on the efficacy of behavioural and pharmacological therapies. Behavioural and pharmacological interventions are most commonly used when addressing CBs in people with IDs. However, within each of these techniques, there are methods that have support in the literature for efficacy and those that do not. As clinicians, it is important to follow research so that we are engaging in best practices when developing treatment plans for CBs. One of the most consuming issues for psychiatrists and other mental health professionals who work with people who evince developmental disabilities, such as IDs, are CBs. These problems are very dangerous and are a major impediment to independent, less restrictive living. However, there is a major gap between what researchers show is effective and much of what occurs in real-world settings.

  15. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    PubMed

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
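
    A naive sketch of Kronecker kernel ridge regression for pairwise learning is given below; it forms the dense Kronecker kernel explicitly and solves the ridge system directly, which ignores the efficient closed-form instantiations the paper focuses on. The kernel choice, data, and regularization constant are illustrative assumptions.

```python
# Hedged sketch: pairwise learning with a Kronecker product kernel and ridge
# regression. The pairwise kernel between (u, v) and (u', v') is
# K_u(u, u') * K_v(v, v'); the dense solve below is for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_u, n_v, d = 8, 6, 3
U = rng.normal(size=(n_u, d))                  # features of the first objects
V = rng.normal(size=(n_v, d))                  # features of the second objects
Y = rng.normal(size=(n_u, n_v))                # labels for all pairs

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = np.kron(rbf(U, U), rbf(V, V))              # pairwise (Kronecker) kernel matrix
lam = 0.1
alpha = np.linalg.solve(K + lam * np.eye(n_u * n_v), Y.ravel())
Y_fit = (K @ alpha).reshape(n_u, n_v)          # fitted pairwise labels
print(np.abs(Y_fit - Y).max())
```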

  16. Discovery of error-tolerant biclusters from noisy gene expression data.

    PubMed

    Gupta, Rohit; Rao, Navneet; Kumar, Vipin

    2011-11-24

    An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations, such as the inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; and the inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters as its result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as it only works with binary or Boolean attributes, its application to gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issues of noise and of handling real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach, RAP, in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S. cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes than those obtained from the RAP approach but also have higher functional coherence, as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, as estimated by two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem, biomarker discovery, we used four real-valued breast cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems, functional module discovery and biomarker discovery, clearly signify the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.

  17. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, the high noise of experimental measurements, and the insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
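
    To make the low-rank idea concrete, the sketch below solves a nuclear-norm-regularized multi-target linear regression by proximal gradient descent (singular-value soft-thresholding). It is a generic illustration of rank-constraining regularization, not the authors' algorithm, and the data are simulated.

```python
# Hedged sketch: nuclear-norm (low-rank) regularized multi-target linear
# regression, illustrating the idea of constraining the rank of the
# regulator-to-target connectivity matrix.
import numpy as np

def svd_shrink(W, tau):
    """Proximal operator of the nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
n, p, q, rank = 200, 30, 20, 3                  # samples, regulators, targets, true rank
W_true = rng.normal(size=(p, rank)) @ rng.normal(size=(rank, q))
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, q))

lam, step = 5.0, 1.0 / np.linalg.norm(X, 2) ** 2
W = np.zeros((p, q))
for _ in range(500):                            # proximal gradient iterations
    grad = X.T @ (X @ W - Y)
    W = svd_shrink(W - step * grad, step * lam)

print(np.linalg.matrix_rank(W, tol=1e-6),
      np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```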

  18. Adequate mathematical modelling of environmental processes

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.

    2012-04-01

    In environmental observations and laboratory visualization, both large-scale flow components such as currents, jets, vortices, and waves and a fine structure are registered (different examples are given). Conventional mathematical modeling, both analytical and numerical, is directed mostly at describing the energetically important flow components; the role of fine structures still remains obscure. The variety of existing models makes it difficult to choose the most adequate one and to assess their mutual degree of correspondence. The goal of the talk is to give a scrutinizing analysis of the kinematics and dynamics of flows. A difference is underlined between the concept of "motion" as a transformation of a vector space into itself with distance conservation and the concept of "flow" as displacement and rotation of deformable "fluid particles". Basic physical quantities of the flow, namely density, momentum, energy (entropy) and admixture concentration, are selected as physical parameters defined by the fundamental set, which includes the differential D'Alembert, Navier-Stokes, Fourier and/or Fick equations and a closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group, reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneracy. The fundamental set is analyzed taking into account the condition of compatibility. The high order of the set indicates a complex structure of the complete solutions, corresponding to the physical structure of real flows. Analytical solutions of a number of problems, including flows induced by diffusion on topography and generation of periodic internal waves by compact sources in weakly dissipative media, are constructed, as well as numerical solutions of the same problems. They include a regularly perturbed function describing the large-scale component and a rich family of singularly perturbed functions corresponding to fine flow components. Solutions are compared with data from laboratory experiments performed on the facilities USU "HPC IPMec RAS" with support from the Ministry of Education and Science RF (Goscontract No. 16.518.11.7059). Related problems of completeness and accuracy of laboratory and environmental measurements are discussed.

  19. Student Problems. Adult Literacy Independent Learning Packet.

    ERIC Educational Resources Information Center

    Koefer, Ann M.

    This independent learning packet, which is designed for administrators, teachers, counselors, and tutors in Pennsylvania's Region 7 Tri-Valley Literacy Staff Development area as well as for their adult students, examines the following seven problems encountered by students: the job market, child care, single parenting/parenting skills, divorce,…

  20. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
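
    For orientation, the sketch below shows the generic Metropolis-Hastings loop into which a model-proposal mechanism such as SGR would be plugged; the toy random-walk proposal and one-parameter model stand in for the geostatistical resampler, which is not reproduced here.

```python
# Hedged sketch of a Metropolis-Hastings skeleton. The "proposal step" is a
# symmetric random walk on a toy one-parameter model; an SGR proposal would
# conditionally re-simulate part of a spatial model instead.
import numpy as np

rng = np.random.default_rng(5)

def log_posterior(m, data, sigma=0.5):
    # toy Gaussian likelihood with a flat prior
    return -0.5 * np.sum((data - m) ** 2) / sigma**2

data = rng.normal(loc=1.3, scale=0.5, size=20)
m, chain = 0.0, []
for _ in range(5000):
    m_prop = m + 0.2 * rng.normal()                 # proposal step (SGR would go here)
    log_alpha = log_posterior(m_prop, data) - log_posterior(m, data)
    if np.log(rng.random()) < log_alpha:            # accept/reject
        m = m_prop
    chain.append(m)

print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior mean and spread
```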

  1. Subpopulations of Older Foster Youths With Differential Risk of Diagnosis for Alcohol Abuse or Dependence*

    PubMed Central

    Keller, Thomas E.; Blakeslee, Jennifer E.; Lemon, Stephenie C.; Courtney, Mark E.

    2010-01-01

    Objective: Distinctive combinations of factors are likely to be associated with serious alcohol problems among adolescents about to emancipate from the foster care system and face the difficult transition to independent adulthood. This study identifies particular subpopulations of older foster youths that differ markedly in the probability of a lifetime diagnosis for alcohol abuse or dependence. Method: Classification and regression tree (CART) analysis was applied to a large, representative sample (N = 732) of individuals, 17 years of age or older, placed in the child welfare system for more than 1 year. CART evaluated two exploratory sets of variables for optimal splits into groups distinguished from each other on the criterion of lifetime alcohol-use disorder diagnosis. Results: Each classification tree yielded four terminal groups with different rates of lifetime alcohol-use disorder diagnosis. Notable groups in the first tree included one characterized by high levels of both delinquency and violence exposure (53% diagnosed) and another that featured lower delinquency but an independent-living placement (21% diagnosed). Notable groups in the second tree included African American adolescents (only 8% diagnosed), White adolescents not close to caregivers (40% diagnosed), and White adolescents closer to caregivers but with a history of psychological abuse (36% diagnosed). Conclusions: Analyses incorporating variables that could be comorbid with or symptomatic of alcohol problems, such as delinquency, yielded classifications potentially useful for assessment and service planning. Analyses without such variables identified other factors, such as quality of caregiving relationships and maltreatment, associated with serious alcohol problems, suggesting opportunities for prevention or intervention. PMID:20946738
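
    A toy CART-style analysis in the spirit of the study, using scikit-learn's decision tree on invented variables (delinquency and violence exposure are the only predictors here, and the data are simulated rather than drawn from the study sample):

```python
# Hedged sketch: a classification tree that splits a sample into groups with
# different rates of a binary diagnosis, echoing the CART analysis described
# above. Variables, effect sizes, and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
n = 732
delinquency = rng.integers(0, 10, n)
violence_exposure = rng.integers(0, 10, n)
p = 0.1 + 0.3 * (delinquency > 5) + 0.05 * (delinquency > 5) * violence_exposure / 10
diagnosis = rng.random(n) < p

X = np.column_stack([delinquency, violence_exposure])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50, random_state=0)
tree.fit(X, diagnosis)
print(export_text(tree, feature_names=["delinquency", "violence_exposure"]))
```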

  2. Surface and through crack problems in orthotropic plates

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Wu, B.-H.

    1988-01-01

    The present treatment of the general mode I crack problem in bending- and membrane-loaded orthotropic plates proceeds by formulating the bending problem for a series of planar and through-cracks; by independently varying the six independent constants, the effect of material orthotropy on the stress intensity factor is determined. The surface-crack problem is then formulated by means of the line-spring model, using a transverse-shear theory of plate bending. Attention is given to composite laminates with through-cracks or semielliptic surface cracks. A significant effect is noted for material orthotropy.

  3. Application of the artificial bee colony algorithm for solving the set covering problem.

    PubMed

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
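
    For readers unfamiliar with the underlying problem, the sketch below solves a tiny non-unicost set covering instance with the classical greedy heuristic; it illustrates the problem the artificial bee colony metaheuristic targets, not the ABC algorithm itself.

```python
# Hedged sketch: greedy heuristic for non-unicost set covering. At each step,
# pick the column with the best cost per newly covered row.
def greedy_set_cover(rows, columns, costs):
    """columns: dict name -> set of rows covered; costs: dict name -> cost."""
    uncovered, chosen = set(rows), []
    while uncovered:
        best = min(columns,
                   key=lambda c: costs[c] / max(len(columns[c] & uncovered), 1e-9))
        if not columns[best] & uncovered:
            raise ValueError("rows cannot be covered")
        chosen.append(best)
        uncovered -= columns[best]
    return chosen

columns = {"c1": {1, 2, 3}, "c2": {2, 4}, "c3": {3, 4, 5}, "c4": {1, 5}}
costs = {"c1": 3.0, "c2": 1.0, "c3": 2.5, "c4": 1.0}
print(greedy_set_cover({1, 2, 3, 4, 5}, columns, costs))  # e.g. ['c2', 'c4', 'c3']
```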

  4. Application of the Artificial Bee Colony Algorithm for Solving the Set Covering Problem

    PubMed Central

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem. PMID:24883356

  5. An Independent Filter for Gene Set Testing Based on Spectral Enrichment.

    PubMed

    Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H

    2015-01-01

    Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.

  6. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review.

    PubMed

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme--this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and Γ(H → bb̄) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  7. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    NASA Astrophysics Data System (ADS)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R e+e- and Γ(H\\to b\\bar{b}) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders, behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  8. Serial position curves in free recall.

    PubMed

    Laming, Donald

    2010-01-01

    The scenario for free recall set out in Laming (2009) is developed to provide models for the serial position curves from 5 selected sets of data, for final free recall, and for multitrial free recall. The 5 sets of data reflect the effects of rate of presentation, length of list, delay of recall, and suppression of rehearsal. Each model accommodates the serial position curve for first recalls (where those data are available) as well as that for total recalls. Both curves are fit with the same parameter values, as also (with 1 exception) are all of the conditions compared within each experiment. The distributions of numbers of recalls are also examined and shown to have variances increased above what would be expected if successive recalls were independent. This is taken to signify that, in those experiments in which rehearsals were not recorded, the retrieval of words for possible recall follows the same pattern that is observed following overt rehearsal, namely, that retrieval consists of runs of consecutive elements from memory. Finally, 2 sets of data are examined that the present approach cannot accommodate. It is argued that the problem with these data derives from an interaction between the patterns of (covert) rehearsal and the parameters of list presentation.

  9. Mortality determinants and prediction of outcome in high risk newborns.

    PubMed

    Dalvi, R; Dalvi, B V; Birewar, N; Chari, G; Fernandez, A R

    1990-06-01

    The aim of this study was to determine independent patient-related predictors of mortality in high risk newborns admitted to our centre. The study population comprised 100 consecutive newborns each from the premature unit (PU) and sick baby care unit (SBCU), respectively. Thirteen high risk factors (variables) for each of the two units were entered into a multivariate regression analysis. Variables with independent predictive value for poor outcome (i.e., death) in PU were weight less than 1 kg, hyaline membrane disease, neurologic problems, and intravenous therapy. High risk factors in SBCU included blood gas abnormality, bleeding phenomena, recurrent convulsions, apnea, and congenital anomalies. Identification of these factors guided us in defining priority areas for improvement in our system of neonatal care. Also, based on these variables a simple predictive score for outcome was constructed. The prediction equation and the score were cross-validated by applying them to a 'test-set' of 100 newborns each for PU and SBCU. Results showed a comparable sensitivity, specificity and error rate.

  10. Classification of JET Neutron and Gamma Emissivity Profiles

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET

    2016-05-01

    In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and to the coarse sampling of the LOS, the tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In specific experimental conditions the availability of LOSs is restricted to a single view. In this case an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used to derive the type of the distribution. In the present approach the classification is developed using the theory of belief functions, which provides the support to fuse the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by the different independent techniques, to combine them, and to manage possible conflicts.
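
    A minimal sketch of the kind of belief-function fusion referred to above: Dempster's rule combining two independent mass assignments over a frame of candidate profile classes. The class names, mass values, and the choice of Dempster's rule itself are illustrative assumptions, not taken from the JET analysis.

      from itertools import product

      def dempster_combine(m1, m2):
          """Combine two mass functions (dicts mapping frozensets of classes to mass)
          with Dempster's rule; conflicting mass (empty intersections) is renormalized."""
          combined, conflict = {}, 0.0
          for (A, a), (B, b) in product(m1.items(), m2.items()):
              inter = A & B
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + a * b
              else:
                  conflict += a * b
          if conflict >= 1.0:
              raise ValueError("total conflict: sources cannot be combined")
          return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

      # Hypothetical outputs of a clustering step and a supervised classifier
      # over three profile types ('peaked', 'hollow', 'flat').
      clustering = {frozenset({'peaked'}): 0.6, frozenset({'peaked', 'hollow'}): 0.3,
                    frozenset({'peaked', 'hollow', 'flat'}): 0.1}
      classifier = {frozenset({'peaked'}): 0.5, frozenset({'hollow'}): 0.3,
                    frozenset({'peaked', 'hollow', 'flat'}): 0.2}
      fused, k = dempster_combine(clustering, classifier)
      print(fused, "conflict =", round(k, 3))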

  11. On the evaporation of solar dark matter: spin-independent effective operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Zheng-Liang; Wu, Yue-Liang; Yang, Zi-Qing

    2016-09-13

    As a part of the effort to investigate the implications of dark matter (DM)-nucleon effective interactions on solar DM detection, in this paper we focus on the evaporation of the solar DM for a set of DM-nucleon spin-independent (SI) effective operators. In order to put the evaluation of the evaporation rate on a more reliable ground, we calculate the non-thermal distribution of the solar DM using Monte Carlo methods, rather than adopting the Maxwellian approximation. We then specify relevant signal parameter spaces for the solar DM detection for various SI effective operators. Based on the analysis, we determine the minimum DM masses for which the DM-nucleon coupling strengths can be probed from the solar neutrino observations. As an interesting application, our investigation also shows that the evaporation effect cannot be neglected in a recent proposal aiming to solve the solar abundance problem by invoking momentum-dependent asymmetric DM in the Sun.

  12. On solving the compressible Navier-Stokes equations for unsteady flows at very low Mach numbers

    NASA Technical Reports Server (NTRS)

    Pletcher, R. H.; Chen, K.-H.

    1993-01-01

    The properties of a preconditioned, coupled, strongly implicit finite difference scheme for solving the compressible Navier-Stokes equations in primitive variables are investigated for two unsteady flows at low speeds, namely the impulsively started driven cavity and the startup of pipe flow. For the shear-driven cavity flow, the computational effort was observed to be nearly independent of Mach number, especially at the low end of the range considered. This Mach number independence was also observed for steady pipe flow calculations; however, rather different conclusions were drawn for the unsteady calculations. In the pressure-driven pipe startup problem, the compressibility of the fluid began to significantly influence the physics of the flow development at quite low Mach numbers. The present scheme was observed to produce the expected characteristics of completely incompressible flow when the Mach number was set at very low values. Good agreement with incompressible results available in the literature was observed.

  13. The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial

    ERIC Educational Resources Information Center

    Crissinger, Bryan R.

    2015-01-01

    Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…

  14. Families at risk of poor parenting: a model for service delivery, assessment, and intervention.

    PubMed

    Ayoub, C; Jacewitz, M M

    1982-01-01

    The At Risk Parent Child Program is a multidisciplinary network agency designed for the secondary prevention of poor parenting and the extremes of child abuse and neglect. This model system of service delivery emphasizes (1) the coordination of existing community resources to access a target population of families at risk of parenting problems, (2) the provision of multiple special services in a neutral location (ambulatory pediatric clinic), and (3) the importance of intensive individual contact with a clinical professional who serves as primary therapist, social advocate and service coordinator for client families. Identification and assessment of families are best done during the prenatal and perinatal periods. Both formal and informal procedures for screening for risk factors are described, and a simple set of at risk criteria for use by hospital nursing staff is provided. Preventive intervention strategies include special medical, psychological, social and developmental services, offered in an inpatient, outpatient, or in-home setting. Matching family needs to modality and setting of treatment is a major program concern. All direct services to at risk families are supplied by professionals employed within existing local agencies (hospital, public health department, state guidance center, and medical school pediatric clinic). Multiple agency involvement allows a broad-based screening capacity which allows thousands of families routine access to program services. The administrative center of the network stands as an independent, community-funded core which coordinates and monitors direct clinical services, and provides local political advocacy for families at risk of parenting problems.

  15. Second-order asymptotics for quantum hypothesis testing in settings beyond i.i.d.—quantum lattice systems and more

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Datta, Nilanjana; Rouzé, Cambyse; Pautrat, Yan

    2016-06-15

    Quantum Stein’s lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ^{⊗n} or σ^{⊗n}) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability α_n of erroneously inferring the state to be σ, the probability β_n of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
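
    Since the limiting rate in quantum Stein's lemma is the relative entropy D(ρ||σ) = Tr[ρ(log ρ - log σ)], a small numerical sketch for a single qubit may help fix ideas; the two states below are made up.

      import numpy as np
      from scipy.linalg import logm

      def relative_entropy(rho, sigma):
          """Quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)],
          in nats; assumes supp(rho) is contained in supp(sigma)."""
          return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

      # Two hypothetical qubit states (density matrices).
      rho = np.array([[0.9, 0.0], [0.0, 0.1]])
      sigma = np.array([[0.6, 0.1], [0.1, 0.4]])

      D = relative_entropy(rho, sigma)
      print(f"D(rho||sigma) = {D:.4f} nats")
      # Stein's lemma: beta_n ~ exp(-n D) for large n at a fixed type-I error bound.
      for n in (10, 100):
          print(f"n = {n}: beta_n ~ exp(-n D) = {np.exp(-n * D):.3e}")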

  16. A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.

    PubMed

    Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa

    2015-12-01

    Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme has two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to solve the parameter dependence of marker diameter and marker number. Second, we propose a novel algorithm to solve the tracking of fiducial markers by reducing the tracking problem to an incomplete point set registration problem. Because a global optimization of a point set registration occurs, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method can achieve accurate tracking, almost identical to the current best, semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (only requires a gross value of the marker diameter) and does not require any manual interaction, providing the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  18. LCTV Holographic Imaging

    NASA Technical Reports Server (NTRS)

    Knopp, Jerome

    1996-01-01

    Astronauts are required to interface with complex systems that require sophisticated displays to communicate effectively. Lightweight, head-mounted real-time displays that present holographic images for comfortable viewing may be the ideal solution. We describe an implementation of a liquid crystal television (LCTV) as a spatial light modulator (SLM) for the display of holograms. The implementation required the solution of a complex set of problems. These include field calculations, determination of the LCTV-SLM complex transmittance characteristics, and a precise knowledge of the signal mapping between the LCTV and the frame grabbing board that controls it. Realizing the hologram is further complicated by the coupling that occurs between the phase and amplitude in the LCTV transmittance. A single drive signal (a gray level signal from a framegrabber) determines both amplitude and phase. Since they are not independently controllable (as is true in the ideal SLM), one must deal with the problem of optimizing (in some sense) the hologram based on this constraint. Solutions for the above problems have been found. An algorithm has been developed for field calculations that uses an efficient outer product formulation. Juday's MEDOF (Minimum Euclidean Distance Optimal Filter) algorithm, originally used for filter calculations, has been successfully adapted to handle metrics appropriate for holography. This has solved the problem of optimizing the hologram to the constraints imposed by coupling. Two laboratory methods have been developed for determining an accurate mapping of framegrabber pixels to LCTV pixels. A user-friendly software system has been developed that integrates the hologram calculation and realization process using a simple set of instructions. The computer code and all the laboratory measurement techniques determining SLM parameters have been proven with the production of a high quality test image.

  19. Systematic investigation of non-Boussinesq effects in variable-density groundwater flow simulations.

    PubMed

    Guevara Morel, Carlos R; van Reeuwijk, Maarten; Graf, Thomas

    2015-12-01

    The validity of three mathematical models describing variable-density groundwater flow is systematically evaluated: (i) a model which invokes the Oberbeck-Boussinesq approximation (OB approximation), (ii) a model of intermediate complexity (NOB1) and (iii) a model which solves the full set of equations (NOB2). The NOB1 and NOB2 descriptions have been added to the HydroGeoSphere (HGS) model, which originally contained an implementation of the OB description. We define the Boussinesq parameter ε_ρ = β_ω Δω, where β_ω is the solutal expansivity and Δω is the characteristic difference in solute mass fraction. The Boussinesq parameter ε_ρ is used to systematically investigate three flow scenarios covering a range of free and mixed convection problems: 1) the low Rayleigh number Elder problem (Van Reeuwijk et al., 2009), 2) a convective fingering problem (Xie et al., 2011) and 3) a mixed convective problem (Schincariol et al., 1994). Results indicate that small density differences (ε_ρ ≤ 0.05) produce no apparent changes in the total solute mass in the system, plume penetration depth, center of mass and mass flux, independent of the mathematical model used. Deviations between OB, NOB1 and NOB2 occur for large density differences (ε_ρ > 0.12), where lower description levels will underestimate the vertical plume position and overestimate mass flux. Based on the cases considered here, we suggest the following guidelines for saline convection: the OB approximation is valid for cases with ε_ρ < 0.05, and the full NOB set of equations needs to be used for cases with ε_ρ > 0.10. Whether NOB effects are important in the intermediate region differs from case to case. Copyright © 2015 Elsevier B.V. All rights reserved.
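
    A minimal sketch applying the guideline stated above: compute ε_ρ = β_ω Δω and choose a description level accordingly. The expansivity and mass-fraction values are illustrative, and the thresholds are simply those quoted in the abstract.

      def boussinesq_parameter(beta_omega, delta_omega):
          """Boussinesq parameter eps_rho = beta_omega * delta_omega."""
          return beta_omega * delta_omega

      def suggested_model(eps_rho):
          """Guideline from the study: OB below 0.05, full NOB above 0.10,
          case-by-case judgement in between."""
          if eps_rho < 0.05:
              return "OB approximation"
          if eps_rho > 0.10:
              return "full NOB (NOB2) equations"
          return "intermediate: assess case by case (NOB1 vs NOB2)"

      # Illustrative values: a solutal expansivity of 0.7 and a 3% mass-fraction contrast.
      eps = boussinesq_parameter(0.7, 0.03)
      print(eps, "->", suggested_model(eps))   # 0.021 -> OB approximation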

  20. Coping, problem solving, depression, and health-related quality of life in patients receiving outpatient stroke rehabilitation.

    PubMed

    Visser, Marieke M; Heijenbrok-Kal, Majanka H; Spijker, Adriaan Van't; Oostra, Kristine M; Busschbach, Jan J; Ribbers, Gerard M

    2015-08-01

    To investigate whether patients with high and low depression scores after stroke use different coping strategies and problem-solving skills and whether these variables are related to psychosocial health-related quality of life (HRQOL) independent of depression. Cross-sectional study. Two rehabilitation centers. Patients participating in outpatient stroke rehabilitation (N=166; mean age, 53.06±10.19y; 53% men; median time poststroke, 7.29mo). Not applicable. Coping strategy was measured using the Coping Inventory for Stressful Situations; problem-solving skills were measured using the Social Problem Solving Inventory-Revised: Short Form; depression was assessed using the Center for Epidemiologic Studies Depression Scale; and HRQOL was measured using the five-level EuroQol five-dimensional questionnaire and the Stroke-Specific Quality of Life Scale. Independent samples t tests and multivariable regression analyses, adjusted for patient characteristics, were performed. Compared with patients with low depression scores, patients with high depression scores used less positive problem orientation (P=.002) and emotion-oriented coping (P<.001) and more negative problem orientation (P<.001) and avoidance style (P<.001). Depression score was related to all domains of both general HRQOL (visual analog scale: β=-.679; P<.001; utility: β=-.009; P<.001) and stroke-specific HRQOL (physical HRQOL: β=-.020; P=.001; psychosocial HRQOL: β=-.054, P<.001; total HRQOL: β=-.037; P<.001). Positive problem orientation was independently related to psychosocial HRQOL (β=.086; P=.018) and total HRQOL (β=.058; P=.031). Patients with high depression scores use different coping strategies and problem-solving skills than do patients with low depression scores. Independent of depression, positive problem-solving skills appear to be most significantly related to better HRQOL. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion (BE) and local optimization (LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  2. Evaluation and Implementation of Media-Independent Handover in Hastily Formed Networks

    DTIC Science & Technology

    2013-03-01

    the Media Independent Handover (MIH) in HFNs can be an adequate solution for these problems. MIH could be the solution not only to the mobility...and roaming problems but also to other HFN problems, due to the intelligent layer-two functions it offers. We tried to combine MIH and Session...showed the limitations of MIH and its open source implementation (ODTONE). We were also able to describe the steps needed for the integration of SIP

  3. Comparison of three methods of solution to the inverse problem of groundwater hydrology for multiple pumping stimulation

    NASA Astrophysics Data System (ADS)

    Giudici, Mauro; Casabianca, Davide; Comunian, Alessandro

    2015-04-01

    The basic classical inverse problem of groundwater hydrology aims at determining aquifer transmissivity (T) from measurements of hydraulic head (h), estimates or measures of source terms, and with the least possible knowledge of hydraulic transmissivity. The theory of inverse problems shows that this is an example of an ill-posed problem, for which non-uniqueness and instability (or at least ill-conditioning) might preclude the computation of a physically acceptable solution. One of the methods to reduce the problems with non-uniqueness, ill-conditioning and instability is a tomographic approach, i.e., the use of data corresponding to independent flow situations. The latter might correspond to different hydraulic stimulations of the aquifer, i.e., to different pumping schedules and flux rates. Three inverse methods have been analyzed and tested to profit from the use of multiple sets of data: the Differential System Method (DSM), the Comparison Model Method (CMM) and the Double Constraint Method (DCM). DSM and CMM need h all over the domain, and thus the first step for their application is the interpolation of measurements of h at sparse points. Moreover, they also need knowledge of the source terms (aquifer recharge, well pumping rates) all over the aquifer. DSM is intrinsically based on the use of multiple data sets, which permit writing a first-order partial differential equation for T, whereas CMM and DCM were originally proposed to invert a single data set and have been extended to work with multiple data sets in this work. CMM and DCM are based on Darcy's law, which is used to update an initial guess of the T field with formulas based on a comparison of different hydraulic gradients. In particular, the CMM algorithm corrects the T estimate with the ratio of the observed hydraulic gradient and that obtained with a comparison model which shares the same boundary conditions and source terms as the model to be calibrated, but a tentative T field. On the other hand, the DCM algorithm applies the ratio of the hydraulic gradients obtained for two different forward models, one with the same boundary conditions and source terms as the model to be calibrated, and the other one with prescribed head at the positions where in- or out-flow is known and h is measured. For DCM and CMM, multiple stimulation is used by updating the T field separately for each data set and then combining the resulting updated fields with different possible statistics (arithmetic, geometric or harmonic mean, median, least change, etc.). The three algorithms are tested, and their characteristics and results are compared, with a field data set which was provided by Prof. Fritz Stauffer (ETH) and corresponds to a pumping test in a thin alluvial aquifer in northern Switzerland. Three data sets are available, corresponding to the undisturbed state, to the flow field created by a single pumping well, and to the situation created by a 'hydraulic dipole', i.e., an extraction and an injection well. These data sets permit testing the three inverse methods and the different options which can be chosen for their use.
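
    A hedged, one-dimensional sketch of a CMM-style update as described above: each cell's transmissivity is corrected by a ratio of hydraulic gradients, and the corrections from two independent data sets are merged with a geometric mean. The grid, head fields, and the exact form of the ratio are illustrative assumptions, not the published 2-D algorithm.

      import numpy as np

      def cmm_update(T, h_obs, h_model, eps=1e-12):
          """One Comparison-Model-Method-style correction on a 1-D grid:
          scale T by the ratio of the comparison-model gradient to the observed one
          (illustrative form; the published algorithm works on 2-D fields)."""
          g_obs = np.abs(np.gradient(h_obs)) + eps
          g_mod = np.abs(np.gradient(h_model)) + eps
          return T * g_mod / g_obs

      # Hypothetical data: two independent flow situations (data sets) on 11 cells.
      x = np.linspace(0.0, 1.0, 11)
      T0 = np.full_like(x, 1.0e-3)                    # tentative transmissivity field
      h_obs_sets = [1.0 - 0.8 * x, 1.0 - 0.5 * x**2]  # "observed" heads, two stimulations
      h_mod_sets = [1.0 - 0.6 * x, 1.0 - 0.4 * x**2]  # heads from the comparison models

      # Update separately for each data set, then combine with a geometric mean.
      updates = [cmm_update(T0, ho, hm) for ho, hm in zip(h_obs_sets, h_mod_sets)]
      T_new = np.exp(np.mean(np.log(updates), axis=0))
      print(T_new)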

  4. Using machine learning techniques to automate sky survey catalog generation

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M.; Roden, J. C.; Doyle, R. J.; Weir, Nicholas; Djorgovski, S. G.

    1993-01-01

    We describe the application of machine classification techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Palomar Observatory Sky Survey provides comprehensive photographic coverage of the northern celestial hemisphere. The photographic plates are being digitized into images containing on the order of 10^7 galaxies and 10^8 stars. Since the size of this data set precludes manual analysis and classification of objects, our approach is to develop a software system which integrates independently developed techniques for image processing and data classification. Image processing routines are applied to identify and measure features of sky objects. Selected features are used to determine the classification of each object. GID3* and O-BTree, two inductive learning techniques, are used to automatically learn classification decision trees from examples. We describe the techniques used, the details of our specific application, and the initial encouraging results which indicate that our approach is well-suited to the problem. The benefits of the approach are increased data reduction throughput, consistency of classification, and the automated derivation of classification rules that will form an objective, examinable basis for classifying sky objects. Furthermore, astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems given automatically cataloged data.
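
    A hedged sketch of the decision-tree stage only: the paper uses the GID3* and O-BTree learners, for which a generic scikit-learn tree stands in here, and the object features and their distributions are invented.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(1)

      # Hypothetical measured features per detected object: area, ellipticity, peak/total flux.
      n = 1000
      stars = np.column_stack([rng.normal(12, 2, n), rng.normal(0.1, 0.05, n), rng.normal(0.8, 0.1, n)])
      galaxies = np.column_stack([rng.normal(40, 10, n), rng.normal(0.4, 0.1, n), rng.normal(0.3, 0.1, n)])
      X = np.vstack([stars, galaxies])
      y = np.array(["star"] * n + ["galaxy"] * n)

      # Learn an examinable classification tree from labeled examples.
      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(export_text(tree, feature_names=["area", "ellipticity", "peak_to_total"]))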

  5. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.

  6. Perianth organization and intra-specific floral variability.

    PubMed

    Herrera, J; Arista, M; Ortiz, P L

    2008-11-01

    Floral symmetry and fusion of perianth parts are factors that contribute to fine-tune the match between flowers and their animal pollination vectors. In the present study, we investigated whether the possession of a sympetalous (fused) corolla and bilateral symmetry of flowers translate into decreased intra-specific variability as a result of natural stabilizing selection exerted by pollinators. Average size of the corolla and intra-specific variability were determined in two sets of southern Spanish entomophilous plant species. In the first set, taxa were paired by family to control for the effect of phylogeny (phylogenetically independent contrasts), whereas in the second set species were selected at random. Flower size data from a previous study (with different species) were also used to test the hypothesis that petal fusion contributes to decrease intra-specific variability. In the phylogenetically independent contrasts, floral symmetry was a significant correlate of intra-specific variation, with bilaterally symmetrical flowers showing more constancy than radially symmetrical flowers (i.e. unsophisticated from a functional perspective). As regards petal fusion, species with fused petals were on average more constant than choripetalous species, but the difference was not statistically significant. The reanalysis of data from a previous study yielded largely similar results, with a distinct effect of symmetry on variability, but no effect of petal fusion. The randomly chosen species sample, on the other hand, failed to reveal any significant effect of either symmetry or petal fusion on intra-specific variation. The problem of low statistical power in this kind of analysis, and the difficulty of testing an evolutionary hypothesis that involves phenotypic traits with a high degree of morphological correlation, are discussed.

  7. Strategies for reducing large fMRI data sets for independent component analysis.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R

    2006-06-01

    In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
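
    A small sketch contrasting the two reduction routes discussed above, with toy data sizes: covariance-based eigendecomposition versus an incremental extraction that never forms the covariance matrix. Scikit-learn's IncrementalPCA is used as a stand-in for the CRLS network; it is not the CRLS algorithm itself.

      import numpy as np
      from sklearn.decomposition import IncrementalPCA

      rng = np.random.default_rng(2)
      X = rng.standard_normal((240, 2000))    # toy data: 240 scans x 2000 voxels
      k = 10                                  # number of principal components to keep
      Xc = X - X.mean(axis=0)

      # Conventional route: eigendecomposition of the voxel covariance matrix
      # (manageable here, but intractable when the spatial dimension is ~10^5-10^6).
      cov = Xc.T @ Xc / (Xc.shape[0] - 1)
      evals, evecs = np.linalg.eigh(cov)
      scores_evd = Xc @ evecs[:, ::-1][:, :k]

      # Streaming route: PCs are extracted from the data in chunks, and the full
      # covariance matrix is never built (IncrementalPCA stands in for CRLS-PCA).
      ipca = IncrementalPCA(n_components=k, batch_size=40)
      scores_inc = ipca.fit_transform(X)

      print(scores_evd.shape, scores_inc.shape)   # both (240, 10)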

  8. Combining item response theory with multiple imputation to equate health assessment questionnaires.

    PubMed

    Gu, Chenyang; Gutman, Roee

    2017-09-01

    The assessment of patients' functional status across the continuum of care requires a common patient assessment tool. However, assessment tools that are used in various health care settings differ and cannot be easily contrasted. For example, the Functional Independence Measure (FIM) is used to evaluate the functional status of patients who stay in inpatient rehabilitation facilities, the Minimum Data Set (MDS) is collected for all patients who stay in skilled nursing facilities, and the Outcome and Assessment Information Set (OASIS) is collected if they choose home health care provided by home health agencies. All three instruments or questionnaires include functional status items, but the specific items, rating scales, and instructions for scoring different activities vary between the different settings. We consider equating different health assessment questionnaires as a missing data problem, and propose a variant of predictive mean matching method that relies on Item Response Theory (IRT) models to impute unmeasured item responses. Using real data sets, we simulated missing measurements and compared our proposed approach to existing methods for missing data imputation. We show that, for all of the estimands considered, and in most of the experimental conditions that were examined, the proposed approach provides valid inferences, and generally has better coverages, relatively smaller biases, and shorter interval estimates. The proposed method is further illustrated using a real data set. © 2016, The International Biometric Society.

  9. Probing for quantum speedup in spin-glass problems with planted solutions

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Job, Joshua; Albash, Tameem; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.

    2015-10-01

    The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.

  10. Optics Corrections with LOCO in the Fermilab Booster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Cheng-Yang; Prost, Lionel; Seiya, Kiyomi

    2016-06-01

    The optics of the Fermilab Booster has been corrected with LOCO (Linear Optics from Closed Orbits). However, the first corrections did not show any improvement in capture efficiency at injection. A detailed analysis of the results showed that the problem lay in the MADX optics file. Both the quadrupole and chromatic strengths were originally set as constants, independent of beam energy. However, careful comparison between the measured and calculated tunes and chromaticity shows that these strengths are energy dependent. After the MADX model was modified with these new energy-dependent strengths, the LOCO-corrected lattice was applied to the Booster. The effect of the corrected lattice will be discussed here.

  11. A circular median filter approach for resolving directional ambiguities in wind fields retrieved from spaceborne scatterometer data

    NASA Technical Reports Server (NTRS)

    Schultz, Howard

    1990-01-01

    The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
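
    A minimal sketch of a circular median as described here: the window direction minimizing the (optionally weighted) sum of angular distances to all directions in the window. The window contents and cell weights are illustrative.

      import numpy as np

      def angular_distance(a, b):
          """Smallest absolute difference between two angles, in radians."""
          d = np.abs(a - b) % (2 * np.pi)
          return np.minimum(d, 2 * np.pi - d)

      def circular_median(directions, weights=None):
          """Circular median: the candidate direction minimizing the (weighted)
          sum of angular distances to all directions in the window."""
          directions = np.asarray(directions, dtype=float)
          weights = np.ones_like(directions) if weights is None else np.asarray(weights, float)
          costs = [np.sum(weights * angular_distance(c, directions)) for c in directions]
          return directions[int(np.argmin(costs))]

      # A 3x3 window of wind directions (radians): one ambiguous alias points ~180 deg off.
      window = np.deg2rad([10, 15, 8, 12, 190, 14, 9, 11, 13])
      weights = np.array([1, 1, 1, 1, 0.5, 1, 1, 1, 1])    # e.g. lower weight for a noisy cell
      print(np.rad2deg(circular_median(window, weights)))  # a direction near 10-15 deg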

  12. Simulated Pitot tube designed to detect blockage by ice, volcanic dust, sand, insects and to clear it: phase 1

    NASA Astrophysics Data System (ADS)

    Jackson, David A.

    2014-05-01

    A simulated coaxial Pitot tube has been developed using fibre optic sensors combined with actuators to monitor and maintain its correct operation under different environmental conditions. Experiments are reported showing that the dynamic and static tubes can be cleared of ice. It is also demonstrated that the dynamic tube can be cleared of dust and sand which is not the case for the static tube in the coaxial configuration. An approach is proposed to overcome this problem involving a conventional configuration where the static tube is operated independently orthogonal to the dynamic tube with a second set of sensors and actuators.

  13. Flight program language requirements. Volume 2: Requirements and evaluations

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.

  14. Output control using feedforward and cascade controllers

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An open-loop solution to the output control problem in SISO (single-input, single-output) systems by means of feedforward and cascade controllers is investigated. A simple characterization of feedforward controllers, which achieve steady-state disturbance rejection, is given in a transfer-function setting. Cascade controllers which cause steady-state command tracking are characterized. Disturbance decoupling and command matching controllers are identified. Conditions for existence of feedforward and cascade controllers are given. For unstable systems, it is shown that a stabilizing feedback controller can be used without affecting the feedforward and cascade controllers used for output control; hence, the three controllers can be designed independently. Output control by a combination of feedforward and feedback is discussed.

  15. Patterns of Home and School Behavior Problems in Rural and Urban Settings

    PubMed Central

    Hope, Timothy L; Bierman, Karen L

    2009-01-01

    This study examined the cross-situational patterns of behavior problems shown by children in rural and urban communities at school entry. Behavior problems exhibited in home settings were not expected to vary significantly across urban and rural settings. In contrast, it was anticipated that child behavior at school would be heavily influenced by the increased exposure to aggressive models and deviant peer support experienced by children in urban as compared to rural schools, leading to higher rates of school conduct problems for children in urban settings. Statistical comparisons of the patterns of behavior problems shown by representative samples of 89 rural and 221 urban children provided support for these hypotheses, as significant rural-urban differences emerged in school and not in home settings. Cross-situational patterns of behavior problems also varied across setting, with home-only patterns of problems characterizing more children at the rural site and school-only patterns of behavior problems characterizing more children at the urban sites. In addition, whereas externalizing behavior was the primary school problem exhibited by urban children, rural children displayed significantly higher rates of internalizing problems at school. The implications of these results are discussed for developmental models of behavior problems and for preventive interventions. PMID:19834584

  16. Guidance law development for aeroassisted transfer vehicles using matched asymptotic expansions

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Melamed, Nahum

    1993-01-01

    This report addresses and clarifies a number of issues related to the Matched Asymptotic Expansion (MAE) analysis of skip trajectories, or any class of problems that give rise to inner layers that are not associated directly with satisfying boundary conditions. The procedure for matching inner and outer solutions, and using the composite solution to satisfy boundary conditions, is developed and rigorously followed to obtain a set of algebraic equations for the problem of inclination change with minimum energy loss. A detailed evaluation of the zeroth-order guidance algorithm for aeroassisted orbit transfer is performed. It is shown that by exploiting the structure of the MAE solution procedure, the original problem, which requires the solution of a set of 20 implicit algebraic equations, can be reduced to a problem of 6 implicit equations in 6 unknowns. A solution that is near optimal, requires a minimum of computation, and thus can be implemented in real time and on board the vehicle, has been obtained. Guidance law implementation entails treating the current state as a new initial state and repetitively solving the zeroth-order MAE problem to obtain the feedback controls. Finally, a general procedure is developed for constructing an MAE solution, up to first order, of the Hamilton-Jacobi-Bellman equation based on the method of characteristics. The development is valid for a class of perturbation problems whose solution exhibits two-time-scale behavior. A regular expansion for problems of this type is shown to be inappropriate since it is not valid over a narrow range of the independent variable; that is, it is not uniformly valid. Of particular interest here is the manner in which matching and boundary conditions are enforced when the expansion is carried out to first order. Two cases are distinguished: one where the left boundary condition coincides with, or lies to the right of, the singular region, and another where the left boundary condition lies to the left of the singular region. A simple example is used to illustrate the procedure, where the obtained solution is uniformly valid to O(ε^2). The potential application of this procedure to aeroassisted plane change is also described and partially evaluated.
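
    For readers unfamiliar with the inner/outer/composite machinery, the standard textbook singular-perturbation example (not one of the report's aeroassist equations) shows the same matching logic:

      \epsilon y'' + y' + y = 0, \qquad y(0) = 0, \quad y(1) = 1, \qquad 0 < \epsilon \ll 1
      \text{Outer (away from } x = 0\text{): } y' + y \approx 0 \;\Rightarrow\; y_{\mathrm{out}}(x) = e^{1-x} \text{ (fixed by the right boundary condition)}
      \text{Inner (stretched variable } X = x/\epsilon\text{): } Y'' + Y' = 0 \;\Rightarrow\; Y(X) = B\,(1 - e^{-X}), \text{ and matching to the outer limit gives } B = e
      \text{Composite} = \text{outer} + \text{inner} - \text{common part: } y(x) \approx e^{1-x} - e^{\,1 - x/\epsilon}, \text{ uniformly valid to } O(\epsilon)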

  17. Naming Problems Do Not Reflect a Second Independent Core Deficit in Dyslexia: Double Deficits Explored

    ERIC Educational Resources Information Center

    Vaessen, Anniek; Gerretsen, Patty; Blomert, Leo

    2009-01-01

    The double deficit hypothesis states that naming speed problems represent a second core deficit in dyslexia independent from a phonological deficit. The current study investigated the main assumptions of this hypothesis in a large sample of well-diagnosed dyslexics. The three main findings were that (a) naming speed was consistently related only…

  18. Machine learning methods applied on dental fear and behavior management problems in children.

    PubMed

    Klingberg, G; Sillén, R; Norén, J G

    1999-08-01

    The etiologies of dental fear and dental behavior management problems in children were investigated in a database of information on 2,257 Swedish children 4-6 and 9-11 years old. The analyses were performed using computerized inductive techniques within the field of artificial intelligence. The database held information regarding dental fear levels and behavior management problems, which were defined as outcomes, i.e. dependent variables. The attributes, i.e. independent variables, included data on dental health and dental treatments, information about parental dental fear, general anxiety, socioeconomic variables, etc. The data contained both numerical and discrete variables. The analyses were performed using an inductive analysis program (XpertRule Analyser, Attar Software Ltd, Lancashire, UK) that presents the results in a hierarchic diagram called a knowledge tree. The importance of the different attributes is represented by their position in this diagram. The results show that inductive methods are well suited for analyzing multifactorial and complex relationships in large data sets, and are thus a useful complement to multivariate statistical techniques. The knowledge trees for the two outcomes, dental fear and behavior management problems, were very different from each other, suggesting that the two phenomena are not equivalent. Dental fear was found to be more related to non-dental variables, whereas dental behavior management problems seemed connected to dental variables.

  19. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  20. General topology meets model theory, on p and t

    PubMed Central

    Malliaris, Maryanthe; Shelah, Saharon

    2013-01-01

    Cantor proved in 1874 [Cantor G (1874) J Reine Angew Math 77:258–262] that the continuum is uncountable, and Hilbert’s first problem asks whether it is the smallest uncountable cardinal. A program arose to study cardinal invariants of the continuum, which measure the size of the continuum in various ways. By Gödel [Gödel K (1939) Proc Natl Acad Sci USA 25(4):220–224] and Cohen [Cohen P (1963) Proc Natl Acad Sci USA 50(6):1143–1148], Hilbert’s first problem is independent of ZFC (Zermelo-Fraenkel set theory with the axiom of choice). Much work both before and since has been done on inequalities between these cardinal invariants, but some basic questions have remained open despite Cohen’s introduction of forcing. The oldest and perhaps most famous of these is whether “p = t,” which was proved in a special case by Rothberger [Rothberger F (1948) Fund Math 35:29–46], building on Hausdorff [Hausdorff (1936) Fund Math 26:241–255]. In this paper we explain how our work on the structure of Keisler’s order, a large-scale classification problem in model theory, led to the solution of this problem in ZFC as well as of an a priori unrelated open question in model theory. PMID:23836659

  1. Incorporation of epidemiological findings into radiation protection standards.

    PubMed

    Goldsmith, J R

    In standard setting there is a tendency to use data from experimental studies in preference to findings from epidemiological studies. Yet the epidemiological studies are usually the first and at times the only source of data on such critical effects as cancer, reproductive failure, and chronic cardiac and cardiovascular disease in exposed humans. A critique of the protection offered by current and proposed standards for ionizing and non-ionizing radiation illustrates some of the problems. Similar problems occur with water and air pollutants and with occupational exposures of many types. The following sorts of problems were noted: (a) Consideration of both thermal and non-thermal effects especially of non-ionizing radiation. (b) Interpretation of non-significant results as equivalent to no effect. (c) Accepting author's interpretation of a study, rather than examining its data independently for evidence of hazard. (d) Discounting data on unanticipated effects because of poor fit to preconceptions. (e) Dependence on threshold assumptions and demonstrations of dose-response relationships. (f) Choice of insensitive epidemiological indicators and procedures. (g) Consideration of each study separately, rather than giving weight to the conjunction of evidence from all available studies. These problems may be minimized by greater involvement of epidemiologists and their professional organizations in decisions about health protection.

  2. Cutting planes for the multistage stochastic unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  3. OIL—Output input language for data connectivity between geoscientific software applications

    NASA Astrophysics Data System (ADS)

    Amin Khan, Khalid; Akhter, Gulraiz; Ahmad, Zulfiqar

    2010-05-01

    Geoscientific computing has become so complex that no single software application can perform all the processing steps required to get the desired results. Thus for a given set of analyses, several specialized software applications are required, which must be interconnected for electronic flow of data. In this network of applications the outputs of one application become inputs of other applications. Each of these applications usually involves more than one data type and may have its own data formats, making it incompatible with other applications in terms of data connectivity. Consequently several data format conversion utilities are developed in-house to provide data connectivity between applications. Practically there is no end to this problem, as each time a new application is added to the system, a set of new data conversion utilities needs to be developed. This paper presents a flexible data format engine, programmable through a platform-independent, interpreted language named Output Input Language (OIL). Its unique architecture allows input and output formats to be defined independently of each other by two separate programs. Thus the read and write routines for each format are coded only once, and the data connectivity link between two formats is established by a combination of their read and write programs. This results in fewer programs with no redundancy and maximum reuse, enabling rapid application development and easy maintenance of data connectivity links.
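
    A hedged sketch of the design idea described above (independent reader and writer programs meeting at a neutral in-memory representation), not of the OIL language itself; the formats, field names, and sample data are invented.

      import csv, json, io

      # Neutral in-memory representation: a list of records (dicts of field -> value).
      def read_csv(text):
          """Reader program: CSV text -> neutral records."""
          return list(csv.DictReader(io.StringIO(text)))

      def write_json(records):
          """Writer program: neutral records -> JSON text."""
          return json.dumps(records, indent=2)

      def convert(reader, writer, payload):
          """Any reader can be paired with any writer, so adding a new format
          means writing one reader and one writer, not N conversion utilities."""
          return writer(reader(payload))

      csv_text = "station,depth_m,value\nW-1,10.5,0.031\nW-2,22.0,0.027\n"   # made-up survey data
      print(convert(read_csv, write_json, csv_text))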

  4. Why should we pay more for layout designers?

    NASA Astrophysics Data System (ADS)

    Khan, Samee U.

    2003-12-01

    In this paper, we discuss Passive Optical Network (PON) deployment on an arbitrary grid with guaranteed tolerance of p-1 equipment failures. We show that this problem is in general NP-hard. We propose an algorithm that guarantees a 4-approximation to the optimal deployment, and we further argue that this is the best bound achievable in our case. A basic PON architecture is shown in Figure 1. The main component of a PON is the optical splitter: depending on the direction in which the light is travelling, it either splits the incoming light and distributes it over multiple fibers towards the Optical Network Terminations (ONTs), or combines it into one fiber towards the Optical Line Terminal (OLT). The PON technology uses a double-star architecture; the first star topology centers at the OLT, and the second at the optical splitter.

    PROBLEM DESCRIPTION: We formulate the problem of an optimal (p-1)-fault-tolerant PON Network Layout (PNL) as a graph-theoretic problem. Consider a graph G(V,E) such that V represents the physical locations of the subscribers, the CO, and other locations acquired by the CO to expand its network, and E represents the communication lines between pairs of vertices. If there is no direct communication line c(i,j) between Vi and Vj, we consider the shortest path between them, measured in terms of simple distance or cost constraints. Without loss of generality we assume that c(i,j) = c(j,i). For simplicity we do not further subdivide V into the obvious categories representing the locations of OLTs, ONTs, the CO, the optical splitters and the subscribers. We can now formulate the PNL problem as follows: "Given an undirected graph G, find the locations of ONTs and splitters such that the cost of the equipment is minimized and, for QoS, the maximum distance from an ONT to its p-th splitter and from a splitter to its p-th OLT is minimized." We assume that the OLT resides inside the CO. The problem definition does not consider optimizing the connection from an ONT to the customer premises, because the distance from an ONT to the premises is negligibly small and fault tolerance for a failed ONT can be provided by re-routing the connection through a nearby ONT. We treat ONT and ONU as essentially the same entity.

    GENERALIZED PNL IS NP-HARD: Our problem definition consists of two major optimization steps, i.e. ONTs to splitters and splitters to the OLT. Showing the hardness of one optimization step is sufficient to show that the overall problem is hard. The first phase, minimizing the maximum distance from ONTs to splitters while reducing the equipment cost, can be modelled by associating costs with the vertices V (equipment cost) and the edges E (fiber cost). The problem thus reduces to finding the smallest number of minimum-cost edges from a splitter to an ONT such that the chosen set of edges identifies vertices that connect in a min-max fashion.

    Lemma 1. Let U be a maximal k-independent set such that |U| >= k; then U is a k-dominating set in G^2. Proof: follows from Definitions 1 and 2.

    Lemma 2. Let V be a k-dominating set in G; then |U| <= |V| holds for any k-independent set U in G^2. Proof: Consider the two non-trivial cases. (a) U is not contained in V. Pick a node u at random such that u belongs to U - V. A set of nodes S can then be defined such that the neighbourhood N(u) of u is not contained in V, i.e. S = N(u) ∩ V. Let L be the set of nodes adjacent to S; then any node v in S ∪ L is contained in G^2 (Definition 2) and is adjacent to at most k vertices in S ∪ L. (b) U is contained in V. In this case we can define a graph G containing the vertices V - (S ∪ L). The lemma then holds if V' = V - (S ∪ L) = V - S, where V' is a k-dominating set in G. Pick a random node v in G not contained in V; then v belongs to V - V', and v has at least k neighbours in G not present in V'. Since we assumed that V' is a k-dominating set in G, no such neighbour belongs to G (by the definition of G). Thus N(v) ∩ V' is a subset of V'.

    Theorem 1. Assuming P != NP, for any arbitrary fixed a <= p there does not exist a polynomial-time algorithm for PNL. Proof: Suppose we have an algorithm A which gives a solution for the PNL problem; then a solution for the dominating set problem can be obtained. We give a polynomial-time reduction from the dominating set problem to PNL. Let |V| be the pairwise neighbourhood graph such that, picking any vertex v in V, N(v) ∩ V is empty. The graph on which to find the PNL (Figure 2) can then be computed by assigning edge costs as follows: c(u,v) = 1 if u, v belong to V; c(u,v) = 1 if u belongs to V and v belongs to N(u); and c(u,v) = f(|V|) + epsilon otherwise. The choice of epsilon determines the epsilon-approximation factor of the final layout. If G has a dominating set of size d, then the solution for PNL contains a set J such that d ∪ N(v) is a set of nodes with cardinality d + (a-1), where a <= p. For a = 1 the problem reduces to the minimum k-center problem, so we consider a >= 2. Pick any node v in N(v); it is clear that v has only one neighbour in V', which is at a distance of 1 (triangle inequality). If the distance is not equal to 1, then v must be covered by a neighbour within a distance of 1. Let Z = P ∩ V be such that Z is a subset of V and contains d nodes, with d = |P| - (a-1)|V|; then any node v belonging to V - Z must have at least a nodes in P within a distance of 1, but by the definition and the previous argument only a-1 nodes can form the neighbourhood. Thus Z is a dominating set of size d, but by Lemma 2, G cannot contain a dominating set of size d.

    OUR APPROACH: We assume that the edges of the graph G satisfy the triangle inequality. Let Si represent a set of weighted vertices w(v) such that, once picked, they form a clique in the graph G. The PNL algorithm takes G as input and returns G' (the final PNL):

      1. Construct G1^2, G2^2, ..., Gm^2.
      2. Compute a maximal independent set I (Mi) in each Gi^2.
      3. Find the smallest index i such that |Mi| <= k, say Mj.
      4. Use Mj as the input for step 5.
      5. Construct G1j^2, G2j^2, ..., Gmj^2.
      6. Compute a maximal independent set I (Mij) in each Gij^2.
      7. Compute Si = { si(u) | u belongs to Mij }.
      8. Find the minimum index i such that w(Si) <= w(D).
      9. Return Sj.
      10. Compute G' = min over Mij contained in V of [ max_si sum_{i=0}^{|Sj|} si ].
      11. Return G'.

    Theorem 2. The PNL algorithm is complete and will identify a solution if one exists. Proof: trivial; omitted for space. Theorem 3. The PNL algorithm achieves a 4-approximation of the optimal layout, and this factor is a lower bound. Proof: omitted for space; the basic argument is that picking a node v within 2-epsilon in G^2 would require its neighbours to be picked in G^4, so PNL can be no better than a (4-epsilon)-approximation for any epsilon > 0.

    EXPERIMENTS: Initial experiments show promising results, with savings in fiber and equipment cost; owing to space constraints and the early phase of the experiments, the results are not included here. There is much more detail to the formal proofs; we hope the idea is still conveyed. Two figures (Figures 1 and 2) accompany the text.
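
    As a rough, illustrative sketch of the combinatorial core of this approach (not the authors' implementation), the Python fragment below builds the square of a graph and takes a maximal independent set in it, which by Lemma 1 serves as a dominating set of candidate splitter sites in G^2; the function names, the use of networkx, and the toy grid are assumptions.

        # Illustrative sketch of the graph-square / maximal-independent-set step of the
        # PNL algorithm. Assumes networkx; not the authors' implementation.
        import networkx as nx

        def graph_square(G):
            """Return G^2: same vertices, with edges between nodes at distance <= 2 in G."""
            G2 = nx.Graph()
            G2.add_nodes_from(G.nodes)
            for u in G:
                for v in nx.single_source_shortest_path_length(G, u, cutoff=2):
                    if u != v:
                        G2.add_edge(u, v)
            return G2

        def candidate_splitter_sites(G):
            """Maximal independent set of G^2, used as candidate facility (splitter) sites."""
            return nx.maximal_independent_set(graph_square(G))

        if __name__ == "__main__":
            G = nx.grid_2d_graph(4, 4)  # toy deployment grid
            print(candidate_splitter_sites(G))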

  5. How does emotion influence different creative performances? The mediating role of cognitive flexibility.

    PubMed

    Lin, Wei-Lun; Tsai, Ping-Hsun; Lin, Hung-Yu; Chen, Hsueh-Chih

    2014-01-01

    Cognitive flexibility is proposed to be one of the factors underlying how positive emotions can improve creativity. However, previous works have seldom set up or empirically measured an independent index to demonstrate its mediating effect, nor have they investigated its mediating role on different types of creative performances, which involve distinct processes. In this study, 120 participants were randomly assigned to positive, neutral or negative affect conditions. Their levels of cognitive flexibility were then measured by a switch task. Finally, their creative performances were calibrated by either an open-ended divergent thinking test or a closed-ended insight problem-solving task. The results showed that positive emotional states could reduce switch costs and enhance both types of creative performances. However, cognitive flexibility exhibited a full mediating effect only on the relationship between positive emotion and insight problem solving, but not between positive emotion and divergent thinking. Divergent thinking was instead more associated with arousal level. These results suggest that emotions might influence different creative performances through distinct mechanisms.

  6. Concept of a Pitot tube able to detect blockage by ice, volcanic ash, sand and insects, and to clear the tube

    NASA Astrophysics Data System (ADS)

    Jackson, David A.

    2015-12-01

    A conceptual coaxial Pitot tube (PT) has been developed using fiber optic sensors combined with actuators to monitor and maintain its correct operation under different environmental conditions. Experiments were performed showing that the dynamic and static tubes can be cleared of ice. It was also demonstrated that the dynamic tube could be cleared of dust and sand which was not the case for the static tube in the coaxial configuration. An approach was proposed to overcome this problem involving a conventional configuration where the static tube was operated independently orthogonal to the dynamic tube, and a second set of sensors and actuators was used. Sensors and associated actuators were developed for temperature and intensity for a linear PT. The aim of this work is to propose a solution for a problem that has caused the loss of the lives of many passengers and crew of aircraft. Resources were not available to test a full implementation of a PT incorporating the proposed modifications.

  7. Cross-Identification of Astronomical Catalogs on Multiple GPUs

    NASA Astrophysics Data System (ADS)

    Lee, M. A.; Budavári, T.

    2013-10-01

    One of the most fundamental problems in observational astronomy is the cross-identification of sources. Observations are made in different wavelengths, at different times, and from different locations and instruments, resulting in a large set of independent observations. The scientific outcome is often limited by our ability to quickly perform meaningful associations between detections. The matching, however, is difficult scientifically, statistically, as well as computationally. The former two require detailed physical modeling and advanced probabilistic concepts; the latter is due to the large volumes of data and the problem's combinatorial nature. In order to tackle the computational challenge and to prepare for future surveys, whose measurements will be exponentially increasing in size past the scale of feasible CPU-based solutions, we developed a new implementation which addresses the issue by performing the associations on multiple Graphics Processing Units (GPUs). Our implementation utilizes up to 6 GPUs in combination with the Thrust library to achieve an over 40x speed-up versus the previous best implementation running on a multi-CPU SQL Server.
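
    To make the association step concrete, here is a small CPU-side sketch that matches two catalogs by nearest neighbour within an angular tolerance using a k-d tree on unit vectors; it only illustrates the matching task and is not the multi-GPU Thrust implementation described above, and the catalog arrays and tolerance are assumptions.

        # CPU-side sketch of catalog cross-matching by nearest neighbour within an
        # angular radius. Not the GPU/Thrust implementation; inputs are placeholders.
        import numpy as np
        from scipy.spatial import cKDTree

        def radec_to_unit(ra_deg, dec_deg):
            """Convert RA/Dec in degrees to unit vectors on the sphere."""
            ra, dec = np.radians(ra_deg), np.radians(dec_deg)
            return np.column_stack((np.cos(dec) * np.cos(ra),
                                    np.cos(dec) * np.sin(ra),
                                    np.sin(dec)))

        def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
            """Pair each catalog-1 source with its nearest catalog-2 source within the radius."""
            xyz1, xyz2 = radec_to_unit(ra1, dec1), radec_to_unit(ra2, dec2)
            max_chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
            dist, j = cKDTree(xyz2).query(xyz1, k=1)
            keep = dist < max_chord
            return np.nonzero(keep)[0], j[keep]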

  8. Analysis of x-ray hand images for bone age assessment

    NASA Astrophysics Data System (ADS)

    Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.

    1990-09-01

    In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying each of a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and a few false edges in the background. Hence the use of all available knowledge about the problem domain is needed to build a more general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and to guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and extracting the features needed in the classification process.

  9. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858

  10. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

    We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low order perturbations. This analysis provides the explanation for the stability problems associated with the split field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  11. TOPLHA and ALOHA: comparison between Lower Hybrid wave coupling codes

    NASA Astrophysics Data System (ADS)

    Meneghini, Orso; Hillairet, J.; Goniche, M.; Bilato, R.; Voyer, D.; Parker, R.

    2008-11-01

    TOPLHA and ALOHA are wave coupling simulation tools for LH antennas. Both codes are able to account for realistic 3D antenna geometries and use a 1D plasma model. In the framework of a collaboration between MIT and CEA laboratories, the two codes have been extensively compared. In TOPLHA the EM problem is self consistently formulated by means of a set of multiple coupled integral equations having as domain the triangles of the meshed antenna surface. TOPLHA currently uses the FELHS code for modeling the plasma response. ALOHA instead uses a mode matching approach and its own plasma model. Comparisons have been done for several plasma scenarios on different antenna designs: an array of independent waveguides, a multi-junction antenna and a passive/active multi-junction antenna. When simulating the same geometry and plasma conditions the two codes compare remarkably well both for the reflection coefficients and for the launched spectra. The different approach of the two codes to solve the same problem strengthens the confidence in the final results.

  12. Side-effects and technical problems in cytapheresis with cell separators. Results of a retrospective multicenter study.

    PubMed

    Kretschmer, V

    1987-09-01

    On the basis of a survey, the acute side-effects and technical problems in a total of 77,525 cytaphereses (IFC 36,530, CFC 40,995) in donors at 39 hemapheresis centers were retrospectively analysed statistically. In general, relevant donor side-effects (0.78%-1.05%) were more rare than the primary donor-independent disturbances (1.65%-2.63%). The donor side-effects predominated merely with the use of the cell separators Haemonetics M30/Belco (1.06% vs. 0.57%). These were mainly circulatory reactions (0.83%), which were generally much more frequent with IFC (0.54%) than with CFC (IBM/Cobe 0.11%, CS-3000 0.19%). Potentially fatal complications were not reported. The frequency of side-effects, disturbances and discontinuations correlated inversely with the separation rate of the individual centers per method. Centers in which two or three methods were applied simultaneously reported a higher frequency of side-effects and disturbances. Hemolysis was only observed with IFC (0.09%), but not with the use of the Haemonetics V50. The greater susceptibility to disturbances of technical/methodological/operational origin essentially results from the more elaborate, but not yet perfected technology, including computer control and monitoring, as well as defects in the production of the much more complicated disposable sets. Thus the highest rate of discontinuations was calculated for the system which is so far the most sophisticated technically (CS-3000, 1.85%). Although the primary donor-independent problems sometimes correlate directly with the manifestation of donor side-effects, the greater technological sophistication of automatically controlled and monitored systems cannot be dispensed with, since only in this way can potentially fatal risks for the donors be largely ruled out.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. Sleep and Depression in Postpartum Women: A Population-Based Study

    PubMed Central

    Dørheim, Signe Karen; Bondevik, Gunnar Tschudi; Eberhard-Gran, Malin; Bjorvatn, Bjørn

    2009-01-01

    Study Objectives: (1) To describe the prevalence of and risk factors for postpartum maternal sleep problems and depressive symptoms simultaneously, (2) identify factors independently associated with either condition, and (3) explore associations between specific postpartum sleep components and depression. Design: Cross-sectional. Setting: Population-based. Participants: All women (n = 4191) who had delivered at Stavanger University Hospital from October 2005 to September 2006 were mailed a questionnaire seven weeks postpartum. The response rate was 68% (n = 2830). Interventions: None. Measurements and results: Sleep was measured using the Pittsburgh Sleep Quality Index (PSQI), and depressive symptoms using the Edinburgh Postnatal Depression Scale (EPDS). The prevalence of sleep problems, defined as PSQI > 5, was 57.7%, and the prevalence of depression, defined as EPDS ≥ 10, was 16.5%. The mean self-reported nightly sleep duration was 6.5 hours and sleep efficiency 73%. Depression, previous sleep problems, being primiparous, not exclusively breastfeeding, or having a younger or male infant were factors associated with poor postpartum sleep quality. Poor sleep was also associated with depression when adjusted for other significant risk factors for depression, such as poor partner relationship, previous depression, depression during pregnancy and stressful life events. Sleep disturbances and subjective sleep quality were the aspects of sleep most strongly associated with depression. Conclusions: Poor sleep was associated with depression independently of other risk factors. Poor sleep may increase the risk of depression in some women, but as previously known risk factors were also associated, mothers diagnosed with postpartum depression are not merely reporting symptoms of chronic sleep deprivation. Citation: Dørheim SK; Bondevik GT; Eberhard-Gran M; Bjorvatn B. Sleep and depression in postpartum women: a population-based study. SLEEP 2009;32(7):847-855. PMID:19639747

  14. The traveling salesman problem as a new screening test in early Alzheimer's disease: an exploratory study. Visual problem-solving in AD.

    PubMed

    De Vreese, Luc Pieter; Pradelli, Samantha; Massini, Giulia; Buscema, Massimo; Savarè, Rita; Grossi, Enzo

    2005-12-01

    In the clinical setting, brief general mental status tests tend to detect early-stage Alzheimer's disease (AD) less well than more specific cognitive tests. Some preliminary information was collected on the diagnostic accuracy of the Traveling Salesman Problem (TSP) compared with the Mini-Mental State Examination (MMSE) in recognizing early AD from normal aging. Fifteen AD outpatients (mean +/- SD MMSE: 24.45 +/- 2.61) and 30 age- and education-matched controls were submitted in a single blind protocol to a paper-and-pencil visually-presented version of the TSP, containing a random array of 30 points (TSP30). The task consisted of drawing the shortest continuous path, passing through each point once and only once, and returning to the starting point. Path lengths for subjects' solutions were computed and compared with the optimal solution given by a specific evolutionary algorithm called GenD. TSP30 discriminated significantly better between AD subjects and controls (ROC curve AUC = 0.976; 95% CI 0.94-1.01) compared with the MMSE corrected for age and education (ROC curve AUC = 0.877; 95% CI 0.74-1.005). A path length of 478.2354, taken as "cut-off point", correctly classified subjects with a sensitivity of 93.3% and a specificity of 99.3%, whereas a score corrected for age and education of 25.85 on the MMSE had a sensitivity of 73.3% and a specificity of 96.7%. The TSP seems to be particularly sensitive to early AD and independent of patient's age and educational level. The high diagnostic ability, simplicity, and independence of age and education make the TSP promising as a screening test for early AD.
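
    A minimal sketch of the scoring idea, under assumed inputs: it computes the length of the closed path a subject draws through 30 random points and compares it with a simple nearest-neighbour tour; the paper's reference solutions come from the GenD evolutionary algorithm, which is not reproduced here.

        # Sketch of TSP30-style scoring: closed-tour length of a drawn path versus a
        # heuristic baseline (the GenD algorithm used in the study is not reproduced).
        import numpy as np

        def tour_length(points, order):
            """Length of the closed tour visiting points (n x 2) in the given order."""
            p = points[np.asarray(order)]
            return float(np.sum(np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1)))

        def nearest_neighbour_tour(points, start=0):
            """Greedy tour used here only as a stand-in reference solution."""
            unvisited = set(range(len(points))) - {start}
            order = [start]
            while unvisited:
                last = points[order[-1]]
                nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
                order.append(nxt)
                unvisited.remove(nxt)
            return order

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 100, size=(30, 2))  # random array of 30 points
        drawn = rng.permutation(30)              # stand-in for a subject's drawn path
        print(tour_length(pts, drawn), tour_length(pts, nearest_neighbour_tour(pts)))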

  15. Ill-defined problem solving in amnestic mild cognitive impairment: linking episodic memory to effective solution generation.

    PubMed

    Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M

    2015-02-01

    It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems which are those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age and education matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002) that scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Priority setting in general practice: health priorities of older patients differ from treatment priorities of their physicians.

    PubMed

    Voigt, Isabel; Wrede, Jennifer; Diederichs-Egidi, Heike; Dierks, Marie-Luise; Junius-Walker, Ulrike

    2010-12-01

    To ascertain health priorities of older patients and treatment priorities of their general practitioners (GP) on the basis of a geriatric assessment and to determine the agreement between these priorities. The study included a sample of 9 general practitioners in Hannover, Germany, and a stratified sample of 35 patients (2-5 patients per practice, 18 female, average age 77.7 years). Patients were given a geriatric assessment using the Standardized Assessment for Elderly Patients in Primary Care (STEP) to gain an overview of their health and everyday problems. On the basis of these results, patients and their physicians independently rated the importance of each problem disclosed by the assessment. Whereas patients assessed the importance for their everyday lives, physicians assessed the importance for patients' medical care and patients' everyday lives. Each patient had a mean ± standard deviation of 18 ± 9.2 health problems. Thirty five patients disclosed a total of 634 problems; 537 (85%) were rated by patients and physicians. Of these 537 problems, 332 (62%) were rated by patients and 334 (62%) by physicians as important for patients' everyday lives. In addition, 294 (55%) were rated by physicians as important for patients' medical care. Although these proportions of important problems were similar between patients and physicians, there was little overlap in the specific problems that each group considered important. The chance-corrected agreement (Cohen κ) between patients and physicians on the importance of problems for patients' lives was low (κ=0.23). Likewise, patients and physicians disagreed on the problems that physicians considered important for patients' medical care (κ=0.18, P<0.001 for each). The low agreement on health and treatment priorities between patients and physicians necessitates better communication between the two parties to strengthen mutual understanding.

  17. The heterogeneity of attention-deficit/hyperactivity disorder symptoms and conduct problems: Cognitive inhibition, emotion regulation, emotionality, and disorganized attachment.

    PubMed

    Forslund, Tommie; Brocki, Karin C; Bohlin, Gunilla; Granqvist, Pehr; Eninger, Lilianne

    2016-09-01

    This study examined the contributions of several important domains of functioning to attention-deficit/hyperactivity disorder (ADHD) symptoms and conduct problems. Specifically, we investigated whether cognitive inhibition, emotion regulation, emotionality, and disorganized attachment made independent and specific contributions to these externalizing behaviour problems from a multiple pathways perspective. The study included laboratory measures of cognitive inhibition and disorganized attachment in 184 typically developing children (M age = 6 years, 10 months, SD = 1.7). Parental ratings provided measures of emotion regulation, emotionality, and externalizing behaviour problems. Results revealed that cognitive inhibition, regulation of positive emotion, and positive emotionality were independently and specifically related to ADHD symptoms. Disorganized attachment and negative emotionality formed independent and specific relations to conduct problems. Our findings support the multiple pathways perspective on ADHD, with poor regulation of positive emotion and high positive emotionality making distinct contributions to ADHD symptoms. More specifically, our results support the proposal of a temperamentally based pathway to ADHD symptoms. The findings also indicate that disorganized attachment and negative emotionality constitute pathways specific to conduct problems rather than to ADHD symptoms. © 2016 The British Psychological Society.

  18. Individualized Math Problems in Percent. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems concerned with computing percents.…

  19. Individualized Math Problems in Algebra. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic, and contains problems related to diverse vocations. Solutions are provided for all problems. Problems presented in this package concern ratios used in food…

  20. Individualized Math Problems in Fractions. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This package contains problems involving computation with common…

  1. Individualized Math Problems in Geometry. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. The volume contains problems in applied geometry. Measurement of…

  2. Individualized Math Problems in Measurement and Conversion. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems involving measurement, computation of…

  3. Individualized Math Problems in Integers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume presents problems involving operations with positive and…

  4. The Effects of Cognitive Style and Piagetian Logical Reasoning on Solving a Propositional Relation Algebra Word Problem.

    ERIC Educational Resources Information Center

    Nasser, Ramzi; Carifio, James

    The purpose of this study was to find out whether students perform differently on algebra word problems that have certain key context features and entail proportional reasoning, relative to their level of logical reasoning and their degree of field dependence/independence. Field-independent students tend to restructure and break stimuli into parts…

  5. Men in Limbo: Former Students with Special Educational Needs Caught between Economic Independence and Social Security Dependence

    ERIC Educational Resources Information Center

    Skjong, Gerd; Myklebust, Jon Olav

    2016-01-01

    Individuals in their mid-thirties are expected to be employed and economically independent. However, people with disabilities and health problems--for example, former students with special educational needs (SEN)--may have problems in this domain of adult life. In Norway, individuals with SEN frequently rely on social security and support measures…

  6. Life Management: Moving Out! Solving Practical Problems for Independent Living. Utah Home Economics and Family Life Curriculum Guide.

    ERIC Educational Resources Information Center

    Utah State Office of Education, Salt Lake City.

    This guide, which has been developed for Utah's home economics and family life education program, contains materials for use in teaching a life management course emphasizing the problem-solving skills required for independent living. Discussed first are the assumptions underlying the curriculum, development of the guide, and suggestions for its…

  7. Diving at altitude: from definition to practice.

    PubMed

    Egi, S Murat; Pieri, Massimo; Marroni, Alessandro

    2014-01-01

    Diving above sea level has different motivations for recreational, military, commercial and scientific activities. Despite the apparently wide practice of inland diving, there are three major discrepancies about diving at altitude: the threshold elevation that requires changes in sea level procedures; the upper altitude limit of the applicability of these modifications; and the independent validation of altitude adaptation methods of decompression algorithms. The first problem is solved by converting the normal fluctuation in barometric pressure to an altitude equivalent. Based on the barometric variations recorded from a meteorological center, it is possible to suggest 600 meters as a threshold for classifying a dive as an "altitude" dive. The second problem is solved by proposing the threshold altitude of aviation (2,400 meters) to classify "high" altitude dives. The DAN (Divers Alert Network) Europe diving database (DB) is analyzed to solve the third problem. The database consists of 65,050 dives collected from different dive computers. A total of 1,467 dives were found to be classified as altitude dives. However, by checking the elevation according to the logged geographical coordinates, 1,284 dives were disqualified because the altitude setting had been used as a conservative setting by the dive computer despite the fact that the dive was made at sea level. Furthermore, according to the description put forward in this manuscript, 72 dives were disqualified because the surface level elevation is lower than 600 meters. The number of field data points (111 dives) is still too low to be used to validate any particular method of altitude adaptation of decompression algorithms.

  8. Improved education in musculoskeletal conditions is necessary for all doctors.

    PubMed Central

    Akesson, Kristina; Dreinhöfer, Karsten E.; Woolf, A. D.

    2003-01-01

    It is likely that everyone will, at some time, suffer from a problem related to the musculoskeletal system, ranging from a very common problem such as osteoarthritis or back pain to severely disabling limb trauma or rheumatoid arthritis. Many musculoskeletal problems are chronic conditions. The most common symptoms are pain and disability, with an impact not only on individuals' quality of life but also, importantly, on people's ability to earn a living and be independent. It has been estimated that one in four consultations in primary care is caused by problems of the musculoskeletal system and that these conditions may account for up to 60% of all disability pensions. In contrast, teaching at undergraduate and graduate levels--and the resulting competence and confidence of many doctors--do not reflect the impact of these conditions on individuals and society. Many medical students do not have any clinical training in assessing patients with bone and joint problems. Under the umbrella of the Bone and Joint Decade 2000-2010, experts from all parts of the world with an interest in teaching have developed recommendations for an undergraduate curriculum to improve the teaching of musculoskeletal conditions in medical schools. The goal for each medical school should be a course in musculoskeletal medicine concentrating on clinical assessment, common outpatient musculoskeletal problems and recognition of emergencies. Improving competency in the management of musculoskeletal problems within primary care settings through improved education is the next aim, but there are needs for improvement for all professionals and at all levels within the health care system. PMID:14710510

  9. Screening of faba bean (Vicia faba L.) accessions to acidity and aluminium stresses

    PubMed Central

    Stoddard, Frederick L.

    2017-01-01

    Background Faba bean is an important starch-based protein crop produced worldwide. Soil acidity and aluminium toxicity are major abiotic stresses affecting its production, so in regions where soil acidity is a problem, there is a gap between the potential and actual productivity of the crop. Hence, we set out to evaluate acidity and aluminium tolerance in a range of faba bean germplasm using solution culture and pot experiments. Methods A set of 30 accessions was collected from regions where acidity and aluminium are or are not problems. The accessions were grown in solution culture and a subset of 10 was grown first in peat and later in perlite potting media. In solution culture, morphological parameters including taproot length, root regrowth and root tolerance index were measured, and in the pot experiments the key measurements were taproot length, plant biomass, chlorophyll concentration and stomatal conductance. Results Responses to acidity and aluminium were apparently independent. Accessions Dosha and NC 58 were tolerant to both stresses. Kassa and GLA 1103 were tolerant to acidity showing less than 3% reduction in taproot length. Aurora and Messay were tolerant to aluminium. Babylon was sensitive to both, with up to 40% reduction in taproot length from acidity and no detectable recovery from Al3+ challenge. Discussion The apparent independence of the responses to acidity and aluminium is in agreement with the previous research findings, suggesting that crop accessions separately adapt to H+ and Al3+ toxicity as a result of the difference in the nature of soil parent materials where the accession originated. Differences in rankings between experiments were minor and attributable to heterogeneity of seed materials and the specific responses of accessions to the rooting media. Use of perlite as a potting medium offers an ideal combination of throughput, inertness of support medium, access to leaves for detection of their stress responses, and harvest of clean roots for evaluation of their growth. PMID:28194315

  10. Screening of faba bean (Vicia faba L.) accessions to acidity and aluminium stresses.

    PubMed

    Belachew, Kiflemariam Y; Stoddard, Frederick L

    2017-01-01

    Faba bean is an important starch-based protein crop produced worldwide. Soil acidity and aluminium toxicity are major abiotic stresses affecting its production, so in regions where soil acidity is a problem, there is a gap between the potential and actual productivity of the crop. Hence, we set out to evaluate acidity and aluminium tolerance in a range of faba bean germplasm using solution culture and pot experiments. A set of 30 accessions was collected from regions where acidity and aluminium are or are not problems. The accessions were grown in solution culture and a subset of 10 was grown first in peat and later in perlite potting media. In solution culture, morphological parameters including taproot length, root regrowth and root tolerance index were measured, and in the pot experiments the key measurements were taproot length, plant biomass, chlorophyll concentration and stomatal conductance. Responses to acidity and aluminium were apparently independent. Accessions Dosha and NC 58 were tolerant to both stresses. Kassa and GLA 1103 were tolerant to acidity showing less than 3% reduction in taproot length. Aurora and Messay were tolerant to aluminium. Babylon was sensitive to both, with up to 40% reduction in taproot length from acidity and no detectable recovery from Al3+ challenge. The apparent independence of the responses to acidity and aluminium is in agreement with the previous research findings, suggesting that crop accessions separately adapt to H+ and Al3+ toxicity as a result of the difference in the nature of soil parent materials where the accession originated. Differences in rankings between experiments were minor and attributable to heterogeneity of seed materials and the specific responses of accessions to the rooting media. Use of perlite as a potting medium offers an ideal combination of throughput, inertness of support medium, access to leaves for detection of their stress responses, and harvest of clean roots for evaluation of their growth.

  11. Our 1% Problem: Independent Schools and the Income Gap

    ERIC Educational Resources Information Center

    Bartels, Fred

    2012-01-01

    The subject of independent schools and inequality is rife with contradictions. In some ways, independent schools work to ameliorate inequities. In other ways, they reinforce and exacerbate them. Those in independent schools who work on social justice, equity, and diversity issues deal with these contradictions every day. Most believe, most of the…

  12. Field Dependence/Independence Cognitive Style and Problem Posing: An Investigation with Sixth Grade Students

    ERIC Educational Resources Information Center

    Nicolaou, Aristoklis Andreas; Xistouri, Xenia

    2011-01-01

    Field dependence/independence cognitive style was found to relate to general academic achievement and specific areas of mathematics; in the majority of studies, field-independent students were found to be superior to field-dependent students. The present study investigated the relationship between field dependence/independence cognitive style and…

  13. Development Parenting Model to Increase the Independence of Children

    ERIC Educational Resources Information Center

    Sunarty, Kustiah; Dirawan, Gufran Darma

    2015-01-01

    This study examines parenting and the child's independence model. The research problem is whether there is a relationship between parenting and the child's independence. The purpose of research is to determine: firstly, the type of parenting in an effort to increase the independence of the child; and the relationship between parenting models and…

  14. Structural similarity based kriging for quantitative structure activity and property relationship modeling.

    PubMed

    Teixeira, Ana L; Falcao, Andre O

    2014-07-28

    Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching), which creates an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested on two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second contained molecules for which aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and error estimation for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size, no reparametrization is necessary when compounds are added to or removed from the set, and increasing the size of the database will consequently improve the quality of the estimations. Finally it is shown that this model can be used for checking the consistency of measured data and for guiding an extension of the training set by determining the regions of the molecular space for which new experimental measurements could be used to maximize the model's predictive performance.
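
    The interpolation step can be illustrated with a minimal ordinary-kriging sketch over a precomputed similarity matrix; treating similarity directly as the covariance, the nugget value, and the function signature are assumptions made for exposition, not the authors' parameterization.

        # Minimal ordinary-kriging predictor over a similarity-derived covariance matrix.
        # A sketch only: the covariance model and regularization are assumptions.
        import numpy as np

        def ordinary_kriging_predict(K_train, y_train, k_star, nugget=1e-8):
            """K_train: n x n similarity between training molecules;
            k_star: length-n similarity of the query molecule to the training set."""
            n = len(y_train)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = K_train + nugget * np.eye(n)  # regularized covariance block
            A[:n, n] = 1.0                            # Lagrange-multiplier column
            A[n, :n] = 1.0                            # unbiasedness constraint row
            b = np.append(k_star, 1.0)
            w = np.linalg.solve(A, b)[:n]             # kriging weights
            return float(w @ y_train)                 # predicted property/activity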

  15. The Value of Clinical Jazz: Teaching Critical Reflection on, in, and Toward Action.

    PubMed

    Casapulla, Sharon; Longenecker, Randall; Beverly, Elizabeth A

    2016-05-01

    Clinical Jazz is a small-group strategy in medical education designed to develop interpersonal skills and improve doctor-patient and interprofessional relationships. The purpose of this study was to explore medical students' and faculty facilitators' perceived value of Clinical Jazz. We conducted a modified Nominal Group Process with participating medical students (n=21), faculty facilitators (n=5), and research team members (n=3). Students and faculty facilitators independently answered the question, "What do you value about Clinical Jazz?" We then conducted content and thematic analyses on the resulting data. Three themes emerged during analysis: (1) students and faculty appreciated the opportunity to learn and practice a thoughtful and structured process for problem solving, (2) students and faculty valued the safety of the group process in sharing a diversity of perspectives on topics in medicine, and (3) students and faculty acknowledged the importance of addressing real and challenging problems that are rarely addressed in formal lectures and other planned small-group settings. Clinical Jazz provides students and faculty with the opportunity to address the hidden and/or informal curriculum in medical education, while providing a safe space and time to solve important clinical and interprofessional problems.

  16. Irrelevance Reasoning in Knowledge Based Systems

    NASA Technical Reports Server (NTRS)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.

  17. Problems of low-parameter equations of state

    NASA Astrophysics Data System (ADS)

    Petrik, G. G.

    2017-11-01

    The paper focuses on the system approach to problems of low-parameter equations of state (EOS). It is a continuation of the investigations in the field of substantiated prognosis of properties on two levels, molecular and thermodynamic. Two sets of low-parameter EOS have been considered, based on two very simple molecular-level models. The first one consists of EOS of van der Waals type (modifications of the van der Waals EOS proposed for spheres). The main problem of these EOS is a weak connection with the micro-level, which raises many uncertainties. The second group of EOS has been derived by the author independently of the ideas of van der Waals, based on the model of interacting point centers (IPC). All the parameters of these EOS have a meaning and are associated with the manifestation of attractive and repulsive forces. The relationship between them is found to be the control parameter of the thermodynamic level. In this case, the IPC EOS passes into a one-parameter family. It is shown that many EOS of vdW type can be included in the framework of the IPC model. Simultaneously, all their parameters acquire a physical meaning.

  18. A bi-objective integer programming model for partly-restricted flight departure scheduling

    PubMed Central

    Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    Most studies on the air traffic departure scheduling problem (DSP) deal with an independent airport in which the departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there still exist cases where several commercial airports are closely located and one of them possesses a higher priority. During the peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements can inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the current problem. It is demonstrated from the case study of Tianjin Binhai International Airport in China that the proposed method can obviously improve the operation efficiency, while still realizing superior equity and regularity among restricted flows. PMID:29715299

  19. Neural network for intelligent query of an FBI forensic database

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric

    1997-02-01

    Examiner is an automated fired cartridge case identification system utilizing a dual-use neural network pattern recognition technology, called the statistical-multiple object detection and location system (S-MODALS), developed by Booz Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using multisensor fusion of electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR) sensors. Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, the pattern matching problem of microscopic marks for firearms evidence was analyzed using S-MODALS. The physics, phenomenology, discrimination and search strategies, robustness requirements, and error and confidence level propagation that apply to the pattern matching problem of military targets were found to be applicable to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from the most similar to the least similar image in reference to an investigative fired cartridge case image. The paper presents three independent tests and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.

  20. A bi-objective integer programming model for partly-restricted flight departure scheduling.

    PubMed

    Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    Most studies on the air traffic departure scheduling problem (DSP) deal with an independent airport in which the departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there still exist cases where several commercial airports are closely located and one of them possesses a higher priority. During the peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements can inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the current problem. It is demonstrated from the case study of Tianjin Binhai International Airport in China that the proposed method can obviously improve the operation efficiency, while still realizing superior equity and regularity among restricted flows.

  1. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  2. Patterns for Effectively Documenting Frameworks

    NASA Astrophysics Data System (ADS)

    Aguiar, Ademar; David, Gabriel

    Good design and implementation are necessary but not sufficient pre-requisites for successfully reusing object-oriented frameworks. Although not always recognized, good documentation is crucial for effective framework reuse, and often hard, costly, and tiresome, coming with many issues, especially when we are not aware of the key problems and respective ways of addressing them. Based on existing literature, case studies and lessons learned, the authors have been mining proven solutions to recurrent problems of documenting object-oriented frameworks, and writing them in pattern form, as patterns are a very effective way of communicating expertise and best practices. This paper presents a small set of patterns addressing problems related to the framework documentation itself, here seen as an autonomous and tangible product independent of the process used to create it. The patterns aim at helping non-experts on cost-effectively documenting object-oriented frameworks. In concrete, these patterns provide guidance on choosing the kinds of documents to produce, how to relate them, and which contents to include. Although the focus is more on the documents themselves, rather than on the process and tools to produce them, some guidelines are also presented in the paper to help on applying the patterns to a specific framework.

  3. Motion and force control of multiple robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Kreutz-Delgado, Kenneth

    1992-01-01

    This paper addresses the motion and force control problem of multiple robot arms manipulating a cooperatively held object. A general control paradigm is introduced which decouples the motion and force control problems. For motion control, different control strategies are constructed based on the variables used as the control input in the controller design. There are three natural choices: the acceleration of a generalized coordinate, the arm tip force vectors, and the joint torques. The first two choices require full model information but produce simple models for the control design problem. The last choice results in a class of relatively model-independent control laws by exploiting the Hamiltonian structure of the open-loop system. The motion control only determines the joint torque to within a manifold, due to the multiple-arm kinematic constraint. To resolve the nonuniqueness of the joint torques, two methods are introduced. If the arm and object models are available, an optimization can be performed to best allocate the desired end-effector control force to the joint actuators. The other possibility is to control the internal force about some set point. It is shown that effective force regulation can be achieved even if little model information is available.

  4. Subject order-independent group ICA (SOI-GICA) for functional MRI data analysis.

    PubMed

    Zhang, Han; Zuo, Xi-Nian; Ma, Shuang-Ye; Zang, Yu-Feng; Milham, Michael P; Zhu, Chao-Zhe

    2010-07-15

    Independent component analysis (ICA) is a data-driven approach to study functional magnetic resonance imaging (fMRI) data. Particularly, for group analysis on multiple subjects, temporal concatenation group ICA (TC-GICA) is intensively used. However, due to the usually limited computational capability, data reduction with principal component analysis (PCA: a standard preprocessing step of ICA decomposition) is difficult to achieve for a large dataset. To overcome this, TC-GICA employs multiple-stage PCA data reduction. Such multiple-stage PCA data reduction, however, leads to variable outputs due to different subject concatenation orders. Consequently, the ICA algorithm uses the variable multiple-stage PCA outputs and generates variable decompositions. In this study, a rigorous theoretical analysis was conducted to prove the existence of such variability. Simulated and real fMRI experiments were used to demonstrate the subject-order-induced variability of TC-GICA results using multiple PCA data reductions. To solve this problem, we propose a new subject order-independent group ICA (SOI-GICA). Both simulated and real fMRI data experiments demonstrated the high robustness and accuracy of the SOI-GICA results compared to those of traditional TC-GICA. Accordingly, we recommend SOI-GICA for group ICA-based fMRI studies, especially those with large data sets. Copyright 2010 Elsevier Inc. All rights reserved.
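
    A minimal sketch of the plain TC-GICA pipeline (temporal concatenation, a single-stage PCA reduction, then ICA) may help make the concatenation-order issue concrete; the data shapes, the use of scikit-learn's FastICA, and the back-projection step are assumptions, and the SOI-GICA correction itself is not implemented here.

        # Sketch of single-stage TC-GICA: concatenate subjects along time, PCA-reduce,
        # run ICA, and back-project component maps to voxel space. Assumptions only;
        # the multi-stage reduction and SOI-GICA fix discussed above are not shown.
        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        def tc_gica(subject_data, n_components=20, seed=0):
            """subject_data: list of (time x voxel) arrays, one per subject."""
            X = np.vstack(subject_data)                      # (total_time, voxels)
            pca = PCA(n_components=n_components, random_state=seed)
            reduced = pca.fit_transform(X)                   # (total_time, n_components)
            ica = FastICA(n_components=n_components, random_state=seed)
            timecourses = ica.fit_transform(reduced)         # group-level time courses
            spatial_maps = ica.mixing_.T @ pca.components_   # (n_components, voxels)
            return timecourses, spatial_maps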

  5. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    NASA Astrophysics Data System (ADS)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which allows the dimensionality of the data space to be reduced while maintaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem. The recovery and separation of the different sources that generate the observed ground deformation is a fundamental task in order to provide a physical meaning to the possible different sources. PCA fails in the BSS problem since it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelatedness condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we introduce the vbICA technique and present its application to synthetic data that simulate a GPS network recording ground deformation in a tectonically active region, with synthetic time series containing interseismic, coseismic, and postseismic deformation, plus seasonal deformation, and white and coloured noise. We study the ability of the algorithm to recover the original (known) sources of deformation, and then apply it to a real scenario: the Emilia seismic sequence (2012, northern Italy), an example of a seismic sequence that occurred in a slowly converging tectonic setting, characterized by several local to regional anthropogenic or natural sources of deformation, mainly subsidence due to fluid withdrawal and sediment compaction. We apply both PCA and vbICA to displacement time series recorded by continuous GPS and InSAR (Pezzo et al., EGU2015-8950).
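
    The contrast between uncorrelatedness (PCA) and independence (ICA) can be seen in a toy blind source separation, sketched below with two synthetic sources and a square mixing matrix; the signals are purely illustrative and do not reproduce the vbICA method itself.

        # Toy BSS comparison: PCA returns uncorrelated mixtures, while ICA recovers the
        # sources (up to sign and scale). Synthetic signals are illustrative assumptions.
        import numpy as np
        from sklearn.decomposition import PCA, FastICA

        t = np.linspace(0, 4, 2000)
        sources = np.column_stack([
            np.sign(np.sin(2 * np.pi * 0.8 * t)),  # stand-in for a step-like/transient signal
            np.sin(2 * np.pi * 1.0 * t),           # stand-in for a seasonal signal
        ])
        mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
        noise = 0.02 * np.random.default_rng(1).normal(size=(len(t), 2))
        observed = sources @ mixing.T + noise

        pca_est = PCA(n_components=2).fit_transform(observed)
        ica_est = FastICA(n_components=2, random_state=1).fit_transform(observed)
        # Cross-correlations with the true sources: close to +/-1 for ICA, mixed for PCA.
        print(np.corrcoef(sources.T, pca_est.T)[:2, 2:].round(2))
        print(np.corrcoef(sources.T, ica_est.T)[:2, 2:].round(2))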

  6. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, which is a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing the users to know the opinions of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
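
    As a purely illustrative data-structure sketch (the class name, fields, and helper method are assumptions, not the paper's formalism), a complex fuzzy soft expert set can be viewed as a mapping from (parameter, expert, opinion) triples to complex-valued membership grades, with the amplitude carrying the membership grade and the phase carrying the periodic component:

        # Illustrative container for a complex fuzzy soft expert set: each
        # (parameter, expert, opinion) triple maps elements of the universe to
        # complex memberships r * exp(i * theta). Names are assumptions.
        import cmath
        from dataclasses import dataclass, field
        from typing import Dict, Tuple

        @dataclass
        class ComplexFuzzySoftExpertSet:
            universe: set
            approximations: Dict[Tuple[str, str, str], Dict[str, complex]] = field(default_factory=dict)

            def add(self, parameter, expert, opinion, element, amplitude, phase):
                key = (parameter, expert, opinion)
                self.approximations.setdefault(key, {})[element] = amplitude * cmath.exp(1j * phase)

            def agreement(self, parameter, element):
                """Amplitudes assigned to element under parameter by experts whose opinion is 'agree'."""
                return {expert: abs(m[element])
                        for (p, expert, opinion), m in self.approximations.items()
                        if p == parameter and opinion == "agree" and element in m}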

  7. Optimized velocity distributions for direct dark matter detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibarra, Alejandro; Rappelt, Andreas, E-mail: ibarra@tum.de, E-mail: andreas.rappelt@tum.de

    We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) to derive a halo-independent upper limit on the cross section from a set of null results, ii) to confront in a halo-independent way a detection claim with a set of null results, and iii) to assess, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.

  8. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.

  9. Discrete ordinate quadrature selection for reactor-based Eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  10. Two Quantum Protocols for Oblivious Set-member Decision Problem

    NASA Astrophysics Data System (ADS)

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide, in an oblivious manner, whether a secret held by another party belongs to his private set. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology.

  11. Two Quantum Protocols for Oblivious Set-member Decision Problem

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-01-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide, in an oblivious manner, whether a secret held by another party belongs to his private set. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology. PMID:26514668

  12. Two Quantum Protocols for Oblivious Set-member Decision Problem.

    PubMed

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-30

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide, in an oblivious manner, whether a secret held by another party belongs to his private set. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting, and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it has lower communication and computation costs; Protocol II takes photons as quantum resources and performs only simple single-particle projective measurements, so it is more feasible with present technology.

  13. Generalized minimum dominating set and application in automatic text summarization

    NASA Astrophysics Data System (ADS)

    Xu, Yi-Zhi; Zhou, Hai-Jun

    2016-03-01

    For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method to this automatic text summarization problem.
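
    As a rough companion to the covering formulation above, the sketch below is a simple greedy baseline for the generalized MDS, explicitly not the replica-symmetric / belief-propagation method the authors derive; the graph, weights, and threshold are toy illustration values.

    # Greedy baseline for the generalized minimum dominating set: choose a
    # vertex set D so that every vertex outside D has summed edge weight
    # into D of at least `threshold`. Illustrative heuristic only.
    def greedy_generalized_mds(weights, threshold):
        """weights[u][v] = weight of edge (u, v); symmetric dict of dicts."""
        vertices = set(weights)
        dominating = set()
        deficit = {v: threshold for v in vertices}   # remaining demand per outside vertex

        def coverage_gain(candidate):
            # Total deficit removed if the candidate joins the dominating set.
            gain = deficit[candidate]                 # the candidate no longer needs covering
            for u, w in weights[candidate].items():
                if u not in dominating and u != candidate:
                    gain += min(deficit[u], w)
            return gain

        while any(deficit[v] > 0 for v in vertices - dominating):
            best = max(vertices - dominating, key=coverage_gain)
            dominating.add(best)
            deficit[best] = 0
            for u, w in weights[best].items():
                if u not in dominating:
                    deficit[u] = max(0, deficit[u] - w)
        return dominating

    # Toy example (hypothetical weights): a path a-b-c with unit weights.
    w = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0}, "c": {"b": 1.0}}
    print(greedy_generalized_mds(w, threshold=1.0))   # e.g. {'b'}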

  14. Individualized Math Problems in Ratio and Proportion. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume contains problems involving ratio and proportion. Some…

  15. Individualized Math Problems in Graphs and Tables. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems involving the construction and interpretation of graphs and…

  16. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  17. Individualized Math Problems in Trigonometry. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require the use of trigonometric and inverse…

  18. Individualized Math Problems in Decimals. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume concern use of decimals and are related to the…

  19. Individualized Math Problems in Volume. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this booklet require the computation of volumes of solids,…

  20. Anomaly and signature filtering improve classifier performance for detection of suspicious access to EHRs.

    PubMed

    Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila

    2011-01-01

    Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful.
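
    As a rough illustration of the integrated-filtering idea (combine an anomaly score with rule-based signatures to pick informative access records for labeling), here is a hedged sketch; the feature names, the rule, and the use of IsolationForest in place of the symbolic-clustering detector are assumptions, not the authors' implementation.

    # Sketch: flag access records that are either statistically anomalous or
    # match a hand-written signature rule, then use only the flagged records
    # to assemble the training set for a downstream classifier.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical features per access record:
    # [records_viewed_per_hour, off_hours_fraction, same_last_name_flag]
    X = rng.random((10_000, 3))

    # Signature (rule-based) filter: e.g. flag any same-last-name access.
    signature_hits = X[:, 2] > 0.99

    # Anomaly filter: flag the statistically most unusual records.
    iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
    anomaly_hits = iso.predict(X) == -1

    # Integrated filter: a record is sampled for review/labeling if either fires.
    candidates = np.where(signature_hits | anomaly_hits)[0]
    print(f"{candidates.size} of {X.shape[0]} records selected for labeling")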

  1. Anomaly and Signature Filtering Improve Classifier Performance For Detection Of Suspicious Access To EHRs

    PubMed Central

    Kim, Jihoon; Grillo, Janice M; Boxwala, Aziz A; Jiang, Xiaoqian; Mandelbaum, Rose B; Patel, Bhakti A; Mikels, Debra; Vinterbo, Staal A; Ohno-Machado, Lucila

    2011-01-01

    Our objective is to facilitate semi-automated detection of suspicious access to EHRs. Previously we have shown that a machine learning method can play a role in identifying potentially inappropriate access to EHRs. However, the problem of sampling informative instances to build a classifier still remained. We developed an integrated filtering method leveraging both anomaly detection based on symbolic clustering and signature detection, a rule-based technique. We applied the integrated filtering to 25.5 million access records in an intervention arm, and compared this with 8.6 million access records in a control arm where no filtering was applied. On the training set with cross-validation, the AUC was 0.960 in the control arm and 0.998 in the intervention arm. The difference in false negative rates on the independent test set was significant, P=1.6×10−6. Our study suggests that utilization of integrated filtering strategies to facilitate the construction of classifiers can be helpful. PMID:22195129

  2. An analysis of training, generalization, and maintenance effects of Primary Care Triple P for parents of preschool-aged children with disruptive behavior.

    PubMed

    Boyle, Cynthia L; Sanders, Matthew R; Lutzker, John R; Prinz, Ronald J; Shapiro, Cheri; Whitaker, Daniel J

    2010-02-01

    A brief primary care intervention for parents of preschool-aged children with disruptive behavior was assessed using a multiple probe design. Primary Care Triple P, a four-session behavioral intervention, was sequentially introduced within a multiple probe format to each of 9 families, involving a total of 10 children aged between 3 and 7 years (males = 4, females = 6). Independent observations of parent-child interaction in the home revealed that the intervention was associated with lower levels of child disruptive behavior both in a target training setting and in various generalization settings. Parent report data also confirmed there were significant reductions in the intensity and frequency of disruptive behavior, an increase in task-specific parental self-efficacy, improved scores on the Parent Experience Survey, and high levels of consumer satisfaction. All short-term intervention effects were maintained at four-month follow-up. Implications for the delivery of brief interventions to prevent conduct problems are discussed.

  3. How to optimise the coverage rate of infant and adult immunisations in Europe

    PubMed Central

    Schmitt, Heinz-J; Booy, Robert; Aston, Robert; Van Damme, Pierre; Schumacher, R Fabian; Campins, Magda; Rodrigo, Carlos; Heikkinen, Terho; Weil-Olivier, Catherine; Finn, Adam; Olcén, Per; Fedson, David; Peltola, Heikki

    2007-01-01

    Background Although vaccination has been proved to be a safe, efficacious, and cost-effective intervention, immunisation rates remain suboptimal in many European countries, resulting in poor control of many vaccine-preventable diseases. Discussion The Summit of Independent European Vaccination Experts focused on the perception of vaccines and vaccination by the general public and healthcare professionals and discussed ways to improve vaccine uptake in Europe. Despite the substantial impact and importance of the media, healthcare professionals were identified as the main advocates for vaccination and the most important source of information about vaccines for the general public. Healthcare professionals should receive more support for their own education on vaccinology, have rapid access to up-to-date information on vaccines, and have easy access to consultation with experts regarding vaccination-related problems. Vaccine information systems should be set up to facilitate promotion of vaccination. Summary Every opportunity to administer vaccines should be used, and active reminder systems should be set up. A European vaccine awareness week should be established. PMID:17535430

  4. Detecting Rhythms in Time Series with RAIN

    PubMed Central

    Thaben, Paul F.; Westermark, Pål O.

    2014-01-01

    A fundamental problem in research on biological rhythms is that of detecting and assessing the significance of rhythms in large sets of data. Classic methods based on Fourier theory are often hampered by the complex and unpredictable characteristics of experimental and biological noise. Robust nonparametric methods are available but are limited to specific wave forms. We present RAIN, a robust nonparametric method for the detection of rhythms of prespecified periods in biological data that can detect arbitrary wave forms. When applied to measurements of the circadian transcriptome and proteome of mouse liver, the sets of transcripts and proteins with rhythmic abundances were significantly expanded due to the increased detection power, when we controlled for false discovery. Validation against independent data confirmed the quality of these results. The large expansion of the circadian mouse liver transcriptomes and proteomes reflected the prevalence of nonsymmetric wave forms and led to new conclusions about function. RAIN was implemented as a freely available software package for R/Bioconductor and is presently also available as a web interface. PMID:25326247

  5. Improving Naive Bayes with Online Feature Selection for Quick Adaptation to Evolving Feature Usefulness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R K; Cardenas, A F; Buttler, D J

    The definition of what makes an article interesting varies from user to user and continually evolves even for a single user. As a result, for news recommendation systems, useless document features cannot be determined a priori and all features are usually considered for interestingness classification. Consequently, the presence of currently useless features degrades classification performance [1], particularly over the initial set of news articles being classified. This initial set of documents is critical for a user when considering which particular news recommendation system to adopt. To address these problems, we introduce an improved version of the naive Bayes classifier with online feature selection. We use correlation to determine the utility of each feature and take advantage of the conditional independence assumption used by naive Bayes for online feature selection and classification. The augmented naive Bayes classifier performs 28% better than the traditional naive Bayes classifier in recommending news articles from the Yahoo! RSS feeds.
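
    The following is a minimal sketch of the idea described above: keep per-feature statistics online, score each feature's usefulness by its correlation with the interest label, and let a naive Bayes classifier use only the currently selected features. The scoring rule, the top-k cutoff, and the class structure are illustrative assumptions, not the authors' exact algorithm.

    # Online naive Bayes with correlation-based feature selection (sketch).
    # After each labeled article, feature-label correlations are updated and
    # only the top-k features are used for the next prediction.
    import math
    from collections import defaultdict

    class OnlineNBWithFeatureSelection:
        def __init__(self, top_k=100):
            self.top_k = top_k
            self.n = 0
            self.class_counts = defaultdict(int)                          # y -> count
            self.feature_counts = defaultdict(lambda: defaultdict(int))   # y -> f -> count
            self.sum_f = defaultdict(float)                               # f -> sum of presence
            self.sum_fy = defaultdict(float)                              # f -> sum of presence*label
            self.sum_y = 0.0

        def update(self, features, label):                                # label in {0, 1}
            self.n += 1
            self.class_counts[label] += 1
            self.sum_y += label
            for f in features:
                self.feature_counts[label][f] += 1
                self.sum_f[f] += 1.0
                self.sum_fy[f] += float(label)

        def _selected(self):
            # Score = |correlation| between binary feature presence and the label.
            def score(f):
                n, sf, sy, sfy = self.n, self.sum_f[f], self.sum_y, self.sum_fy[f]
                num = n * sfy - sf * sy
                den = math.sqrt(max(n * sf - sf**2, 1e-9) * max(n * sy - sy**2, 1e-9))
                return abs(num / den)
            return set(sorted(self.sum_f, key=score, reverse=True)[: self.top_k])

        def predict(self, features):
            selected = self._selected()
            feats = [f for f in features if f in selected]
            scores = {}
            for y, cy in self.class_counts.items():
                logp = math.log(cy / self.n)
                for f in feats:                                           # Laplace-smoothed likelihoods
                    logp += math.log((self.feature_counts[y][f] + 1) / (cy + 2))
                scores[y] = logp
            return max(scores, key=scores.get) if scores else 0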

  6. Script-independent text line segmentation in freestyle handwritten documents.

    PubMed

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  7. A practical approach for writer-dependent symbol recognition using a writer-independent symbol recognizer.

    PubMed

    LaViola, Joseph J; Zeleznik, Robert C

    2007-11-01

    We present a practical technique for using a writer-independent recognition engine to improve the accuracy and speed while reducing the training requirements of a writer-dependent symbol recognizer. Our writer-dependent recognizer uses a set of binary classifiers based on the AdaBoost learning algorithm, one for each possible pairwise symbol comparison. Each classifier consists of a set of weak learners, one of which is based on a writer-independent handwriting recognizer. During online recognition, we also use the n-best list of the writer-independent recognizer to prune the set of possible symbols and thus reduce the number of required binary classifications. In this paper, we describe the geometric and statistical features used in our recognizer and our all-pairs classification algorithm. We also present the results of experiments that quantify the effect incorporating a writer-independent recognition engine into a writer-dependent recognizer has on accuracy, speed, and user training time.
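
    As a sketch of the all-pairs scheme with n-best pruning described above: a writer-independent recognizer first narrows the candidate symbols, and only the pairwise classifiers among those candidates vote. The classifier choice (scikit-learn's AdaBoostClassifier) and the simple majority vote are illustrative assumptions rather than the published system.

    # Sketch: writer-dependent recognition via pairwise (all-pairs) binary
    # classifiers, pruned by the n-best list of a writer-independent recognizer.
    from itertools import combinations
    from sklearn.ensemble import AdaBoostClassifier

    def train_pairwise(features_by_symbol):
        """features_by_symbol: dict symbol -> list of numeric feature vectors."""
        classifiers = {}
        for a, b in combinations(sorted(features_by_symbol), 2):
            X = features_by_symbol[a] + features_by_symbol[b]
            y = [0] * len(features_by_symbol[a]) + [1] * len(features_by_symbol[b])
            classifiers[(a, b)] = AdaBoostClassifier(n_estimators=50).fit(X, y)
        return classifiers

    def recognize(x, classifiers, n_best):
        """n_best: candidate symbols from a writer-independent recognizer."""
        votes = {s: 0 for s in n_best}
        for a, b in combinations(sorted(n_best), 2):       # only pruned pairs are evaluated
            winner = b if classifiers[(a, b)].predict([x])[0] == 1 else a
            votes[winner] += 1
        return max(votes, key=votes.get)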

  8. Merged Long-Term Data Sets from TOMS and SBUV Total Ozone Measurements

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard; McPeters, Richard; Labow, Gordon J.; Hollandsworth, Stacey; Flynn, Larry; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Total ozone has been measured by a series of nadir-viewing satellite instruments. These measurements begin with the Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter UltraViolet (SBUV) instruments on Nimbus 7, launched in late 1978. The measurements have continued with the Meteor 3 TOMS, Earth Probe TOMS, and NOAA 9,11,14 SBUV/2 instruments. The problem for producing a long-term data set is establishing the relative calibration of the various instruments to better than 1%. There was a nearly two year gap between the Meteor 3 TOMS and the Earth Probe TOMS. This gap is filled by the NOAA 9 and 11 SBUV/2 instruments, but they were in drifting orbits that result in effective gaps in the record when the equator crossing time occurs at large solar zenith angle. We have used recently re-derived calibrations of the SBUV instruments using the D-pair (306/313 nm wavelengths) data at the equator. These equatorial D-pair measurements should maintain the internal calibration of each instrument better than previous approaches. We then use the comparisons between instruments during their overlap periods to establish a consistent calibration over the entire data set. The resulting merged ozone data set is independent of the ground-based Dobson/Brewer network.

  9. Continuous Glucose Monitoring Enables the Detection of Losses in Infusion Set Actuation (LISAs)

    PubMed Central

    Howsmon, Daniel P.; Cameron, Faye; Baysal, Nihat; Ly, Trang T.; Forlenza, Gregory P.; Maahs, David M.; Buckingham, Bruce A.; Hahn, Juergen; Bequette, B. Wayne

    2017-01-01

    Reliable continuous glucose monitoring (CGM) enables a variety of advanced technology for the treatment of type 1 diabetes. In addition to artificial pancreas algorithms that use CGM to automate continuous subcutaneous insulin infusion (CSII), CGM can also inform fault detection algorithms that alert patients to problems in CGM or CSII. Losses in infusion set actuation (LISAs) can adversely affect clinical outcomes, resulting in hyperglycemia due to impaired insulin delivery. Prolonged hyperglycemia may lead to diabetic ketoacidosis—a serious metabolic complication in type 1 diabetes. Therefore, an algorithm for the detection of LISAs based on CGM and CSII signals was developed to improve patient safety. The LISA detection algorithm is trained retrospectively on data from 62 infusion set insertions from 20 patients. The algorithm collects glucose and insulin data, and computes relevant fault metrics over two different sliding windows; an alarm sounds when these fault metrics are exceeded. With the chosen algorithm parameters, the LISA detection strategy achieved a sensitivity of 71.8% and issued 0.28 false positives per day on the training data. Validation on two independent data sets confirmed that similar performance is seen on data that was not used for training. The developed algorithm is able to effectively alert patients to possible infusion set failures in open-loop scenarios, with limited evidence of its extension to closed-loop scenarios. PMID:28098839
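
    A rough sketch of the kind of two-window fault check described above is given below; the window lengths, fault metrics, and thresholds are illustrative placeholders and do not reproduce the tuned metrics of the published algorithm.

    # Sketch of a LISA-style check: over a short and a long sliding window,
    # compare delivered insulin with the observed glucose trend and raise an
    # alarm when both windows look inconsistent with working insulin delivery.
    import numpy as np

    def lisa_alarm(glucose, insulin, short_w=6, long_w=24,
                   rise_thresh=15.0, insulin_thresh=1.0):
        """glucose, insulin: equally spaced CGM [mg/dL] and CSII [U] samples."""
        if len(glucose) < long_w:
            return False
        def window_fault(w):
            g, i = np.asarray(glucose[-w:]), np.asarray(insulin[-w:])
            glucose_rise = g[-1] - g[0]          # sustained rise despite dosing
            insulin_given = i.sum()              # insulin nominally delivered in the window
            return glucose_rise > rise_thresh and insulin_given > insulin_thresh
        return window_fault(short_w) and window_fault(long_w)

    # Example: steadily rising glucose while boluses are nominally "delivered".
    g = list(np.linspace(140, 260, 30))
    u = [0.2] * 30
    print(lisa_alarm(g, u))                       # True under these illustrative thresholds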

  10. Continuous Glucose Monitoring Enables the Detection of Losses in Infusion Set Actuation (LISAs).

    PubMed

    Howsmon, Daniel P; Cameron, Faye; Baysal, Nihat; Ly, Trang T; Forlenza, Gregory P; Maahs, David M; Buckingham, Bruce A; Hahn, Juergen; Bequette, B Wayne

    2017-01-15

    Reliable continuous glucose monitoring (CGM) enables a variety of advanced technology for the treatment of type 1 diabetes. In addition to artificial pancreas algorithms that use CGM to automate continuous subcutaneous insulin infusion (CSII), CGM can also inform fault detection algorithms that alert patients to problems in CGM or CSII. Losses in infusion set actuation (LISAs) can adversely affect clinical outcomes, resulting in hyperglycemia due to impaired insulin delivery. Prolonged hyperglycemia may lead to diabetic ketoacidosis-a serious metabolic complication in type 1 diabetes. Therefore, an algorithm for the detection of LISAs based on CGM and CSII signals was developed to improve patient safety. The LISA detection algorithm is trained retrospectively on data from 62 infusion set insertions from 20 patients. The algorithm collects glucose and insulin data, and computes relevant fault metrics over two different sliding windows; an alarm sounds when these fault metrics are exceeded. With the chosen algorithm parameters, the LISA detection strategy achieved a sensitivity of 71.8% and issued 0.28 false positives per day on the training data. Validation on two independent data sets confirmed that similar performance is seen on data that was not used for training. The developed algorithm is able to effectively alert patients to possible infusion set failures in open-loop scenarios, with limited evidence of its extension to closed-loop scenarios.

  11. Environmental Influences on Independent Collaborative Play

    ERIC Educational Resources Information Center

    Mawson, Brent

    2010-01-01

    Data from two qualitative research projects indicated a relationship between the type of early childhood setting and children's independent collaborative play. The first research project involved 22 three- and four-year-old children in a daylong setting and 47 four-year-old children in a sessional kindergarten. The second project involved…

  12. Effect of structure in problem based learning on science teaching efficacy beliefs and science content knowledge of elementary preservice teachers

    NASA Astrophysics Data System (ADS)

    Sasser, Selena Kay

    This study examined the effects of differing amounts of structure within the problem based learning instructional model on elementary preservice teachers' science teaching efficacy beliefs, including personal science teaching efficacy and science teaching outcome expectancy, and content knowledge acquisition. This study involved sixty (60) undergraduate elementary preservice teachers enrolled in three sections of elementary science methods classes at a large Midwestern research university. This study used a quasi-experimental nonequivalent design to collect and analyze both quantitative and qualitative data. Participants completed instruments designed to assess science teaching efficacy beliefs, science background, and demographic data. Quantitative data from pre- and posttests were obtained using the science teaching efficacy belief instrument-preservice (STEBI-B) developed by Enochs and Riggs (1990) and modified by Bleicher (2004). Data collection instruments also included a demographic questionnaire, an analytic rubric, and a structured interview, both created by the researcher. Quantitative data were analyzed by conducting ANCOVA, paired samples t-test, and independent samples t-test. Qualitative data were analyzed using coding and themes. Each of the treatment groups received the same problem scenario; one group experienced a more structured PBL setting, and one group experienced a limited-structure PBL setting. Research personnel administered pre- and posttests to determine the elementary preservice teachers' science teaching efficacy beliefs. The results show that elementary preservice teachers' science teaching efficacy beliefs can be influenced by the problem based learning instructional model. This study did not find that the amount of structure, in the form of core ideas to consider and resources for further research, increased science teaching efficacy beliefs in this sample. Results from the science content knowledge rubric indicated that structure can increase science content knowledge in this sample. Qualitative data from the tutor, fidelity raters, and interviews indicated the participants were excited about the problem and were interested in the science content knowledge related to the problem. They also indicated they were motivated to continue informal study in the problem area. Participants indicated, during the interview, their initial frustration with the lack of knowledge gained from the tutor, but noted that this led to more learning on their part. This study will contribute to the overall knowledge of problem based learning and its structures, science teaching efficacy beliefs of elementary preservice teachers, and current teaching and learning practices.

  13. Sensing a Changing Chemical Mixture Using an Electronic Nose

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Ryan, Margaret

    2008-01-01

    A method of using an electronic nose to detect an airborne mixture of known chemical compounds and measure the temporally varying concentrations of the individual compounds is undergoing development. In a typical intended application, the method would be used to monitor the air in an inhabited space (e.g., the interior of a building) for the release of solvents, toxic fumes, and other compounds that are regarded as contaminants. At the present state of development, the method affords a capability for identifying and quantitating one or two compounds that are members of a set of some number (typically of the order of a dozen) of known compounds. In principle, the method could be extended to enable monitoring of more than two compounds. An electronic nose consists of an array of sensors, typically made from polymer-carbon composites, the electrical resistances of which change upon exposure to a variety of chemicals. By design, each sensor is unique in its responses to these chemicals: some or all of the sensitivities of a given sensor to the various vapors differ from the corresponding sensitivities of other sensors. In general, the responses of the sensors are nonlinear functions of the concentrations of the chemicals. Hence, mathematically, the monitoring problem is to solve the set of time-dependent nonlinear equations for the sensor responses to obtain the time-dependent concentrations of individual compounds. In the present developmental method, successive approximations of the solution are generated by a learning algorithm based on independent component analysis (ICA), an established information-theoretic approach for transforming a vector of observed interdependent signals into a set of signals that are as nearly statistically independent as possible.
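
    A minimal sketch of the ICA step described above follows, using scikit-learn's FastICA as a stand-in for the developmental learning algorithm mentioned in the abstract; the sensor model here is a simplified linear mixture, which the abstract notes is not true of real sensor responses.

    # Sketch: recover two time-varying "concentration" signals from four
    # sensor channels, assuming (unrealistically) a linear mixing model.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 2000)
    sources = np.c_[np.abs(np.sin(2 * t)),             # compound 1 concentration
                    (t % 3) / 3.0]                     # compound 2 concentration
    mixing = rng.random((4, 2))                        # 4 sensors x 2 compounds
    readings = sources @ mixing.T + 0.02 * rng.standard_normal((t.size, 4))

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(readings)            # estimated source signals
    print(recovered.shape)                             # (2000, 2)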

  14. Geological nominations at UNESCO World Heritage, an upstream struggle

    NASA Astrophysics Data System (ADS)

    Olive-Garcia, Cécile; van Wyk de Vries, Benjamin

    2017-04-01

    Using my 10 years experience in setting up and defending a UNESCO world Heritage Geological nomination, this presentation aims to give a personal insight into this international process and the differential use of science, subjective perception (aesthetic and 'naturality'), and politics. At this point in the process, new protocols have been tested in order to improve the dialogue, accountability and transparency between the different stake-holders. These are, the State parties, the IUCN, the scientific community, and UNESCO itself. Our proposal is the Chaîne des Puys-Limagne fault ensemble, which combines tectonic, geomorphological evolution and volcanology. The project's essence is a conjunction of inseparable geological features and processes, set in the context of plate tectonics. This very unicit yof diverse forms and processes creates the value of the site. However, it is just this that has caused a problem, as the advisory body has a categorical approach of nominations that separates items to assess them in an unconnected manner.From the start we proposed a combined approach, where a property is seen in its entirety, and the constituent elements seen as interlinked elements reflecting the joint underlying phenomena. At this point, our project has received the first ever open review by an independent technical mission (jointly set up by IUCN, UNESCO and the State party). The subsequent report was broadly supportive of the project's approach and of the value of the ensemble of features. The UNESCO committee in 2016, re-referred the nomination, acknowledging the potential Outstanding Universal Value of the site and requesting the parties to continue the upstream process (e.g. collaborative work), notably on the recommendations and conclusions of the Independent Technical mission report. Meetings are continuing, and I shall provide you with the hot-off-the-press news as this ground breaking nomination progresses.

  15. Using Redundancy To Reduce Errors in Magnetometer Readings

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
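
    The correction step described above reduces to an equality-constrained least-squares problem; the snippet below is a generic sketch of that step (minimize the squared distance to the noisy readings subject to linear constraints C x = 0, solved through the Lagrange-multiplier KKT system). The constraint matrix is a placeholder, not the actual form of Maxwell's constraints for a particular magnetometer geometry.

    # Sketch: correct noisy readings m by solving
    #   minimize ||x - m||^2  subject to  C x = 0,
    # via the KKT system  [2I  C^T; C  0] [x; lam] = [2m; 0].
    import numpy as np

    def constrained_correction(m, C):
        n, k = m.size, C.shape[0]
        kkt = np.block([[2 * np.eye(n), C.T],
                        [C, np.zeros((k, k))]])
        rhs = np.concatenate([2 * m, np.zeros(k)])
        sol = np.linalg.solve(kkt, rhs)
        return sol[:n]                                 # corrected readings

    m = np.array([1.02, -0.48, 0.51, 0.99])            # noisy measurements
    C = np.array([[1.0, 1.0, -1.0, -1.0]])             # placeholder linear constraint
    x = constrained_correction(m, C)
    print(x, C @ x)                                    # constraint residual ≈ 0 after correction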

  16. Measure of functional independence dominates discharge outcome prediction after inpatient rehabilitation for stroke.

    PubMed

    Brown, Allen W; Therneau, Terry M; Schultz, Billie A; Niewczyk, Paulette M; Granger, Carl V

    2015-04-01

    Identifying clinical data acquired at inpatient rehabilitation admission for stroke that accurately predict key outcomes at discharge could inform the development of customized plans of care to achieve favorable outcomes. The purpose of this analysis was to use a large comprehensive national data set to consider a wide range of clinical elements known at admission to identify those that predict key outcomes at rehabilitation discharge. Sample data were obtained from the Uniform Data System for Medical Rehabilitation data set with the diagnosis of stroke for the years 2005 through 2007. This data set includes demographic, administrative, and medical variables collected at admission and discharge and uses the FIM (functional independence measure) instrument to assess functional independence. Primary outcomes of interest were functional independence measure gain, length of stay, and discharge to home. The sample included 148,367 people (75% white; mean age, 70.6±13.1 years; 97% with ischemic stroke) admitted to inpatient rehabilitation a mean of 8.2±12 days after symptom onset. The total functional independence measure score, the functional independence measure motor subscore, and the case-mix group were equally the strongest predictors for any of the primary outcomes. The most clinically relevant 3-variable model used the functional independence measure motor subscore, age, and walking distance at admission (r² = 0.107). No important additional effect for any other variable was detected when added to this model. This analysis shows that a measure of functional independence in motor performance and age at rehabilitation hospital admission for stroke are predominant predictors of outcome at discharge in a uniquely large US national data set. © 2015 American Heart Association, Inc.

  17. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem into density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, and (ii) separating (partitioning) the original optimization problem into subproblems may serve interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.
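
    Below is a minimal sketch of the resampling step described above (converting the optimization landscape into a density by Boltzmann weighting), using an arbitrary toy energy function; the ICA-based structure search of the full CCA method would then operate on the resampled points and is not reproduced here.

    # Sketch: resample candidate points according to exp(-E(x)/T) so that
    # low-energy regions become high-density regions; density estimation /
    # structure search would subsequently operate on `resampled`.
    import numpy as np

    def energy(x):                                    # toy 2-D energy landscape with two basins
        return np.sum((x - 1.0) ** 2, axis=1) * np.sum((x + 1.0) ** 2, axis=1)

    rng = np.random.default_rng(0)
    candidates = rng.uniform(-3, 3, size=(5000, 2))
    T = 0.5
    weights = np.exp(-energy(candidates) / T)
    weights /= weights.sum()

    idx = rng.choice(len(candidates), size=2000, p=weights, replace=True)
    resampled = candidates[idx]
    near_min = np.minimum(np.linalg.norm(resampled - 1.0, axis=1),
                          np.linalg.norm(resampled + 1.0, axis=1))
    print((near_min < 0.5).mean())                    # most resampled mass sits near the two minima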

  18. Couples' Reports of Relationship Problems in a Naturalistic Therapy Setting

    ERIC Educational Resources Information Center

    Boisvert, Marie-Michele; Wright, John; Tremblay, Nadine; McDuff, Pierre

    2011-01-01

    Understanding couples' relationship problems is fundamental to couple therapy. Although research has documented common relationship problems, no study has used open-ended questions to explore problems in couples seeking therapy in naturalistic settings. The present study used a reliable coding system to explore the relationship problems reported…

  19. Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.

    PubMed

    Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn

    2016-04-20

    Nonlinear decomposition schemes constitute an alternative to classical approaches for facing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists in the fusion of a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on the use of morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
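
    The sketch below illustrates a generic morphological-gradient detail-injection scheme in the spirit of the approach described above; the specific half-gradient combination, gains, and interpolation of the published method are not reproduced, and the arrays are synthetic stand-ins for satellite data.

    # Sketch: extract spatial detail from the panchromatic band with
    # morphological half gradients (dilation - image, image - erosion) and
    # inject it into each upsampled multispectral band. Gain and structuring
    # element size are illustrative values.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))                        # high-res panchromatic (synthetic)
    ms = rng.random((4, 64, 64))                        # low-res multispectral, 4 bands

    def pansharpen(ms, pan, gain=0.5, size=3):
        scale = pan.shape[0] // ms.shape[1]
        half_grad_plus = ndimage.grey_dilation(pan, size=(size, size)) - pan
        half_grad_minus = pan - ndimage.grey_erosion(pan, size=(size, size))
        detail = half_grad_plus - half_grad_minus       # signed morphological detail
        sharpened = []
        for band in ms:
            up = ndimage.zoom(band, scale, order=1)     # bilinear upsampling
            sharpened.append(up + gain * detail)
        return np.stack(sharpened)

    print(pansharpen(ms, pan).shape)                    # (4, 256, 256)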

  20. Quantum mechanics: The Bayesian theory generalized to the space of Hermitian matrices

    NASA Astrophysics Data System (ADS)

    Benavoli, Alessio; Facchini, Alessandro; Zaffalon, Marco

    2016-10-01

    We consider the problem of gambling on a quantum experiment and enforce rational behavior by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalized to the space of Hermitian matrices. This very theory is quantum mechanics: in fact, we derive all its four postulates from the generalized Bayesian theory. This implies that quantum mechanics is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes' rule (measurement), marginalization (partial tracing), independence (tensor product). To say it with a slogan, we obtain that quantum mechanics is the Bayesian theory in the complex numbers.

  1. A qualitative study of early family histories and transitions of homeless youth.

    PubMed

    Tyler, Kimberly A

    2006-10-01

    Using intensive qualitative interviews with 40 homeless youth, this study examined their early family histories for abuse, neglect, and other family problems and the number and types of transitions that youth experienced. Multiple forms of child maltreatment, family alcoholism, drug use, and criminal activity characterized early family histories of many youth. Leaving home because of either running away or being removed by child protective services often resulted in multiple transitions, which regularly included moving from foster care homes to a group home, back to their parents, and then again returning to the streets. Although having experienced family disorganization set youth on trajectories for early independence, there were many unique paths that youth traveled prior to ending up on the streets.

  2. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
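
    A minimal sketch of the workflow described in the tutorial is shown below, written in Python rather than R for consistency with the other examples in this collection; the simulated data with two dominant components is an assumption mirroring the abstract's description.

    # Sketch: simulate correlated predictors driven by two latent factors,
    # reduce them with PCA, and use the leading components as uncorrelated
    # regressors (mitigating multicollinearity).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    latent = rng.standard_normal((500, 2))                          # two underlying factors
    loadings = rng.standard_normal((2, 8))
    X = latent @ loadings + 0.1 * rng.standard_normal((500, 8))     # 8 correlated variables
    y = latent[:, 0] - 2 * latent[:, 1] + 0.1 * rng.standard_normal(500)

    pca = PCA(n_components=2).fit(X)
    print(pca.explained_variance_ratio_)                            # two PCs explain most variance
    Z = pca.transform(X)                                            # uncorrelated components
    print(LinearRegression().fit(Z, y).score(Z, y))                 # regression on the PCs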

  3. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

    Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems. Particularly worth mentioning here are cut off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw described in 1994 that the off-diagonal matrix elements - at least if data are equally weighted - are the result of a loss of orthogonality of Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis. Such questions are: (1) How does the high frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients; (2) Where to truncate the spherical harmonic series in the adjustment process in order to avoid high frequency leakage?; (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?

  4. Tracking wakefulness as it fades: Micro-measures of alertness.

    PubMed

    Jagannathan, Sridhar R; Ezquerro-Nassar, Alejandro; Jachs, Barbara; Pustovaya, Olga V; Bareham, Corinne A; Bekinschtein, Tristan A

    2018-08-01

    A major problem in psychology and physiology experiments is drowsiness: around a third of participants show decreased wakefulness despite being instructed to stay alert. In some non-visual experiments participants keep their eyes closed throughout the task, thus promoting the occurrence of such periods of varying alertness. These wakefulness changes contribute to systematic noise in data and measures of interest. To account for this omnipresent problem in data acquisition we defined criteria and code to allow researchers to detect and control for varying alertness in electroencephalography (EEG) experiments under eyes-closed settings. We first revise a visual-scoring method developed for detection and characterization of the sleep-onset process, and adapt the same for detection of alertness levels. Furthermore, we show the major issues preventing the practical use of this method, and overcome these issues by developing an automated method (micro-measures algorithm) based on frequency and sleep graphoelements, which are capable of detecting micro variations in alertness. The validity of the micro-measures algorithm was verified by training and testing using a dataset where participants are known to fall asleep. In addition, we tested generalisability by independent validation on another dataset. The methods developed constitute a unique tool to assess micro variations in levels of alertness and control trial-by-trial retrospectively or prospectively in every experiment performed with EEG in cognitive neuroscience under eyes-closed settings. Copyright © 2018. Published by Elsevier Inc.

  5. Model-independent constraints on possible modifications of Newtonian gravity

    NASA Technical Reports Server (NTRS)

    Talmadge, C.; Berthias, J.-P.; Hellings, R. W.; Standish, E. M.

    1988-01-01

    New model-independent constraints on possible modifications of Newtonian gravity over solar-system distance scales are presented, and their implications discussed. The constraints arise from the analysis of various planetary astrometric data sets. The results of the model-independent analysis are then applied to set limits on a variation in the 1/r² behavior of gravity, on possible Yukawa-type interactions with ranges of the order of planetary distance scales, and on a deviation from Newtonian gravity of the type discussed by Milgrom (1983).

  6. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As for the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
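
    In the abstract's notation, one step of the viscously corrected Incremental Minimization Scheme can be sketched as follows; the quadratic form of the correction is only the example the abstract mentions, and the weight μ is a placeholder rather than a quantity defined in the paper:

      % One step of the corrected incremental minimization: the new state x_n
      % minimizes energy plus the viscously corrected dissipation from x_{n-1}.
      x_n \in \operatorname*{arg\,min}_{x \in X}
        \Big( E(t_n, x) + d(x_{n-1}, x) + \delta\big(d(x_{n-1}, x)\big) \Big),
        \qquad \text{e.g. } \delta(r) = \mu\, r^2, \ \mu > 0.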

  7. Source credibility and idea improvement have independent effects on unconscious plagiarism errors in recall and generate-new tasks.

    PubMed

    Perfect, Timothy J; Field, Ian; Jones, Robert

    2009-01-01

    Unconscious plagiarism occurs when people try to generate new ideas or when they try to recall their own ideas from among a set generated by a group. In this study, the factors that independently influence these two forms of plagiarism error were examined. Participants initially generated solutions to real-world problems in 2 domains of knowledge in collaboration with a confederate presented as an expert in 1 domain. Subsequently, the participant generated improvements to half of the ideas from each person. Participants returned 1 day later to recall either their own ideas or their partner's ideas and to complete a generate-new task. A double dissociation was observed. Generate-new plagiarism was driven by partner expertise but not by idea improvement, whereas recall plagiarism was driven by improvement but not expertise. This improvement effect on recall plagiarism was seen for the recall-own but not the recall-partner task, suggesting that the increase in recall-own plagiarism is due to mistaken idea ownership, not source confusion.

  8. Polio in Syria: Problem still not solved.

    PubMed

    Al-Moujahed, Ahmad; Alahdab, Fares; Abolaban, Heba; Beletsky, Leo

    2017-01-01

    The reappearance of polio in Syria in mid-2013, 18 years after it was eliminated from the country, manifests the public health catastrophe brought on by the civil war. Among the lessons learned, this outbreak emphasizes the importance of increasing the international financial and logistical support for vaccine and immunization efforts, especially in countries suffering from conflicts. The lack of access to polio accredited laboratory or outright lack of laboratories in settings of conflict should be recognized allowing international surveillance to be strengthened by supplementing the laboratory definition with the clinical definition. In addition, it illustrates the imperative for the United Nations (UN) agencies involved in global health to be able to operate independently from governments during conflicts in order to provide adequate and efficient medical and humanitarian relief for civilians. Proper communicable disease surveillance and control, delivery of vaccinations, and other pivotal healthcare services to these areas require independence from governments and all military actors involved. Moreover, it shows the necessity to adequately support and fund the front-line nongovernmental organizations (NGOs) that are implementing the delivery of medical and humanitarian aid in Syria.

  9. The Efficient Utilization of Open Source Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baty, Samuel R.

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned: Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.); direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  10. Transfer component skill deficit rates among Veterans who use wheelchairs.

    PubMed

    Koontz, Alicia M; Tsai, Chung-Ying; Hogaboom, Nathan S; Boninger, Michael L

    2016-01-01

    The purpose of this study was to quantify the deficit rates for transfer component skills in a Veteran cohort and explore the relationship between deficit rates and subject characteristics. Seventy-four men and 18 women performed up to four transfers independently from their wheelchair to a mat table while a therapist evaluated their transfer techniques using the Transfer Assessment Instrument. The highest deficit rates concerned the improper use of handgrips (63%). Other common problems included not setting the wheelchair up at the proper angle (50%) and not removing the armrest (58%). Veterans over 60 yr old and Veterans with moderate shoulder pain were more likely to set up their wheelchairs inappropriately than younger Veterans (p = 0.003) and Veterans with mild shoulder pain (p = 0.004). Women were less likely to remove their armrests than men (p = 0.03). Subjects with disabilities other than spinal cord injury were less inclined to set themselves up for a safe and easy transfer than the subjects with spinal cord injury (p ≤ 0.001). The results provide insight into the disparities present in transfer skills among Veterans and will inform the development of future transfer training programs both within and outside of the Department of Veterans Affairs.

  11. Investigation of Saltwater Intrusion and Recirculation of Seawater for Henry Constant Dispersion and Velocity-Dependent Dispersion Problems and Field-Scale Problem

    NASA Astrophysics Data System (ADS)

    Motz, L. H.; Kalakan, C.

    2013-12-01

    Three problems regarding saltwater intrusion, namely the Henry constant dispersion and velocity-dependent dispersion problems and a larger, field-scale velocity-dependent dispersion problem, have been investigated to determine quantitatively how saltwater intrusion and the recirculation of seawater at a coastal boundary are related to the freshwater inflow and the density-driven buoyancy flux. Based on dimensional analysis, saltwater intrusion and the recirculation of seawater are dependent functions of the independent ratio of freshwater advective flux relative to the density-driven vertical buoyancy flux, defined as az (or a for an isotropic aquifer), and the aspect ratio of horizontal and vertical dimensions of the cross-section. For the Henry constant dispersion problem, in which the aquifer is isotropic, saltwater intrusion and recirculation are related to an additional independent dimensionless parameter that is the ratio of the constant dispersion coefficient treated as a scalar quantity, the porosity, and the freshwater advective flux, defined as b. For the Henry velocity-dependent dispersion problem, the ratio b is zero, and saltwater intrusion and recirculation are related to an additional independent dimensionless parameter that is the ratio of the vertical and horizontal dispersivities, or rα = αz/αx. For an anisotropic aquifer, saltwater intrusion and recirculation are also dependent on the ratio of vertical and horizontal hydraulic conductivities, or rK = Kz/Kx. For the field-scale velocity-dependent dispersion problem, saltwater intrusion and recirculation are dependent on the same independent ratios as the Henry velocity-dependent dispersion problem. In the two-dimensional cross-section for all three problems, freshwater inflow occurs at an upgradient boundary, and recirculated seawater outflow occurs at a downgradient coastal boundary. The upgradient boundary is a specified-flux boundary with zero freshwater concentration, and the downgradient boundary is a specified-head boundary with a specified concentration equal to seawater. Equivalent freshwater heads are specified at the downstream boundary to account for density differences between freshwater and saltwater at the downstream boundary. The three problems were solved using the numerical groundwater flow and transport code SEAWAT for two conditions, i.e., first for the uncoupled condition in which the fluid density is constant and thus the flow and transport equations are uncoupled in a constant-density flowfield, and then for the coupled condition in which the fluid density is a function of the total dissolved solids concentration and thus the flow and transport equations are coupled in a variable-density flowfield. A wide range of results for the landward extent of saltwater intrusion and the amount of recirculation of seawater at the coastal boundary was obtained by varying the independent dimensionless ratio az (or a in problem one) in all three problems. The dimensionless dispersion ratio b was also varied in problem one, and the dispersivity ratio rα and the hydraulic conductivity ratio rK were also varied in problems two and three.
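
    A rough numeric sketch of the dimensionless grouping described above may help fix ideas. The specific formulas assumed here (a as the freshwater advective flux over the density-driven buoyancy flux, b as the dispersive over the advective flux) follow common Henry-problem conventions and are an assumption, not taken from the abstract; the input values are the classic Henry test-case numbers.

```python
# Hedged sketch: dimensionless ratios for the Henry problem. The exact formulas
# and sample values below are assumptions based on common conventions.
def henry_ratios(q_f, K_z, rho_f, rho_s, D, n):
    """q_f: freshwater advective flux [m^2/s], K_z: vertical hydraulic conductivity [m/s],
    rho_f/rho_s: freshwater/seawater density, D: constant dispersion coefficient [m^2/s],
    n: porosity."""
    buoyancy_flux = K_z * (rho_s - rho_f) / rho_f   # density-driven vertical buoyancy flux
    a = q_f / buoyancy_flux                         # advective flux relative to buoyancy flux
    b = n * D / q_f                                 # dispersive flux relative to advective flux
    return a, b

a, b = henry_ratios(q_f=6.6e-5, K_z=1.0e-2, rho_f=1000.0, rho_s=1025.0, D=1.886e-5, n=0.35)
print(f"a = {a:.3f}, b = {b:.3f}")   # roughly the classic Henry values a ~ 0.26, b ~ 0.1
```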

  12. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  13. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  14. Job dissatisfaction as a contributor to stress-related mental health problems among Japanese civil servants.

    PubMed

    Tatsuse, Takashi; Sekine, Michikazu

    2013-01-01

    Although studies on the association of job dissatisfaction with mental health have been conducted in the past, few studies have dealt with the complicated links connecting job stress, job dissatisfaction, and stress-related illness. This study seeks to determine how job dissatisfaction is linked to common mental health issues. This study surveyed 3,172 civil servants (2,233 men and 939 women) in 1998, taking poor mental functioning, fatigue, and sleep disturbance as stress-related mental health problems. We examine how psychosocial risk factors at work and job dissatisfaction are associated independently with poor mental functioning, fatigue, and sleep disturbance after adjustment for other known risk factors, and how job dissatisfaction contributes to change in the degree of association between psychosocial risk factors at work and mental health problems. In general, psychosocial risk factors were independently associated with mental health problems. When adjusted for job dissatisfaction, not only was job dissatisfaction independently associated with mental health problems, but the association of psychosocial risk factors with mental health problems also declined. Our results suggest that, although longitudinal research is necessary, attitudes toward satisfaction at work can potentially decrease the negative effects of psychosocial risk factors at work on mental health.

  15. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
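
    The model-selection point above (cross-validation instead of the Akaike Information Criterion) can be illustrated with a much simpler stand-in. The sketch below is not the authors' mixed ICA/PCA algorithm; it only shows cross-validated likelihood selecting the number of retained components under scikit-learn's probabilistic PCA model, using synthetic data.

```python
# Minimal sketch (not the authors' mixed ICA/PCA): choose the number of retained
# components by cross-validated likelihood under probabilistic PCA rather than AIC.
# The data are synthetic: low-rank signal plus noise.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p, true_rank = 500, 10, 3
X = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, p))
X += 0.5 * rng.normal(size=(n, p))

scores = []
for k in range(1, p):
    # PCA.score returns the average log-likelihood under the probabilistic PCA model
    ll = cross_val_score(PCA(n_components=k), X, cv=5).mean()
    scores.append((k, ll))

best_k = max(scores, key=lambda t: t[1])[0]
print("cross-validated number of components:", best_k)
```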

  16. Task-based learning versus problem-oriented lecture in neurology continuing medical education.

    PubMed

    Vakani, Farhan; Jafri, Wasim; Ahmad, Amina; Sonawalla, Aziz; Sheerani, Mughis

    2014-01-01

    To determine whether general practitioners learned better with task-based learning or problem-oriented lecture in a Continuing Medical Education (CME) set-up. Quasi-experimental study. The Aga Khan University, Karachi campus, from April to June 2012. Fifty-nine physicians were given a choice to opt for either Task-based Learning (TBL) or Problem-Oriented Lecture (PBL) in a continuing medical education set-up about headaches. The TBL group had 30 participants divided into 10 small groups, and were assigned case-based tasks. The lecture group had 29 participants. Both groups were given a pre- and a post-test. Pre/post assessment was done using one-best MCQs. The reliability coefficient of scores for both the groups was estimated through Cronbach's alpha. An item analysis for difficulty and discriminatory indices was calculated for both the groups. The paired t-test was used to determine the difference between pre- and post-test scores of both groups. The independent t-test was used to compare the impact of the two teaching methods in terms of learning through scores produced by the MCQ test. Cronbach's alpha was 0.672 for the lecture group and 0.881 for the TBL group. Item analysis for difficulty (p) and discriminatory indices (d) was obtained for both groups. The results for the lecture group showed pre-test (p) = 42% vs. post-test (p) = 43%; pre-test (d) = 0.60 vs. post-test (d) = 0.40. The TBL group showed pre-test (p) = 48% vs. post-test (p) = 70%; pre-test (d) = 0.69 vs. post-test (d) = 0.73. The lecture group's pre-/post-test mean scores were 8.52 ± 2.95 vs. 12.41 ± 2.65 (p < 0.001), whereas the TBL group's were 9.70 ± 3.65 vs. 14 ± 3.99 (p < 0.001). The independent t-test showed an insignificant difference at baseline (lecture 8.52 ± 2.95 vs. TBL 9.70 ± 3.65; p = 0.177). The post-test scores were also not statistically different (lecture 12.41 ± 2.65 vs. TBL 14 ± 3.99; p = 0.07). Both delivery methods were found to be equally effective, showing statistically insignificant differences. However, the TBL group's higher post-test mean scores and the radical increase in the post-test difficulty index demonstrate improved learning through TBL delivery and call for further exploration through longitudinal studies in the context of CME.
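
    The statistical workflow reported above (paired pre/post comparisons within each group, independent comparisons between groups) is easy to reproduce in outline. The score arrays in the sketch below are hypothetical placeholders generated to resemble the group sizes and means, not the study's data.

```python
# Illustrative layout of the reported analyses only; scores are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lecture_pre = rng.normal(8.5, 3.0, 29)
lecture_post = lecture_pre + rng.normal(3.9, 2.0, 29)
tbl_pre = rng.normal(9.7, 3.6, 30)
tbl_post = tbl_pre + rng.normal(4.3, 2.5, 30)

# within-group learning gain: paired t-test on pre/post scores
print(stats.ttest_rel(lecture_pre, lecture_post))
print(stats.ttest_rel(tbl_pre, tbl_post))

# between-group comparison at baseline and after teaching: independent t-test
print(stats.ttest_ind(lecture_pre, tbl_pre))
print(stats.ttest_ind(lecture_post, tbl_post))
```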

  17. Behavioral family intervention for children with developmental disabilities and behavioral problems.

    PubMed

    Roberts, Clare; Mazzucchelli, Trevor; Studman, Lisa; Sanders, Matthew R

    2006-06-01

    The outcomes of a randomized clinical trial of a new behavioral family intervention, Stepping Stones Triple P, for preschoolers with developmental and behavior problems are presented. Forty-eight children with developmental disabilities participated, 27 randomly allocated to an intervention group and 20 to a wait-list control group. Parents completed measures of parenting style and stress, and independent observers assessed parent-child interactions. The intervention was associated with fewer child behavior problems reported by mothers and independent observers, improved maternal and paternal parenting style, and decreased maternal stress. All effects were maintained at 6-month follow-up.

  18. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature-recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
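
    A minimal sketch of the linear-independence check described above: compute the variance inflation factor (VIF) for each column of the load and output matrices and compare the maximum against the threshold of five. The calibration matrices below are random placeholders, not real balance data.

```python
# Hedged sketch of the uniqueness test: both the load matrix and the bridge-output
# matrix must have a maximum VIF below five. Synthetic placeholder data only.
import numpy as np

def max_vif(X):
    """Largest VIF over the columns of X (each column regressed on the others)."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(A)), A])      # intercept term
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - np.var(y - A @ coef) / np.var(y)
        vifs.append(1.0 / (1.0 - r2))
    return max(vifs)

rng = np.random.default_rng(0)
loads = rng.normal(size=(200, 6))                      # applied load components (placeholder)
# each bridge responds primarily to "its" load component, plus small cross-talk
outputs = loads @ (np.eye(6) + 0.05 * rng.normal(size=(6, 6)))

unique = max_vif(loads) < 5 and max_vif(outputs) < 5
print("load/output mapping judged unique:", unique)
```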

  19. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    PubMed

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most prevalent problems in community-living older adults.
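
    The Delphi retention rule (consensus at ≥70% agreement) amounts to a simple tally, sketched below with invented vote counts for three of the categories mentioned in the abstract.

```python
# Toy sketch of the Delphi retention rule (consensus at >= 70% agreement).
# The vote counts below are invented for illustration.
panel_size = 41
votes = {
    "b710 Mobility of joint functions": 36,
    "b152 Emotional functions": 33,
    "d530 Toileting": 20,
}
for category, n_votes in votes.items():
    agreement = n_votes / panel_size
    status = "retained" if agreement >= 0.70 else "no consensus"
    print(f"{category}: {agreement:.0%} agreement -> {status}")
```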

  20. Inter-examiner classification reliability of Mechanical Diagnosis and Therapy for extremity problems - Systematic review.

    PubMed

    Takasaki, Hiroshi; Okuyama, Kousuke; Rosedale, Richard

    2017-02-01

    Mechanical Diagnosis and Therapy (MDT) is used in the treatment of extremity problems. Classifying clinical problems is one method of providing effective treatment to a target population. Classification reliability is a key factor to determine the precise clinical problem and to direct an appropriate intervention. To explore inter-examiner reliability of the MDT classification for extremity problems in three reliability designs: 1) vignette reliability using surveys with patient vignettes, 2) concurrent reliability, where multiple assessors decide a classification by observing someone's assessment, 3) successive reliability, where multiple assessors independently assess the same patient at different times. Systematic review with data synthesis in a quantitative format. Agreement of MDT subgroups was examined using the Kappa value, with the operational definition of acceptable reliability set at ≥ 0.6. The level of evidence was determined considering the methodological quality of the studies. Six studies were included and all studies met the criteria for high quality. Kappa values for the vignette reliability design (five studies) were ≥ 0.7. There was data from two cohorts in one study for the concurrent reliability design and the Kappa values ranged from 0.45 to 1.0. Kappa values for the successive reliability design (data from three cohorts in one study) were < 0.6. The current review found strong evidence of acceptable inter-examiner reliability of MDT classification for extremity problems in the vignette reliability design, limited evidence of acceptable reliability in the concurrent reliability design and unacceptable reliability in the successive reliability design. Copyright © 2017 Elsevier Ltd. All rights reserved.
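
    The agreement statistic behind the review is Cohen's kappa, with 0.6 as the acceptability cut-off; a minimal sketch follows, using hypothetical examiner classifications rather than data from the included studies.

```python
# Sketch of the agreement statistic used in the review: Cohen's kappa between two
# examiners' MDT classifications, with 0.6 as the acceptability cut-off.
# The classification labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

examiner_a = ["Derangement", "Dysfunction", "Derangement", "Other", "Derangement"]
examiner_b = ["Derangement", "Dysfunction", "Other", "Other", "Derangement"]

kappa = cohen_kappa_score(examiner_a, examiner_b)
print(f"kappa = {kappa:.2f} ->", "acceptable" if kappa >= 0.6 else "unacceptable")
```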

  1. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
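
    For orientation only, the sketch below advances the one-dimensional nonlinear shallow water equations with a simple Lax-Friedrichs scheme on a fully wet domain. It is not TUNA-RP, which solves the two-dimensional equations with a wet-dry moving-boundary algorithm; the grid, CFL number, and initial condition are arbitrary choices.

```python
# Bare-bones 1-D nonlinear shallow-water step (Lax-Friedrichs), illustrative only.
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return hu, hu * u + 0.5 * g * h * h

def lax_friedrichs_step(h, hu, dx, dt):
    fh, fhu = flux(h, hu)
    def update(q, fq):
        qn = q.copy()
        qn[1:-1] = 0.5 * (q[2:] + q[:-2]) - dt / (2 * dx) * (fq[2:] - fq[:-2])
        qn[0], qn[-1] = qn[1], qn[-2]        # simple outflow boundaries
        return qn
    return update(h, fh), update(hu, fhu)

# small dam-break style test on a wet domain
x = np.linspace(0.0, 10.0, 201)
dx = x[1] - x[0]
h = np.where(x < 5.0, 2.0, 1.0)
hu = np.zeros_like(h)
for _ in range(200):
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))   # CFL-limited step
    h, hu = lax_friedrichs_step(h, hu, dx, dt)
print("max/min depth after run:", h.max(), h.min())
```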

  2. Housing conditions associated with recurrent gastrointestinal infection in urban Aboriginal children in NSW, Australia: findings from SEARCH.

    PubMed

    Andersen, Melanie J; Skinner, Adam; Williamson, Anna B; Fernando, Peter; Wright, Darryl

    2018-06-01

    To examine the associations between housing and gastrointestinal infection in Aboriginal children in urban New South Wales. A total of 1,398 Aboriginal children were recruited through four Aboriginal Community Controlled Health Services. Multilevel regression modelling of survey data estimated associations between housing conditions and recurrent gastrointestinal infection, adjusting for sociodemographic and health factors. Of the sample, 157 children (11%) had recurrent gastrointestinal infection ever and 37 (2.7%) required treatment for recurrent gastrointestinal infection in the past month. Children in homes with 3+ housing problems were 2.51 (95% CrI 1.10, 2.49) times as likely to have recurrent gastrointestinal infection ever and 6.79 (95% CrI 2.11, 30.17) times as likely to have received recent treatment for it (versus 0-2 problems). For every additional housing problem, the prevalence of recurrent gastrointestinal infection ever increased by a factor of 1.28 (95% CrI 1.14, 1.47) and the prevalence of receiving treatment for gastrointestinal infection in the past month increased by a factor of 1.64 (95% CrI 1.20, 2.48). Housing problems were independently associated with recurrent gastrointestinal infection in a dose-dependent manner. Implications for public health: The role of housing as a potential determinant of health in urban Aboriginal children merits further attention in research and policy settings. © 2018 The Authors.

  3. Connes' embedding problem and Tsirelson's problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junge, M.; Palazuelos, C.; Navascues, M.

    2011-01-15

    We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II_1 factor is a subfactor of the ultrapower of the hyperfinite II_1 factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix valued version of Tsirelson's problem implies a positive one to Connes' problem.

  4. The Problems of Diagnosis and Remediation of Dyscalculia.

    ERIC Educational Resources Information Center

    Price, Nigel; Youe, Simon

    2000-01-01

    Focuses on the problems of diagnosis and remediation of dyscalculia. Explores whether there is justification for believing that specific difficulty with mathematics arises jointly with a specific language problem, or whether a specific difficulty with mathematics can arise independently of problems with language. Uses a case study to illuminate…

  5. Collaborative Problem Solving in Shared Space

    ERIC Educational Resources Information Center

    Lin, Lin; Mills, Leila A.; Ifenthaler, Dirk

    2015-01-01

    The purpose of this study was to examine collaborative problem solving in a shared virtual space. The main question asked was: How will the performance and processes differ between collaborative problem solvers and independent problem solvers over time? A total of 104 university students (63 female and 41 male) participated in an experimental…

  6. Early Childhood Profiles of Sleep Problems and Self-Regulation Predict Later School Adjustment

    ERIC Educational Resources Information Center

    Williams, Kate E.; Nicholson, Jan M.; Walker, Sue; Berthelsen, Donna

    2016-01-01

    Background: Children's sleep problems and self-regulation problems have been independently associated with poorer adjustment to school, but there has been limited exploration of longitudinal early childhood profiles that include both indicators. Aims: This study explores the normative developmental pathway for sleep problems and self-regulation…

  7. Military sexual trauma, combat exposure, and negative urgency as independent predictors of PTSD and subsequent alcohol problems among OEF/OIF veterans.

    PubMed

    Hahn, Austin M; Tirabassi, Christine K; Simons, Raluca M; Simons, Jeffrey S

    2015-11-01

    This study tested a path model of relationships between military sexual trauma (MST), combat exposure, negative urgency, posttraumatic stress disorder (PTSD) symptoms, and alcohol use and related problems. The sample consisted of 86 Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) veterans who reported drinking at least one alcoholic beverage per week. PTSD mediated the relationships between MST and alcohol-related problems, negative urgency and alcohol-related problems, and combat exposure and alcohol-related problems. In addition, negative urgency had a direct effect on alcohol problems. These results indicate that MST, combat exposure, and negative urgency independently predict PTSD symptoms and PTSD symptoms mediate their relationship with alcohol-related problems. Findings support previous literature on the effect of combat exposure and negative urgency on PTSD and subsequent alcohol-related problems. The current study also contributes to the limited research regarding the relationship between MST, PTSD, and alcohol use and related problems. Clinical interventions aimed at reducing emotional dysregulation and posttraumatic stress symptomology may subsequently improve alcohol-related outcomes. (c) 2015 APA, all rights reserved.

  8. Military Sexual Trauma, Combat Exposure, and Negative Urgency as Independent Predictors of PTSD and Subsequent Alcohol Problems among OEF/OIF Veterans

    PubMed Central

    Tirabassi, Christine K.; Simons, Raluca M.; Simons, Jeffrey S.

    2015-01-01

    This study tested a path model of relationships between military sexual trauma (MST), combat exposure, negative urgency, posttraumatic stress disorder (PTSD) symptoms, and alcohol use and related problems. The sample consisted of 86 OEF/OIF veterans who reported drinking at least one alcoholic beverage per week. PTSD mediated the relationships between MST and alcohol-related problems, negative urgency and alcohol-related problems, as well as combat exposure and alcohol-related problems. In addition, negative urgency had a direct effect on alcohol problems. These results indicate that MST, combat exposure, and negative urgency independently predict PTSD symptoms and PTSD symptoms mediate their relationship with alcohol-related problems. Findings support previous literature on the effect of combat exposure and negative urgency on PTSD and subsequent alcohol-related problems. The current study also contributes to the limited research regarding the relationship between MST, PTSD, and alcohol use and related problems. Clinical interventions aimed at reducing emotional dysregulation and posttraumatic stress symptomology may subsequently improve alcohol-related outcomes. PMID:26524279

  9. Geometric Hitting Set for Segments of Few Orientations

    DOE PAGES

    Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S. B.; ...

    2016-01-13

    Here we study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the "hitting points"). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. Lastly, we give hardness and hardness-of-approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
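
    As a generic illustration (not the paper's approximation algorithms), the greedy heuristic below repeatedly picks the candidate point that stabs the largest number of not-yet-hit segments; the segments and candidate points are toy axis-parallel examples.

```python
# Generic greedy hitting-set sketch over an explicit segment -> candidate-point map.
def greedy_hitting_set(segments):
    """segments: dict mapping a segment id to the set of candidate points on it."""
    unhit = set(segments)
    chosen = []
    while unhit:
        # count, for every candidate point, how many unhit segments it stabs
        counts = {}
        for seg in unhit:
            for pt in segments[seg]:
                counts[pt] = counts.get(pt, 0) + 1
        best = max(counts, key=counts.get)
        chosen.append(best)
        unhit = {seg for seg in unhit if best not in segments[seg]}
    return chosen

# three horizontal segments and one vertical line, with integer candidate points
segs = {
    "h1": {(0, 0), (1, 0), (2, 0)},
    "h2": {(1, 1), (2, 1), (3, 1)},
    "h3": {(4, 2), (5, 2)},
    "v1": {(1, 0), (1, 1), (1, 2)},
}
print(greedy_hitting_set(segs))   # e.g. [(1, 0), (2, 1), (4, 2)] or similar
```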

  10. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.

    PubMed

    Dawson, Michael R W; Gupta, Maya

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.
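
    The probability-matching behaviour described above can be reproduced with a few lines of numpy: a single sigmoid output unit trained with the delta rule on stochastic binary rewards, one cue presented in isolation per trial. The learning rate, trial count, and reward probabilities are arbitrary choices, not taken from the paper.

```python
# Minimal probability-matching sketch: one sigmoid output unit, delta-rule updates.
import numpy as np

rng = np.random.default_rng(0)
reward_prob = np.array([0.2, 0.5, 0.8, 0.9])      # one reward probability per cue
n_cues = len(reward_prob)
w = np.zeros(n_cues)
b = 0.0
lr = 0.05

for _ in range(20000):
    cue = rng.integers(n_cues)
    x = np.zeros(n_cues); x[cue] = 1.0             # present one cue in isolation
    y = 1.0 / (1.0 + np.exp(-(w @ x + b)))         # output unit activity
    r = float(rng.random() < reward_prob[cue])     # stochastic binary reward
    w += lr * (r - y) * x                          # delta rule
    b += lr * (r - y)

for cue in range(n_cues):
    x = np.zeros(n_cues); x[cue] = 1.0
    y = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    print(f"cue {cue}: output {y:.2f} vs reward probability {reward_prob[cue]:.2f}")
```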

  11. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability

    PubMed Central

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent’s environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned. PMID:28212422

  12. Pilot climate data system user's guide

    NASA Technical Reports Server (NTRS)

    Reph, M. G.; Treinish, L. A.; Bloch, L.

    1984-01-01

    Instructions for using the Pilot Climate Data System (PCDS), an interactive, scientific data management system for locating, obtaining, manipulating, and displaying climate-research data, are presented. The PCDS currently provides this support for approximately twenty data sets. Figures that illustrate the terminal displays which a user sees when he/she runs the PCDS and some examples of the output from this system are included. The capabilities which are described in detail allow a user to perform the following: (1) obtain comprehensive descriptions of a number of climate parameter data sets and the associated sensor measurements from which they were derived; (2) obtain detailed information about the temporal coverage and data volume of data sets which are readily accessible via the PCDS; (3) extract portions of a data set using criteria such as time range and geographic location, and output the data to tape, user terminal, system printer, or online disk files in a special data-set-independent format; (4) access and manipulate the data in these data-set-independent files, performing such functions as combining the data, subsetting the data, and averaging the data; and (5) create various graphical representations of the data stored in the data-set-independent files.

  13. Linking Family Characteristics with Poor Peer Relations: The Mediating Role of Conduct Problems

    PubMed Central

    Bierman, Karen Linn; Smoot, David L.

    2012-01-01

    Parent, teacher, and peer ratings were collected for 75 grade school boys to test the hypothesis that certain family interaction patterns would be associated with poor peer relations. Path analyses provided support for a mediational model, in which punitive and ineffective discipline was related to child conduct problems in home and school settings which, in turn, predicted poor peer relations. Further analyses suggested that distinct subgroups of boys could be identified who exhibited conduct problems at home only, at school only, in both settings, or in neither setting. Boys who exhibited cross-situational conduct problems were more likely to experience multiple concurrent problems (e.g., in both home and school settings) and were more likely than any other group to experience poor peer relations. However, only about one-third of the boys with poor peer relations in this sample exhibited problem profiles consistent with the proposed model (e.g., experienced high rates of punitive/ineffective home discipline and exhibited conduct problems in home and school settings), suggesting that the proposed model reflects one common (but not exclusive) pathway to poor peer relations. PMID:1865049

  14. Electrodynamics; Problems and solutions

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina C.; Schrecengost, Zachariah S.

    2018-05-01

    This book of problems and solutions is a natural continuation of Ilie and Schrecengost's first book Electromagnetism: Problems and Solutions. Aimed towards students who would like to work independently on more electrodynamics problems in order to deepen their understanding and problem-solving skills, this book discusses main concepts and techniques related to Maxwell's equations, conservation laws, electromagnetic waves, potentials and fields, and radiation.

  15. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm.

    PubMed

    Cui, Lizhi; Poon, Josiah; Poon, Simon K; Chen, Hao; Gao, Junbin; Kwan, Paul; Fan, Kei; Ling, Zhihao

    2014-01-01

    The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which may be impossible, especially when the compounds are complex or white noise is present. A new method that extracts compounds directly from the 3D chromatogram is needed. In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization problem. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and distances among the chromosomes. Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate a 3D chromatogram into chromatographic peaks and spectra successfully even when they overlap severely. The experiments also show that our method is effective on a real HPLC-DAD data set. Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective.

  16. PCA determination of the radiometric noise of high spectral resolution infrared observations from spectral residuals: Application to IASI

    NASA Astrophysics Data System (ADS)

    Serio, C.; Masiello, G.; Camy-Peyret, C.; Jacquette, E.; Vandermarcq, O.; Bermudo, F.; Coppens, D.; Tobin, D.

    2018-02-01

    The problem of characterizing and estimating the instrumental or radiometric noise of satellite high spectral resolution infrared spectrometers directly from Earth observations is addressed in this paper. An approach has been developed which relies on Principal Component Analysis (PCA) with a suitable criterion to select the optimal number of PC scores. Different selection criteria have been set up and analysed, based on least-squares estimation theory and/or the maximum likelihood principle. The approach is independent of any forward model and/or radiative transfer calculations. The PCA is used to define an orthogonal basis, which, in turn, is used to derive an optimal linear reconstruction of the observations. The residual vector, that is, the observation vector minus the calculated or reconstructed one, is then used to estimate the instrumental noise. It is shown that the use of the spectral residuals to assess the radiometric instrumental noise leads to efficient estimators, which are largely independent of possible departures of the true noise from that assumed a priori to model the observational covariance matrix. Application to the Infrared Atmospheric Sounding Interferometer (IASI) has been considered. A series of case studies has been set up that makes use of IASI observations. As a major result, the analysis confirms the high stability and radiometric performance of IASI. The approach also proved to be efficient in characterizing noise features due to mechanical micro-vibrations of the beam splitter of the IASI instrument.
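
    A stripped-down illustration of the core idea follows: reconstruct the observations from a truncated principal-component basis and read the noise level off the residuals. Synthetic spectra are used, and the paper's criteria for choosing the number of PC scores are not reproduced.

```python
# PCA residual noise estimation, illustrative only: synthetic low-rank "spectra" plus noise.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_chan, rank, sigma_true = 2000, 300, 8, 0.02

signal = rng.normal(size=(n_obs, rank)) @ rng.normal(size=(rank, n_chan))
obs = signal + sigma_true * rng.normal(size=(n_obs, n_chan))

X = obs - obs.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)

k = rank                                   # number of retained PC scores (assumed known here)
recon = (X @ Vt[:k].T) @ Vt[:k]            # optimal linear reconstruction from k components
residual = X - recon
print(f"estimated noise {residual.std():.4f} vs true {sigma_true:.4f}")
```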

  17. On the Contribution of Curl-Free Current Patterns to the Ultimate Intrinsic Signal-to-Noise Ratio at Ultra-High Field Strength.

    PubMed

    Pfrommer, Andreas; Henning, Anke

    2017-05-01

    The ultimate intrinsic signal-to-noise ratio (SNR) is a coil-independent performance measure to compare different receive coil designs. To evaluate this benchmark in a sample, a complete electromagnetic basis set is required. The basis set can be obtained by curl-free and divergence-free surface current distributions, which excite linearly independent solutions to Maxwell's equations. In this work, we quantitatively investigate the contribution of curl-free current patterns to the ultimate intrinsic SNR in a spherical head-sized model at 9.4 T. To that end, we compare the ultimate intrinsic SNR obtained with only curl-free or only divergence-free current patterns with the ultimate intrinsic SNR obtained from a combination of curl-free and divergence-free current patterns. The influence of parallel imaging is studied for various acceleration factors. Moreover, results for different field strengths (1.5 T up to 11.7 T) are presented at specific voxel positions and acceleration factors. The full-wave electromagnetic problem is analytically solved using dyadic Green's functions. We show that at ultra-high field strengths (B0 ≥ 7 T) a combination of curl-free and divergence-free current patterns is required to achieve the best possible SNR at any position in a spherical head-sized model. On 1.5 T and 3 T platforms, divergence-free current patterns are sufficient to cover more than 90% of the ultimate intrinsic SNR. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Further insight into the incremental value of new markers: the interpretation of performance measures and the importance of clinical context.

    PubMed

    Kerr, Kathleen F; Bansal, Aasthaa; Pepe, Margaret S

    2012-09-15

    In this issue of the Journal, Pencina et al. (Am J Epidemiol. 2012;176(6):492-494) examine the operating characteristics of measures of incremental value. Their goal is to provide benchmarks for the measures that can help identify the most promising markers among multiple candidates. They consider a setting in which new predictors are conditionally independent of established predictors. In the present article, the authors consider more general settings. Their results indicate that some of the conclusions made by Pencina et al. are limited to the specific scenarios the authors considered. For example, Pencina et al. observed that continuous net reclassification improvement was invariant to the strength of the baseline model, but the authors of the present study show that this invariance does not hold generally. Further, they disagree with the suggestion that such invariance would be desirable for a measure of incremental value. They also do not see evidence to support the claim that the measures provide complementary information. In addition, they show that correlation with baseline predictors can lead to much bigger gains in performance than the conditional independence scenario studied by Pencina et al. Finally, the authors note that the motivation of providing benchmarks actually reinforces previous observations that the problem with these measures is that they do not have useful clinical interpretations. If they did, researchers could use the measures directly and benchmarks would not be needed.

  19. Inducing mental set constrains procedural flexibility and conceptual understanding in mathematics.

    PubMed

    DeCaro, Marci S

    2016-10-01

    An important goal in mathematics is to flexibly use and apply multiple, efficient procedures to solve problems and to understand why these procedures work. One factor that may limit individuals' ability to notice and flexibly apply strategies is the mental set induced by the problem context. Undergraduate (N = 41, Experiment 1) and fifth- and sixth-grade students (N = 87, Experiment 2) solved mathematical equivalence problems in one of two set-inducing conditions. Participants in the complex-first condition solved problems without a repeated addend on both sides of the equal sign (e.g., 7 + 5 + 9 = 3 + _), which required multistep strategies. Then these students solved problems with a repeated addend (e.g., 7 + 5 + 9 = 7 + _), for which a shortcut strategy could be readily used (i.e., adding 5 + 9). Participants in the shortcut-first condition solved the same problem set but began with the shortcut problems. Consistent with laboratory studies of mental set, participants in the complex-first condition were less likely to use the more efficient shortcut strategy when possible. In addition, these participants were less likely to demonstrate procedural flexibility and conceptual understanding on a subsequent assessment of mathematical equivalence knowledge. These findings suggest that certain problem-solving contexts can help or hinder both flexibility in strategy use and deeper conceptual thinking about the problems.

  20. Determination of criteria weights in solving multi-criteria problems

    NASA Astrophysics Data System (ADS)

    Kasim, Maznah Mat

    2014-12-01

    A multi-criteria (MC) problem comprises units to be analyzed under a set of evaluation criteria. Solving an MC problem is basically the process of finding the overall performance or overall quality of the units of analysis by using a certain aggregation method. Based on these overall measures of each unit, a decision can be made whether to sort them, to select the best or to group them according to certain ranges. Prior to solving the MC problems, the weights of the related criteria have to be determined, with the assumption that the weights represent the degree of importance or the degree of contribution towards the overall performance of the units. This paper presents two main approaches, called the subjective and objective approaches, where the first involves evaluator(s) while the latter depends on the intrinsic information contained in each criterion. Subjective and objective weights are defined if the criteria are assumed to be independent of each other; if they are dependent, another type of weight, called monotone measure or compound weights, represents the degree of interaction among the criteria. The measure of individual weights or compound weights must be addressed in solving multi-criteria problems so that the solutions are more reliable, since in the real world evaluation criteria always come with different degrees of importance or are dependent on each other. As real MC problems are each unique, it is up to the decision maker(s) to decide which type of weights and which method are most applicable for the problem under study.
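
    As one concrete example of an objective approach (the entropy weight method, chosen here as a common illustration rather than anything prescribed by the paper), the sketch below derives criteria weights from the intrinsic dispersion of a hypothetical decision matrix.

```python
# Entropy-based objective criteria weights; decision matrix values are hypothetical.
import numpy as np

X = np.array([[7.0, 0.3, 120.0],
              [5.0, 0.9,  80.0],
              [9.0, 0.5, 150.0],
              [6.0, 0.7, 100.0]])            # units (rows) x criteria (columns)

P = X / X.sum(axis=0)                         # column-wise proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)    # entropy of each criterion
diversity = 1.0 - entropy                     # more dispersed column -> more informative
weights = diversity / diversity.sum()
print("objective criteria weights:", np.round(weights, 3))
```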

  1. 46 CFR 129.315 - Power sources for OSVs of 100 or more gross tons.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... VESSELS ELECTRICAL INSTALLATIONS Power Sources and Distribution Systems § 129.315 Power sources for OSVs... one set must be independent of the main propulsion plant. A generator not independent of the main propulsion plant must comply with § 111.10-4(d) of this chapter. With any one generating set stopped, the...

  2. Psychological wellbeing, physical impairments and rural aging in a developing country setting.

    PubMed

    Abas, Melanie A; Punpuing, Sureeporn; Jirapramupitak, Tawanchai; Tangchonlatip, Kanchana; Leese, Morven

    2009-07-16

    There has been very little research on wellbeing, physical impairments and disability in older people in developing countries. A community survey of 1147 older parents, one per household, aged sixty and over in rural Thailand. We used the Burvill scale of physical impairment, the Thai Psychological Wellbeing Scale and the brief WHO Disability Assessment Schedule. We rated received and perceived social support separately from children and from others and rated support to children. We used weighted analyses to take account of the sampling design. Impairments due to arthritis, pain, paralysis, vision, stomach problems or breathing were all associated with lower wellbeing. After adjusting for disability, only impairment due to paralysis was independently associated with lowered wellbeing. The effect of having two or more impairments compared to none was associated with lowered wellbeing after adjusting for demographic factors and social support (adjusted difference -2.37 on the well-being scale with SD = 7.9, p < 0.001) but after adjusting for disability the coefficient fell and was non-significant. The parsimonious model for wellbeing included age, wealth, social support, disability and impairment due to paralysis (the effect of paralysis was -2.97, p = 0.001). In this Thai setting, received support from children and from others and perceived good support from and to children were all independently associated with greater wellbeing whereas actual support to children was associated with lower wellbeing. Low received support from children interacted with paralysis in being especially associated with low wellbeing. In this Thai setting, as found in western settings, most of the association between physical impairments and lower wellbeing is explained by disability. Disability is potentially mediating the association between impairment and low wellbeing. Received support may buffer the impact of some impairments on wellbeing in this setting. Giving actual support to children is associated with less wellbeing unless the support being given to children is perceived as good, perhaps reflecting parental obligation to support adult children in need. Improving community disability services for older people and optimizing received social support will be vital in rural areas in developing countries.

  3. Extension of mixture-of-experts networks for binary classification of hierarchical data.

    PubMed

    Ng, Shu-Kay; McLachlan, Geoffrey J

    2007-09-01

    For many applied problems in the context of medically relevant artificial intelligence, the data collected exhibit a hierarchical or clustered structure. Ignoring the interdependence between hierarchical data can result in misleading classification. In this paper, we extend the mechanism for mixture-of-experts (ME) networks for binary classification of hierarchical data. Another extension is to quantify cluster-specific information on data hierarchy by random effects via the generalized linear mixed-effects model (GLMM). The extension of ME networks is implemented by allowing for correlation in the hierarchical data in both the gating and expert networks via the GLMM. The proposed model is illustrated using a real thyroid disease data set. In our study, we consider 7652 thyroid diagnosis records from 1984 to early 1987 with complete information on 20 attribute values. We obtain 10 independent random splits of the data into a training set and a test set in the proportions 85% and 15%. The test sets are used to assess the generalization performance of the proposed model, based on the percentage of misclassifications. For comparison, the results obtained from the ME network with independence assumption are also included. With the thyroid disease data, the misclassification rate on test sets for the extended ME network is 8.9%, compared to 13.9% for the ME network. In addition, based on model selection methods described in Section 2, a network with two experts is selected. These two expert networks can be considered as modeling two groups of patients with high and low incidence rates. Significant variation among the predicted cluster-specific random effects is detected in the patient group with low incidence rate. It is shown that the extended ME network outperforms the ME network for binary classification of hierarchical data. With the thyroid disease data, useful information on the relative log odds of patients with diagnosed conditions at different periods can be evaluated. This information can be taken into consideration for the assessment of treatment planning of the disease. The proposed extended ME network thus facilitates a more general approach to incorporate data hierarchy mechanism in network modeling.

  4. Internet MEMS design tools based on component technology

    NASA Astrophysics Data System (ADS)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium-sized enterprises specialized in products to solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business, the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem calls for its own specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access along with methodologies of flexible licensing, e.g., on a pay-per-use basis. New communication technologies like ADSL, TV cable or satellites as carriers promise to offer a bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation even including a graphical editor for process specification. Currently, a new version is brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own version of a design tool set dedicated to the requirements of the current problem to be solved.

  5. Boundaries on Range-Range Constrained Admissible Regions for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Gaebler, J. A.; Axelrad, P.; Schumacher, P. W., Jr.

    We propose a new type of admissible-region analysis for track initiation in multi-satellite problems when apparent angles measured at known stations are the only observable. The goal is to create an efficient and parallelizable algorithm for computing initial candidate orbits for a large number of new targets. It takes at least three angles-only observations to establish an orbit by traditional means. Thus one is faced with a problem that requires N-choose-3 sets of calculations to test every possible combination of the N observations. An alternative approach is to reduce the number of combinations by making hypotheses of the range to a target along the observed line-of-sight. If realistic bounds on the range are imposed, consistent with a given partition of the space of orbital elements, a pair of range possibilities can be evaluated via Lambert’s method to find candidate orbits for that partition, which then requires N-choose-2 times M-choose-2 combinations, where M is the average number of range hypotheses per observation. The contribution of this work is a set of constraints that establish bounds on the range-range hypothesis region for a given element-space partition, thereby minimizing M. Two effective constraints were identified, which, together, constrain the hypothesis region in range-range space to nearly that of the true admissible region based on an orbital partition. The first constraint is based on the geometry of the vacant orbital focus. The second constraint is based on time-of-flight and Lagrange’s form of Kepler’s equation. A complete and efficient parallelization of the problem is possible with this approach because the element partitions can be arbitrary and can be handled independently of each other.
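
    The combinatorial payoff of the range-hypothesis formulation is easy to quantify; the sketch below compares the N-choose-3 count against the N-choose-2 times M-choose-2 count for illustrative values of N and M.

```python
# Back-of-the-envelope comparison of the two association strategies; N and M are illustrative.
from math import comb

N = 1000    # number of uncorrelated angles-only observations
M = 10      # average number of range hypotheses per observation

triples = comb(N, 3)
pairs_with_ranges = comb(N, 2) * comb(M, 2)
print(f"N-choose-3 combinations:  {triples:,}")
print(f"pairs x range hypotheses: {pairs_with_ranges:,}")
```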

  6. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
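
    The second stage described above (an integer linear program over a fixed pattern set) can be sketched with SciPy's MILP interface. The pattern matrix, stock lengths, and demands below are hypothetical and are not produced by the PSG's column-generation stage.

```python
# Hedged sketch of the second-stage integer program over a fixed, hypothetical pattern set.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# patterns[i, j] = number of pieces of item j cut by pattern i
patterns = np.array([[3, 0, 1],
                     [0, 2, 2],
                     [1, 1, 0],
                     [0, 0, 4]])
stock_len = np.array([6.0, 6.0, 3.0, 5.0])   # length of the stock object each pattern uses
demand = np.array([25, 30, 40])              # required pieces of each item

res = milp(
    c=stock_len,                                               # minimize total stock length used
    constraints=LinearConstraint(patterns.T, lb=demand, ub=np.inf),
    integrality=np.ones(len(stock_len)),                       # pattern repetitions are integers
    bounds=Bounds(lb=0),
)
print("pattern repetitions:", res.x, "total length:", res.fun)
```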

  7. 42 CFR 410.33 - Independent diagnostic testing facility.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... problem and who uses the results in the management of the beneficiary's specific medical problem... the results in the management of the beneficiary's specific medical problem. Nonphysician... SERVICES MEDICARE PROGRAM SUPPLEMENTARY MEDICAL INSURANCE (SMI) BENEFITS Medical and Other Health Services...

  8. Fuel, environmental, and transmission pricing considerations in a deregulated environment

    NASA Astrophysics Data System (ADS)

    Obessis, Emmanouil Vlassios

    The 1992 National Energy Policy Act drastically changed the traditional structure of the vertically integrated utility. To facilitate increased competition in the power utility sector, all markets related to power generation have been opened to free competition and trading. To survive in the new competitive environment, power producers need to reduce costs and increase efficiency. Fuel marketing strategies are thus becoming more aggressive and fuel markets are becoming more competitive, offering more options for fuel supplies and contracts. At the same time, the 1990 Clean Air Act Amendments are taking effect. While tightening emission standards, this legislation offers utilities wider flexibility in choosing compliance strategies. It also sets maximum annual allowable emission levels, replacing the traditional uniform maximum emission rates. The bill also introduced the concept of marketable emission allowances and provided for the establishment of nationwide markets where allowances may be traded, sold, or purchased. Several fuel- and emission-constrained algorithms have been presented historically, but the two classes of constraints have generally been handled independently. The multiobjective optimization model developed in this research concurrently satisfies sets of detailed fuel and emission limits, modeling more accurately the fuel supply and environmental limitations and their complexities in the new deregulated operational environment. Development of the implementation software is an integral part of this research project. This software may be useful for both daily scheduling activities and short-term operational planning. A Lagrangian multipliers-based variant is used to solve the problem. Single line searches are used to update the multipliers, thus offering attractive execution times. This work also investigates the applicability of cooperative games to the problem of transmission cost allocation. Interest in game theory as a powerful tool to solve common-property allocation problems has been renewed. A simple allocation framework is developed using capacity-based costing rules. Different solution concepts are applied to solve small-scale transmission pricing problems. Game models may prove useful in investigating "what if" scenarios.
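
    The closing remarks on cooperative games suggest a small illustration. The sketch below computes the Shapley value, one common solution concept, for a hypothetical three-user transmission cost game; the characteristic function is invented for the example and is not taken from the study.

```python
from itertools import permutations

# Characteristic function: cost of serving each coalition of users (hypothetical values).
players = ("A", "B", "C")
cost = {
    frozenset(): 0.0,
    frozenset("A"): 60.0, frozenset("B"): 50.0, frozenset("C"): 40.0,
    frozenset("AB"): 90.0, frozenset("AC"): 80.0, frozenset("BC"): 70.0,
    frozenset("ABC"): 110.0,
}

def shapley(players, cost):
    """Average marginal cost of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

# The allocations sum to the cost of the grand coalition (110.0 here).
print(shapley(players, cost))
```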

  9. Computers and clinical arrhythmias.

    PubMed

    Knoebel, S B; Lovelace, D E

    1983-02-01

    Cardiac arrhythmias are ubiquitous in normal and abnormal hearts. These disorders may be life-threatening or benign, symptomatic or unrecognized. Arrhythmias may be the precursor of sudden death, a cause or effect of cardiac failure, a clinical reflection of acute or chronic disorders, or a manifestation of extracardiac conditions. Progress is being made toward unraveling the diagnostic and therapeutic problems involved in arrhythmogenesis. Many of the advances would not be possible, however, without the availability of computer technology. To preserve the proper balance and purposeful progression of computer usage, engineers and physicians have been exhorted not to work independently in this field. Both should learn some of the other's trade. The two disciplines need to come together to solve important problems with computers in cardiology. The intent of this article was to acquaint the practicing cardiologist with some of the extant and envisioned computer applications and some of the problems with both. We conclude that computer-based database management systems are necessary for sorting out the clinical factors of relevance for arrhythmogenesis, but computer database management systems are beset with problems that will require sophisticated solutions. The technology for detecting arrhythmias on routine electrocardiograms is quite good but human over-reading is still required, and the rationale for computer application in this setting is questionable. Systems for qualitative, continuous monitoring and review of extended time ECG recordings are adequate with proper noise rejection algorithms and editing capabilities. The systems are limited presently for clinical application to the recognition of ectopic rhythms and significant pauses. Attention should now be turned to the clinical goals for detection and quantification of arrhythmias. We should be asking the following questions: How quantitative do systems need to be? Are computers required for the detection of all arrhythmias? In all settings? Should we be focusing alternatively on those arrhythmias that are frequent and with clinical significance? The ultimate test of any technology is, after all, its use in advancing knowledge and patient care.

  10. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem; however, most existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP) and propose an ILP-based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves solution quality and is robust.
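
    As a rough sketch of the ILP formulation mentioned above, the code below encodes one common definition of a positive influence dominating set, in which every vertex must have at least half of its neighbors (rounded up) in the set. The toy graph is hypothetical, PuLP stands in for whatever solver the ILPMA actually embeds, and the paper's exact formulation may differ.

```python
import math
import pulp

# Adjacency list of a small hypothetical social network.
adj = {
    1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5],
    4: [2, 5], 5: [3, 4],
}

prob = pulp.LpProblem("min_pids", pulp.LpMinimize)
x = {v: pulp.LpVariable(f"x{v}", cat="Binary") for v in adj}

# Objective: size of the dominating set.
prob += pulp.lpSum(x.values())

# Positive influence constraint: each vertex needs >= ceil(deg/2) chosen neighbors.
for v, nbrs in adj.items():
    prob += pulp.lpSum(x[u] for u in nbrs) >= math.ceil(len(nbrs) / 2)

prob.solve()
print("PIDS:", sorted(v for v in adj if x[v].value() > 0.5))
```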

  11. A Metacognitive Profile of Vocational High School Student’s Field Independent in Mathematical Problem Solving

    NASA Astrophysics Data System (ADS)

    Nugraheni, L.; Budayasa, I. K.; Suwarsono, S. T.

    2018-01-01

    The study was designed to examine the metacognitive profile of a vocational high school student in the Machine Technology program who had high ability and a field-independent cognitive style in mathematical problem solving. The design of this study was exploratory research with a qualitative approach. The research was conducted in the Machine Technology program of a vocational senior high school. The results revealed that the high-ability student with a field-independent cognitive style practiced metacognition well, engaging in all three types of metacognitive activity (planning, monitoring, and evaluating) at metacognition level 2 (aware use), 3 (strategic use), or 4 (reflective use) during mathematical problem solving. The subject's metacognitive practice was never at level 1 (tacit use). This indicated that the participant was already aware, capable of choosing strategies, and able to reflect on his own thinking before, during, and after solving mathematical problems, which is essential for vocational high school students in the Machine Technology program.

  12. The rural community care gerontologic nurse entrepreneur: role development strategies.

    PubMed

    Caffrey, Rosalie A

    2005-10-01

    Rural elderly individuals are an underserved population with limited access to health care. There is an increasing need for independent community care nurses to provide assistance to home-based elderly individuals with chronic illnesses to prevent unnecessary medical and placement decisions and, thus, allow them to maintain independence and quality of life. This article describes the rural setting and why community care nurses are needed, and explores strategies for implementing the role of the independent nurse entrepreneur in caring for community-based elderly individuals in rural settings.

  13. Mathematical visualization process of junior high school students in solving a contextual problem based on cognitive style

    NASA Astrophysics Data System (ADS)

    Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko

    2017-08-01

    The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems based on cognitive style. The mathematical visualization process in this research was seen from the aspects of image generation, image inspection, image scanning, and image transformation. The research subjects were eighth-grade students, selected using the GEFT (Group Embedded Figures Test) adopted from Witkin to determine the students' cognitive style category, namely field independent or field dependent, and on the basis of being communicative. Data were collected through a visualization test on a contextual problem and interviews. Validity was established through time triangulation. The data analysis addressed the aspects of mathematical visualization through the steps of categorization, reduction, discussion, and conclusion. The results showed that the field-independent and field-dependent subjects differed in responding to contextual problems. The field-independent subject presented the problem in 2D and 3D form, while the field-dependent subject presented it in 3D form. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, while the field-dependent subject viewed it from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed a transformation, rotating the object, to obtain the solution. This research serves as a reference for junior high school mathematics curriculum developers in Indonesia. In addition, teachers could develop students' mathematical visualization by using technology media or software, such as GeoGebra or portable Cabri, in learning.

  14. Preschoolers' Cooperative Problem Solving: Integrating Play and Problem Solving

    ERIC Educational Resources Information Center

    Ramani, Geetha B.; Brownell, Celia A.

    2014-01-01

    Cooperative problem solving with peers plays a central role in promoting children's cognitive and social development. This article reviews research on cooperative problem solving among preschool-age children in experimental settings and social play contexts. Studies suggest that cooperative interactions with peers in experimental settings are…

  15. Is the technical performance of young soccer players influenced by hormonal status, sexual maturity, anthropometric profile, and physical performance?

    PubMed

    Moreira, Alexandre; Massa, Marcelo; Thiengo, Carlos R; Rodrigues Lopes, Rafael Alan; Lima, Marcelo R; Vaeyens, Roel; Barbosa, Wesley P; Aoki, Marcelo S

    2017-12-01

    The aim of this study was to examine the influence of hormonal status, anthropometric profile, sexual maturity level, and physical performance on the technical abilities of 40 young male soccer players during small-sided games (SSGs). Anthropometric profiling, saliva sampling, sexual maturity assessment (Tanner scale), and physical performance tests (Yo-Yo and vertical jumps) were conducted two weeks prior to the SSGs. Salivary testosterone was determined by the enzyme-linked immunosorbent assay method. Technical performance was determined by the frequency of actions during SSGs. Principal component analyses identified four technical actions of importance: total number of passes, effectiveness, goal attempts, and total tackles. A multivariate canonical correlation analysis was then employed to assess the prediction of the set of dependent variables (the four technical actions) from an independent set of variables composed of testosterone concentration, stage of pubic hair and genitalia development, vertical jumps, and Yo-Yo performance. A moderate-to-large relationship between the technical performance set and the independent set was observed. The canonical correlation was 0.75, with a canonical R² of 0.45. The highest structure coefficient in the technical performance set was observed for tackles (0.77), while testosterone presented the highest structure coefficient (0.75) among the variables of the independent set. The current data suggest that the selected independent set of variables might be useful in predicting SSG performance in young soccer players. Coaches should be aware that physical development plays a key role in technical performance, to avoid decision-making mistakes during the selection of young players.
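
    A minimal sketch of a canonical correlation analysis in the spirit of the study above, run on synthetic data since the original dataset is not available; scikit-learn's CCA is assumed as the implementation, not necessarily the package the authors used.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 40  # hypothetical number of players

# Independent set: testosterone, maturity stage, vertical jump, Yo-Yo (4 columns).
X = rng.normal(size=(n, 4))
# Dependent set: passes, effectiveness, goal attempts, tackles, loosely driven by X.
Y = 0.6 * X @ rng.normal(size=(4, 4)) + rng.normal(size=(n, 4))

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)

# First canonical correlation between the two variable sets.
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}, canonical R^2: {r**2:.2f}")
```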

  16. High-grading bias: subtle problems with assessing power of selected subsets of loci for population assignment.

    PubMed

    Waples, Robin S

    2010-07-01

    Recognition of the importance of cross-validation ('any technique or instance of assessing how the results of a statistical analysis will generalize to an independent dataset'; Wiktionary, en.wiktionary.org) is one reason that the U.S. Securities and Exchange Commission requires all investment products to carry some variation of the disclaimer, 'Past performance is no guarantee of future results.' Even a cursory examination of financial behaviour, however, demonstrates that this warning is regularly ignored, even by those who understand what an independent dataset is. In the natural sciences, an analogue to predicting future returns for an investment strategy is predicting power of a particular algorithm to perform with new data. Once again, the key to developing an unbiased assessment of future performance is through testing with independent data--that is, data that were in no way involved in developing the method in the first place. A 'gold-standard' approach to cross-validation is to divide the data into two parts, one used to develop the algorithm, the other used to test its performance. Because this approach substantially reduces the sample size that can be used in constructing the algorithm, researchers often try other variations of cross-validation to accomplish the same ends. As illustrated by Anderson in this issue of Molecular Ecology Resources, however, not all attempts at cross-validation produce the desired result. Anderson used simulated data to evaluate performance of several software programs designed to identify subsets of loci that can be effective for assigning individuals to population of origin based on multilocus genetic data. Such programs are likely to become increasingly popular as researchers seek ways to streamline routine analyses by focusing on small sets of loci that contain most of the desired signal. Anderson found that although some of the programs made an attempt at cross-validation, all failed to meet the 'gold standard' of using truly independent data and therefore produced overly optimistic assessments of power of the selected set of loci--a phenomenon known as 'high grading bias.'
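
    A minimal simulation of the high-grading bias described above, assuming purely random genotype-like data so that any apparent assignment power is spurious: selecting loci on the full dataset inflates accuracy, whereas selecting them only on a training half and testing on an independent half gives the honest, near-chance answer. The scikit-learn calls are for illustration only, not the software evaluated by Anderson.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 500))     # 100 individuals x 500 uninformative "loci"
y = rng.integers(0, 2, size=100)    # two populations assigned at random

# Biased protocol: pick the 10 most discriminating loci using ALL the data, then evaluate.
biased_idx = SelectKBest(f_classif, k=10).fit(X, y).get_support()
Xtr, Xte, ytr, yte = train_test_split(X[:, biased_idx], y, test_size=0.5, random_state=0)
biased_acc = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)

# Gold standard: split first, then select loci using only the training half.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
sel = SelectKBest(f_classif, k=10).fit(Xtr, ytr)
honest_acc = LogisticRegression().fit(sel.transform(Xtr), ytr).score(sel.transform(Xte), yte)

print(f"assignment accuracy, loci selected on all data : {biased_acc:.2f}")
print(f"assignment accuracy, truly independent test set: {honest_acc:.2f}")
```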

  17. Clinical Problem Analysis (CPA): A Systematic Approach To Teaching Complex Medical Problem Solving.

    ERIC Educational Resources Information Center

    Custers, Eugene J. F. M.; Robbe, Peter F. De Vries; Stuyt, Paul M. J.

    2000-01-01

    Discusses clinical problem analysis (CPA) in medical education, an approach to solving complex clinical problems. Outlines the five step CPA model and examines the value of CPA's content-independent (methodical) approach. Argues that teaching students to use CPA will enable them to avoid common diagnostic reasoning errors and pitfalls. Compares…

  18. Independence Pending: Teacher Behaviors Preceding Learner Problem Solving

    ERIC Educational Resources Information Center

    Roesler, Rebecca A.

    2017-01-01

    The purposes of the present study were to identify the teacher behaviors that preceded learners' active participation in solving musical and technical problems and describe learners' roles in the problem-solving process. I applied an original model of problem solving to describe the behaviors of teachers and students in 161 rehearsal frames…

  19. Prompting in Web-Based Environments: Supporting Self-Monitoring and Problem Solving Skills in College Students

    ERIC Educational Resources Information Center

    Kauffman, Douglas F.; Ge, Xun; Xie, Kui; Chen, Ching-Huei

    2008-01-01

    This study explored Metacognition and how automated instructional support in the form of problem-solving and self-reflection prompts influenced students' capacity to solve complex problems in a Web-based learning environment. Specifically, we examined the independent and interactive effects of problem-solving prompts and reflection prompts on…

  20. Working Memory and Impulsivity Predict Marijuana-Related Problems Among Frequent Users

    PubMed Central

    Day, Anne M.; Metrik, Jane; Spillane, Nichea S.; Kahler, Christopher W.

    2012-01-01

    Background: Although marijuana is the most commonly used illicit substance in the US, only a small portion of users go on to develop dependence, suggesting that there are substantial individual differences in vulnerability to marijuana-related problems among users. Deficits in working memory and high trait impulsivity are two factors that may place marijuana users at increased risk for experiencing related problems. Methods: Using baseline data from an experimental study that recruited 104 frequent marijuana users (M=71.86% of prior 60 days, SD=22%), we examined the associations of working memory and trait impulsivity with marijuana-related problems. Results: Lower working memory, as measured by Trail Making Test B, but not short-term memory capacity, predicted more marijuana-related problems. Higher trait impulsivity scores were independently associated with a greater number of problems. Conclusions: Results suggest that marijuana users with reduced executive cognitive ability are more susceptible to developing problems related to their use. Trait impulsivity and executive working memory appear to be independent risk factors for experiencing marijuana-related problems. PMID:23312340
