ERIC Educational Resources Information Center
Schindler, Holly Reed; Horner, Robert H.
2005-01-01
The effects of functional communication training on the generalized reduction of problem behavior with three 4- to 5-year-old children with autism and problem behavior were evaluated. Participants were assessed in primary teaching settings and in three secondary, generalization settings. Through baseline analysis, lower effort interventions in the…
NASA Technical Reports Server (NTRS)
Bergeron, H. P.
1980-01-01
Data obtained from the NASA Aviation Safety Reporting System (ASRS) data base were used to determine problems in general aviation single pilot IFR operations. The data examined consisted of incident reports involving flight safety in the National Aviation System. Only those incidents involving general aviation fixed wing aircraft flying under IFR in instrument meteorological conditions were analyzed. The data were cataloged into one of five major problem areas: (1) controller judgement and response problems; (2) pilot judgement and response problems; (3) air traffic control intrafacility and interfacility conflicts; (4) ATC and pilot communications problems; and (5) IFR-VFR conflicts. The significance of the related problems, and the various underlying elements associated with each are discussed. Previous ASRS reports covering several areas of analysis are reviewed.
Performance Analysis and the Generalization Problem.
ERIC Educational Resources Information Center
Wood, Scott; And Others
This document is intended to help school psychologists promote more effective generalization of school intervention programs for education students. Concepts (including transfer of training and response generalization) are reviewed. An approach is described which analyzes problem behavior according to its major controlling variables (skills or…
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Problem area descriptions : motor vehicle crashes - data analysis and IVI program analysis
DOT National Transportation Integrated Search
In general, the IVI program focuses on the more significant safety problem categories as : indicated by statistical analyses of crash data. However, other factors were considered in setting : program priorities and schedules. For some problem areas, ...
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
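To make the regularization viewpoint concrete, a Tikhonov-type formulation is one minimal sketch of how a priori assumptions can be attached to the data misfit; the operators and weight below are illustrative placeholders, not the specific functionals used by any of the assimilation schemes listed above.

```latex
% Minimal sketch of regularized data fitting for an ill-posed interpolation problem.
% H maps a candidate tidal field u to the observed quantities d; R encodes the
% a priori assumptions about the tides (e.g., smoothness or closeness to a
% dynamical model); alpha sets the relative weight of prior versus data misfit.
\min_{u} \; J(u) \;=\; \lVert H u - d \rVert^{2} \;+\; \alpha \,\lVert R u \rVert^{2}
```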
The Classroom Strategy Study: Summary Report of General Findings. Research Series No. 187.
ERIC Educational Resources Information Center
Brophy, Jere; Rohrkemper, Mary
Described are the background, rationale, research design, data collection, analysis, and findings of the Classroom Strategy Study, an investigation of 98 elementary school teachers' general strategies for coping with problem students and their ways of dealing with typical problem behaviors associated with each of 12 types of problem students.…
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Using Data Analysis Problems in a Large General Microbiology Course.
ERIC Educational Resources Information Center
Deutch, Charles E.
1997-01-01
Argues that data analysis problems can be used successfully in large introductory microbiology courses, even when exams consist entirely of multiple-choice questions and out-of-class contact with the instructor is limited. Discusses course organization, problem structure, student performance and response, advantages of using data analysis…
NASA Technical Reports Server (NTRS)
Bergeron, H. P.
1983-01-01
An analysis of incident data obtained from the NASA Aviation Safety Reporting System (ASRS) has been made to determine the problem areas in general aviation single-pilot IFR (SPIFR) operations. The Aviation Safety Reporting System data base is a compilation of voluntary reports of incidents from any person who has observed or been involved in an occurrence which was believed to have posed a threat to flight safety. This paper examines only those reported incidents specifically related to general aviation single-pilot IFR operations. The frequency of occurrence of factors related to the incidents was the criterion used to define significant problem areas and, hence, to suggest where research is needed. The data was cataloged into one of five major problem areas: (1) controller judgment and response problems, (2) pilot judgment and response problems, (3) air traffic control (ATC) intrafacility and interfacility conflicts, (4) ATC and pilot communication problems, and (5) IFR-VFR conflicts. In addition, several points common to all or most of the problems were observed and reported. These included human error, communications, procedures and rules, and work load.
ERIC Educational Resources Information Center
Neman, Robert Lynn
This study was designed to assess the effects of the problem-oriented method compared to those of the traditional approach in general chemistry at the college level. The problem-oriented course included topics such as air and water pollution, drug addiction and analysis, tetraethyl-lead additives, insecticides in the environment, and recycling of…
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
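The "dual-weighted" estimates mentioned above follow the standard adjoint-weighted-residual pattern; the sketch below is the generic form of that idea, not the paper's specific two-stage estimate.

```latex
% Generic dual-weighted residual estimate (a sketch under standard assumptions,
% not the paper's exact two-stage analysis). U is the computed solution, R(U)
% its weak residual, Q the quantity of interest, and \phi the solution of the
% corresponding adjoint (dual) problem; the error in Q is approximated by
% weighting the residual with the adjoint solution, summed over time intervals I_n.
Q(u) - Q(U) \;\approx\; R(U)(\phi) \;=\; \sum_{n} R(U)\big|_{I_n}(\phi)
```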
Energy Efficient Data Transmission for Sensors with Wireless Charging
Luo, Junzhou; Wu, Weiwei; Gao, Hong
2018-01-01
This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case that wireless charging can proceed for only a limited period of time, and the other is the case that wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems, and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 - 1/(2m))(OPT - E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m - 1))OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms. PMID:29419770
Energy Efficient Data Transmission for Sensors with Wireless Charging.
Fang, Xiaolin; Luo, Junzhou; Wu, Weiwei; Gao, Hong
2018-02-08
This paper studies the problem of maximizing the energy utilization for data transmission in sensors with a periodical wireless charging process while taking into account the thermal effect. Two classes of problems are analyzed: one is the case that wireless charging can proceed for only a limited period of time, and the other is the case that wireless charging can proceed for a long enough time. Algorithms are proposed to solve the problems, and analyses of these algorithms are also provided. For the first problem, three subproblems are studied, and, for the general problem, we give an algorithm that can derive a performance bound of (1 - 1/(2m))(OPT - E) compared to an optimal solution. In addition, for the second problem, we provide an algorithm with a (2m/(2m - 1))OPT + 1 performance bound for the general problem. Simulations confirm the analysis of the algorithms.
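For a quick sense of how the two reconstructed guarantees scale with the parameter m (whose precise meaning is defined in the paper), a few illustrative values can be tabulated; the helper names below are purely hypothetical.

```python
# Evaluate the two performance-guarantee factors quoted above for sample values
# of m. This only illustrates how the factors behave; it is not the paper's algorithm.
def first_problem_factor(m: int) -> float:
    """Factor in the bound (1 - 1/(2m)) * (OPT - E) for the first problem."""
    return 1.0 - 1.0 / (2 * m)

def second_problem_factor(m: int) -> float:
    """Factor in the bound (2m / (2m - 1)) * OPT + 1 for the second problem."""
    return 2 * m / (2 * m - 1)

for m in (1, 2, 5, 10):
    print(m, round(first_problem_factor(m), 3), round(second_problem_factor(m), 3))
```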
Doctoral training in behavior analysis: Training generalized problem-solving skills
Chase, Philip N.; Wylie, Ruth G.
1985-01-01
This essay provides guidelines for designing a doctoral program in behavior analysis. First, we propose a general accomplishment for all behavior analytic doctoral students: that they be able to solve problems concerning individual behavior within a range of environments. Second, in order to achieve this goal, we propose that students be trained in conceptual and experimental analysis of behavior, the application of behavioral principles and the administration of behavioral programs. This training should include class work, but it should emphasize the immersion of students in a variety of environments in which they are required to use behavior analytic strategies. Third, we provide an example of a hypothetical graduate program that involves the proposed training. Finally, an evaluation plan is suggested for determining whether a training program is in fact producing students who are generalized problem-solvers. At each step, we justify our point of view from a perspective that combines principles from behavior analysis and educational systems design. PMID:22478633
Study to determine the IFR operational profile and problems of the general aviation single pilot
NASA Technical Reports Server (NTRS)
Weislogel, G. S.
1983-01-01
General aviation single pilot operating under instrument flight rules (GA SPIFR) was studied. The objectives of the study were to (1) develop a GA SPIFR operational profile, (2) identify problems experienced by the GA SPIFR pilot, and (3) identify research tasks which have the potential for eliminating or reducing the severity of the problems. To obtain the information necessary to accomplish these objectives, a mail questionnaire survey of instrument rated pilots was conducted. The general aviation IFR single pilot operational profile and selected data analysis examples are presented.
Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization
NASA Technical Reports Server (NTRS)
Witzberger, Kevin E.; Zeiler, Tom
2012-01-01
This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
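Direct transcription via collocation, as used above, converts the continuous trajectory problem into a finite-dimensional NLP; the sketch below uses a generic trapezoidal defect form and only illustrates the idea, not the specific discretization implemented in MASTIF or its SNOPT interface.

```latex
% Generic direct-collocation (trapezoidal) transcription of a trajectory
% optimization problem into an NLP. States x_k and controls u_k at nodes t_k
% become decision variables; the dynamics \dot{x} = f(x,u) are enforced through
% defect constraints; path constraints g <= 0 are imposed at the nodes.
\begin{aligned}
\min_{x_0,\dots,x_N,\;u_0,\dots,u_N} \quad & J(x_N, t_N) \\
\text{s.t.}\quad
& x_{k+1} - x_k - \tfrac{h_k}{2}\left[ f(x_k,u_k) + f(x_{k+1},u_{k+1}) \right] = 0,
   \quad k = 0,\dots,N-1, \\
& g(x_k, u_k) \le 0, \quad k = 0,\dots,N .
\end{aligned}
```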
NASA Technical Reports Server (NTRS)
Gupta, Kajal K.
1991-01-01
The details of an integrated general-purpose finite element structural analysis computer program, which is also capable of solving complex multidisciplinary problems, are presented. Thus, the SOLIDS module of the program possesses an extensive finite element library suitable for modeling most practical problems and is capable of solving statics, vibration, buckling, and dynamic response problems of complex structures, including spinning ones. The aerodynamic module, AERO, enables computation of unsteady aerodynamic forces for both subsonic and supersonic flow for subsequent flutter and divergence analysis of the structure. The associated aeroservoelastic analysis module, ASE, effects aero-structural-control stability analysis yielding frequency responses as well as damping characteristics of the structure. The program is written in standard FORTRAN to run on a wide variety of computers. Extensive graphics, preprocessing, and postprocessing routines are also available pertaining to a number of terminals.
NASA Technical Reports Server (NTRS)
Oden, J. T.; Becker, E. B.; Lin, T. L.; Hsieh, K. T.
1984-01-01
The formulation and numerical analysis of several problems related to the behavior of pneumatic tires are considered. These problems include the general rolling contact problem of a rubber-like viscoelastic cylinder undergoing finite deformations and the finite deformation of cord-reinforced rubber composites. New finite element models are developed for these problems. Numerical results obtained for several representative cases are presented.
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
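As a toy analogue of the static kinetics-plus-sensitivity problems described above (not an example of running LSENS itself, which is a Fortran code driven by a problem data file), the sketch below integrates a single irreversible reaction A -> B with SciPy and estimates the sensitivity of [A] to its initial value by finite differences; all names and values are illustrative.

```python
# Toy static kinetics problem with a sensitivity coefficient (not LSENS usage):
# irreversible A -> B with rate constant k, and the sensitivity of [A](t_end)
# to the initial concentration [A]_0 estimated by central finite differences.
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    a = y[0]
    rate = k * a
    return [-rate, rate]

def a_at_end(a0, k=2.0, t_end=1.0):
    sol = solve_ivp(rhs, (0.0, t_end), [a0, 0.0], args=(k,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

a0, eps = 1.0, 1e-6
sensitivity = (a_at_end(a0 + eps) - a_at_end(a0 - eps)) / (2 * eps)
print("A(t_end) =", a_at_end(a0))            # analytically a0 * exp(-k * t_end)
print("dA(t_end)/dA0 ~", sensitivity)        # analytically exp(-k * t_end)
```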
A weak Galerkin generalized multiscale finite element method
Mu, Lin; Wang, Junping; Ye, Xiu
2016-03-31
In this study, we propose a general framework for the weak Galerkin generalized multiscale (WG-GMS) finite element method for elliptic problems with rapidly oscillating or high-contrast coefficients. This general WG-GMS method features high-order accuracy on general meshes and can work with multiscale bases derived by different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problems with the weak Galerkin finite element method. Convergence analysis and numerical experiments are presented for the special case.
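The target equation for methods of this kind is the standard second-order elliptic model problem with a rough coefficient; the sketch below states that model problem only and does not reproduce the weak Galerkin spaces or local problems defined in the paper.

```latex
% Model elliptic problem with a rapidly oscillating or high-contrast
% coefficient \kappa, the setting in which multiscale basis functions pay off:
-\nabla \cdot \big( \kappa(x)\, \nabla u \big) = f \quad \text{in } \Omega,
\qquad u = 0 \quad \text{on } \partial\Omega .
```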
A weak Galerkin generalized multiscale finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
In this study, we propose a general framework for the weak Galerkin generalized multiscale (WG-GMS) finite element method for elliptic problems with rapidly oscillating or high-contrast coefficients. This general WG-GMS method features high-order accuracy on general meshes and can work with multiscale bases derived by different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problems with the weak Galerkin finite element method. Convergence analysis and numerical experiments are presented for the special case.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is often common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting since an online algorithm must make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems are online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
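As a small, self-contained illustration of the primal-dual / multiplicative-update style of online covering analysis discussed above (a textbook-style sketch, not the thesis's own "Disk Painting" or degree-bounded algorithms), the following handles online fractional set cover with unit costs:

```python
# Textbook-style online fractional set cover via multiplicative updates.
# Elements arrive one at a time; each arriving element names the sets that can
# cover it, and its covering constraint must be satisfied before the next arrival.
from collections import defaultdict

def online_fractional_cover(arrivals):
    """arrivals: iterable of lists, each listing the sets able to cover that element."""
    x = defaultdict(float)                      # fraction bought of each set so far
    for covering_sets in arrivals:
        d = len(covering_sets)
        # Increase the relevant variables multiplicatively (plus an additive kick)
        # until the new covering constraint sum_{s} x[s] >= 1 holds.
        while sum(x[s] for s in covering_sets) < 1.0:
            for s in covering_sets:
                x[s] = 2.0 * x[s] + 1.0 / d
    return dict(x)

# Example: three elements arrive online, each listing its candidate covering sets.
print(online_fractional_cover([["A", "B"], ["B", "C"], ["A", "C"]]))
```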
Uncertainty analysis of diffuse-gray radiation enclosure problems: A hypersensitive case study
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio; Hodge, B. K.; Steele, W. Glenn
1993-01-01
An uncertainty analysis of diffuse-gray enclosure problems is presented. The genesis was a diffuse-gray enclosure problem which proved to be hypersensitive to the specification of view factors. This genesis is discussed in some detail. The uncertainty analysis is presented for the general diffuse-gray enclosure problem and applied to the hypersensitive case study. It was found that the hypersensitivity could be greatly reduced by enforcing both closure and reciprocity for the view factors. The effects of uncertainties in the surface emissivities and temperatures are also investigated.
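One simple way to picture the "enforce closure and reciprocity" remedy mentioned above is to project an estimated view-factor matrix toward those two constraints; the alternating heuristic below only illustrates the constraints themselves and is not the procedure used in the paper, and the matrix and areas are invented.

```python
# Heuristic illustration of the two view-factor constraints named above:
# reciprocity A_i * F_ij = A_j * F_ji and closure sum_j F_ij = 1 for each surface i.
# Alternately symmetrize A_i*F_ij (reciprocity) and renormalize rows (closure).
import numpy as np

def nudge_view_factors(F, areas, n_iter=50):
    A = np.asarray(areas, dtype=float)
    F = np.array(F, dtype=float)
    for _ in range(n_iter):
        G = A[:, None] * F                      # G_ij = A_i F_ij
        G = 0.5 * (G + G.T)                     # impose reciprocity (symmetry of G)
        F = G / A[:, None]
        F /= F.sum(axis=1, keepdims=True)       # impose closure (rows sum to one)
    return F

F_est = np.array([[0.00, 0.55, 0.43],
                  [0.27, 0.00, 0.71],
                  [0.22, 0.72, 0.04]])          # hypothetical estimated view factors
areas = [1.0, 2.0, 2.0]                         # hypothetical surface areas
print(nudge_view_factors(F_est, areas))
```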
Study to determine the IFR operational profile and problems of the general aviation single pilot
NASA Technical Reports Server (NTRS)
Weislogel, S.
1983-01-01
A study of the general aviation single pilot operating under instrument flight rules (GA SPIFR) has been conducted for NASA Langley Research Center. The objectives of the study were to (1) develop a GA SPIFR operational profile, (2) identify problems experienced by the GA SPIFR pilot, and (3) identify research tasks which have the potential for eliminating or reducing the severity of the problems. To obtain the information necessary to accomplish these objectives, a mail questionnaire survey of instrument rated pilots was conducted. Complete questionnaire data are reported in NASA CR-165805, "Statistical Summary: Study to Determine the IFR Operational Profile and Problems of the General Aviation Single Pilot." Based upon the results of the GA SPIFR survey, this final report presents the general aviation IFR single pilot operational profile, illustrates selected data analysis examples, identifies the problems which the pilot is experiencing, and recommends further research.
Cognitive Task Analysis: Implications for the Theory and Practice of Instructional Design.
ERIC Educational Resources Information Center
Dehoney, Joanne
Cognitive task analysis grew out of efforts by cognitive psychologists to understand problem-solving in a lab setting. It has proved a useful tool for describing expert performance in complex problem solving domains. This review considers two general models of cognitive task analysis and examines the procedures and results of analyses in three…
An Evaluation of Grades 9 and 10 Mathematics Textbooks vis-a-vis Fostering Problem Solving Skills
ERIC Educational Resources Information Center
Buishaw, Alemayehu; Ayalew, Assaye
2013-01-01
This study sought to evaluate the adequacy of integration of problematic situations and general problem-solving strategies (heuristics) in grades 9 and 10 mathematics textbooks. Grade 9 and grade 10 mathematics textbooks were used for analysis. Document analysis and interview were used as data gathering instruments. Document analysis was carried…
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
NASA Astrophysics Data System (ADS)
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
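A standard first-order result that eigenvalue sensitivity analyses of this kind build on is the derivative of an eigenvalue of the generalized, possibly non-Hermitian, problem with respect to a design parameter; the paper's specific normalizations and higher-order approximations go beyond this sketch.

```latex
% For A(p)\,u = \lambda\, B(p)\, u with right eigenvector u and left eigenvector v
% associated with the eigenvalue \lambda of interest, the first-order sensitivity
% with respect to a design parameter p is
\frac{\partial \lambda}{\partial p}
  \;=\; \frac{ v^{H}\!\left( \dfrac{\partial A}{\partial p}
        \;-\; \lambda\, \dfrac{\partial B}{\partial p} \right) u }{\, v^{H} B\, u \,} .
```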
Functional Techniques for Data Analysis
NASA Technical Reports Server (NTRS)
Tomlinson, John R.
1997-01-01
This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously. They are the Matrix Pencil and the Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide ranging class of linear functional to be used in the solution of the problem. This class provides a continuum of functionals which provide new methods that can be used to solve Prony's problem.
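For readers unfamiliar with the baseline being generalized, the classical Prony procedure (linear prediction followed by root finding and a Vandermonde fit) can be sketched in a few lines; this is the textbook method, not the dissertation's functional-based generalization, and the sample data are invented.

```python
# Classical Prony fitting: recover mu_k and c_k in y_m = sum_k c_k * mu_k**m
# from uniform samples, via linear prediction, polynomial roots, and a
# Vandermonde least-squares fit for the amplitudes.
import numpy as np

def prony(samples, n):
    y = np.asarray(samples, dtype=complex)
    # Linear-prediction system: y[m+n] + a_1*y[m+n-1] + ... + a_n*y[m] = 0
    M = np.array([y[m + n - 1::-1][:n] for m in range(n)])
    a = np.linalg.solve(M, -y[n:2 * n])
    mu = np.roots(np.concatenate(([1.0], a)))            # exponential bases mu_k
    V = np.vander(mu, N=len(y), increasing=True).T       # V[j, k] = mu_k**j
    c, *_ = np.linalg.lstsq(V, y, rcond=None)            # amplitudes c_k
    return mu, c

m = np.arange(6)
data = 2.0 * 0.9**m + 1.0 * 0.5**m                       # invented test signal
print(prony(data, 2))                                    # expect mu ~ {0.9, 0.5}, c ~ {2, 1}
```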
Summary of 1971 pattern recognition program development
NASA Technical Reports Server (NTRS)
Whitley, S. L.
1972-01-01
Eight areas related to pattern recognition analysis at the Earth Resources Laboratory are discussed: (1) background; (2) Earth Resources Laboratory goals; (3) software problems/limitations; (4) operational problems/limitations; (5) immediate future capabilities; (6) Earth Resources Laboratory data analysis system; (7) general program needs and recommendations; and (8) schedule and milestones.
ERIC Educational Resources Information Center
Kroeker, Leonard P.
The problem of blocking on a status variable was investigated. The one-way fixed-effects analysis of variance, analysis of covariance, and generalized randomized block designs each treat the blocking problem in a different way. In order to compare these designs, it is necessary to restrict attention to experimental situations in which observations…
Is What Is Good for General Motors Good for Architecture?
ERIC Educational Resources Information Center
Myrick, Richard; And Others
1966-01-01
Problems of behavioral evaluation and determination of initial building stimuli are discussed in terms of architectural analysis. Application of management research techniques requires problem and goal definition. Analysis of both lower and higher order needs is contingent upon these definitions. Lower order needs relate to more abstract…
Psychophysiology of Aggression, Psychopathy, and Conduct Problems: A Meta-Analysis
ERIC Educational Resources Information Center
Lorber, Michael F.
2004-01-01
A meta-analysis of 95 studies was conducted to investigate the relations of heart rate (HR) and electrodermal activity (EDA) with aggression, psychopathy, and conduct problems. Analyses revealed a complex constellation of interactive effects, with a failure in some cases of autonomic patterns to generalize across antisocial spectrum behavior…
On the foundations of general relativistic celestial mechanics
NASA Astrophysics Data System (ADS)
Battista, Emmanuele; Esposito, Giampiero; Dell'Agnello, Simone
2017-09-01
Towards the end of nineteenth century, Celestial Mechanics provided the most powerful tools to test Newtonian gravity in the solar system and also led to the discovery of chaos in modern science. Nowadays, in light of general relativity, Celestial Mechanics leads to a new perspective on the motion of satellites and planets. The reader is here introduced to the modern formulation of the problem of motion, following what the leaders in the field have been teaching since the nineties, in particular, the use of a global chart for the overall dynamics of N bodies and N local charts describing the internal dynamics of each body. The next logical step studies in detail how to split the N-body problem into two sub-problems concerning the internal and external dynamics, how to achieve the effacement properties that would allow a decoupling of the two sub-problems, how to define external-potential-effacing coordinates and how to generalize the Newtonian multipole and tidal moments. The review paper ends with an assessment of the nonlocal equations of motion obtained within such a framework, a description of the modifications induced by general relativity on the theoretical analysis of the Newtonian three-body problem, and a mention of the potentialities of the analysis of solar-system metric data carried out with the Planetary Ephemeris Program.
Leavey, Gerard; Rosato, Michael; Galway, Karen; Hughes, Lynette; Mallon, Sharon; Rondon, Janeet
2016-04-30
Contact with primary care and psychiatric services prior to suicide may be considerable, presenting opportunities for intervention. However, there is scant knowledge on the frequency, nature and determinants of contact. Retrospective cohort study: an analysis of deaths recorded as suicide by the Northern Ireland Coroner's Office, linked with data from General Practice patient records over a 2-year period. Eighty-seven per cent of suicides were in contact with General Practice services in the 12 months before suicide. The frequency of contact with services was considerable, particularly among patients with a common mental disorder or substance misuse problems. A diagnosis of psychiatric problems was absent in 40% of suicides. Excluding suicide attempts, the main predictors of a noted general practitioner concern for patient suicidality are male gender, frequency of consultations, diagnosis of mental illness and substance misuse. Despite widespread and frequent contact, a substantial proportion of suicidal people were undiagnosed and untreated for mental health problems. General Practitioner alertness to suicidality may be too narrowly focused.
On the characteristic exponents of the general three-body problem
NASA Technical Reports Server (NTRS)
Broucke, R.
1976-01-01
A description is given of some properties of the characteristic exponents of the general three-body problem. The variational equations on which the analysis is based are obtained by linearizing the Lagrangian equations of motion in the neighborhood of a given known solution. Attention is given to the fundamental matrix of solutions, the characteristic equation, the three trivial solutions of the variational equations of the three-body problem, symmetric periodic orbits, and the half-period properties of symmetric periodic orbits.
Operant Variability: Some Random Thoughts
ERIC Educational Resources Information Center
Marr, M. Jackson
2012-01-01
Barba's (2012) paper is a serious and thoughtful analysis of a vexing problem in behavior analysis: Just what should count as an operant class and how do people know? The slippery issue of a "generalized operant" or functional response class illustrates one aspect of this problem, and "variation" or "novelty" as an operant appears to fall into…
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Braunack-Mayer, A. J.
2001-01-01
Whilst there has been considerable debate about the fit between moral theory and moral reasoning in everyday life, the way in which moral problems are defined has rarely been questioned. This paper presents a qualitative analysis of interviews conducted with 15 general practitioners (GPs) in South Australia to argue that the way in which the bioethics literature defines an ethical dilemma captures only some of the range of lay views about the nature of ethical problems. The bioethics literature has defined ethical dilemmas in terms of conflict and choice between values, beliefs and options for action. While some of the views of some of the GPs in this study about the nature of their ethical dilemmas certainly accorded with this definition, other explanations of the ethical nature of their problems revolved around the publicity associated with the issues they were discussing, concern about their relationships with patients, and anxiety about threats to their integrity and reputation. The variety of views about what makes a problem a moral problem indicates that the moral domain is perhaps wider and richer than mainstream bioethics would generally allow. Key Words: Empirical ethics • general practice • qualitative research PMID:11314166
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
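The general shape of the problem class the paper addresses can be written compactly; the sketch below is generic (maximizing a trace subject to a generalized orthogonality constraint, plus a possibly nonsmooth penalty) and is not the paper's exact formulation or its stochastic solver.

```latex
% Regularized generalized eigenvalue problem over the generalized Stiefel
% manifold: V collects k eigenvector estimates, A and B are the problem
% matrices, R is a (possibly nonsmooth) regularizer encoding side information,
% and \lambda trades off eigen-structure against the side information.
\min_{V \in \mathbb{R}^{n \times k}}
   \; -\operatorname{tr}\!\left( V^{\top} A\, V \right) \;+\; \lambda\, R(V)
\qquad \text{s.t.} \qquad V^{\top} B\, V = I_{k} .
```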
Hybrid Optimization in Urban Traffic Networks
DOT National Transportation Integrated Search
1979-04-01
The hybrid optimization problem is formulated to provide a general theoretical framework for the analysis of a class of traffic control problems which takes into account the role of individual drivers as independent decisionmakers. Different behavior...
Conjecturing and Generalization Process on The Structural Development
NASA Astrophysics Data System (ADS)
Ni'mah, Khomsatun; Purwanto; Bambang Irawan, Edy; Hidayanto, Erry
2017-06-01
This study aims to describe the conjecturing and generalization processes in the structural development of thirty grade 8 middle school children solving pattern problems. The data, obtained through direct observation, documentation, and interviews, were processed using qualitative data analysis techniques. The study builds on Mulligan et al. (2012), which identified five stages of structural development, namely prestructural, emergent, partial, structural, and advanced. The analysis of the data found that the conjecturing and generalization processes are related. During the conjecturing process, the children appropriately formed hypotheses about the pattern problems in two phases, numerically and symbolically. During the generalization process, the children were able to relate the pattern rule from the conjecturing process to other contexts.
ERIC Educational Resources Information Center
Diamond, James J.; McCormick, Janet
1986-01-01
Using item responses from an in-training examination in diagnostic radiology, the application of a strength of association statistic to the general problem of item analysis is illustrated. Criteria for item selection, general issues of reliability, and error of measurement are discussed. (Author/LMO)
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Integrated reflector antenna design and analysis
NASA Technical Reports Server (NTRS)
Zimmerman, M. L.; Lee, S. W.; Ni, S.; Christensen, M.; Wang, Y. M.
1993-01-01
Reflector antenna design is a mature field and most of its aspects have been studied. However, most of that previous work is distinguished by the fact that it is narrow in scope, analyzing only a particular problem under certain conditions. Methods of analysis of this type are not useful for working on real-life problems since they cannot handle the many and various types of perturbations of basic antenna design. The idea of an integrated design and analysis is proposed. By broadening the scope of the analysis, it becomes possible to deal with the intricacies attendant to modern reflector antenna design problems. The concept of integrated reflector antenna design is put forward. A number of electromagnetic problems related to reflector antenna design are investigated. Some of these show how tools for reflector antenna design are created. In particular, a method for estimating spillover loss for open-ended waveguide feeds is examined. The problem of calculating and optimizing beam efficiency (an important figure of merit in radiometry applications) is also solved. Other chapters deal with applications of this general analysis. The wide-angle scan abilities of reflector antennas are examined and a design is proposed for the ATDRSS triband reflector antenna. The development of a general phased-array pattern computation program is discussed and how the concept of integrated design can be extended to other types of antennas is shown. The conclusions are contained in the final chapter.
Simulation and Analysis of Converging Shock Wave Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.
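The property that makes the Guderley problem attractive for code verification is its self-similar solution; the statement below is the standard form of that solution and is not specific to the xRAGE study.

```latex
% Standard self-similar form of the Guderley converging shock: before focusing
% at time t_c the shock radius follows a power law, with the similarity exponent
% \alpha (0 < \alpha < 1) obtained numerically as an eigenvalue of the
% self-similar ODEs and depending on the adiabatic index \gamma and on the
% geometry (cylindrical or spherical).
R_{s}(t) \;=\; A \,\left( t_{c} - t \right)^{\alpha}, \qquad t < t_{c} .
```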
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamburrini, G.; Termini, S.
1982-01-01
The general thesis underlying the present paper is that there are very strong methodological relations among cybernetics, system science, artificial intelligence, fuzzy sets and many other related fields. Then, in order to understand better both the achievements and the weak points of all the previous disciplines, one should look for some common features for looking at them in this general frame. What will be done is to present a brief analysis of the primitive program of cybernetics, presenting it as a case study useful for developing the previous thesis. Among the discussed points are the problems of interdisciplinarity and of the unity of cybernetics. Some implications of this analysis for a new reading of general system theory and fuzzy sets are briefly outlined at the end of the paper. 3 references.
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
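One concrete way to picture "parametrized families with a phase transition," as discussed above, is random 3-SAT, a classic NP-complete family whose empirically observed easy-hard-easy transition sits near a clause-to-variable ratio of about 4.27; the generator below only sketches that idea and does not reproduce the report's mappings into planning problems or QUBOs.

```python
# Generate a random 3-SAT instance near the empirically observed phase
# transition (clause-to-variable ratio ~ 4.27), where the hardest instances
# tend to concentrate. Clauses are lists of signed variable indices.
import random

def random_3sat(num_vars, ratio=4.27, seed=0):
    rng = random.Random(seed)
    num_clauses = round(ratio * num_vars)
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), 3)   # three distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

print(random_3sat(20)[:5])    # first few clauses of a near-threshold instance
```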
Confirming the appearance of excess success: Reply to van Boxtel and Koch (2016).
Francis, Gregory
2016-12-01
van Boxtel and Koch (Psychonomic Bulletin & Review. doi: 10.3758/s13423-016-1010-0 , 2016) reported finding problems in the Test for Excess Success (TES) analysis in Francis (Psychonomic Bulletin & Review, 21, 1180-1187, 2014). They argued that their findings undermined the general analysis and the conclusions of the specific TES analysis for their article (van Boxtel & Koch in Psychological Science, 23(4), 410-418, 2012). As shown in this paper, their reported problems reflect misunderstandings about both the general properties of a TES analysis and how it was applied to their specific set of findings. Another look at the findings and theoretical claims in van Boxtel and Koch (Psychological Science, 23(4), 410-418, 2012) confirms the appearance of excess success.
Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, E.F.; Buhl, W.F.
1988-07-15
The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is use of the directed graph as the primary data structure, rather than the matrix. This allows straightforward use of graph algorithms for matching variables and equations, and reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
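The "matching variables and equations" step mentioned above is essentially a bipartite matching on the equation-variable incidence graph; the sketch below illustrates that step with networkx on an invented three-equation system and is not SPANK's implementation.

```python
# Match each equation to the variable it will be solved for, using maximum
# bipartite matching on the equation-variable incidence graph (illustrative
# example with invented equation and variable names; not SPANK itself).
import networkx as nx
from networkx.algorithms import bipartite

equations = {
    "mass_balance":   ["m_dot", "rho"],
    "energy_balance": ["T_out", "m_dot"],
    "state_relation": ["rho", "T_out"],
}

G = nx.Graph()
G.add_nodes_from(equations, bipartite=0)
for eq, variables in equations.items():
    G.add_edges_from((eq, v) for v in variables)

matching = bipartite.maximum_matching(G, top_nodes=list(equations))
assignment = {eq: var for eq, var in matching.items() if eq in equations}
print(assignment)        # each equation paired with the unknown it solves for
```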
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.
A study analysis of cable-body systems totally immersed in a fluid stream
NASA Technical Reports Server (NTRS)
Delaurier, J. D.
1972-01-01
A general stability analysis of a cable-body system immersed in a fluid stream is presented. The analytical portion of this analysis treats the system as being essentially a cable problem, with the body dynamics giving the end conditions. The mathematical form of the analysis consists of partial differential wave equations, with the end and auxiliary conditions being determined from the body equations of motion. The equations uncouple to give a lateral problem and a longitudinal problem as in first order airplane dynamics. A series of tests on a tethered wind tunnel model provide a comparison of the theory with experiment.
Structural synthesis: Precursor and catalyst
NASA Technical Reports Server (NTRS)
Schmit, L. A.
1984-01-01
More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks could be properly posed as an inequality constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods, general guidelines are suggested.
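The inequality-constrained minimization form referred to in item (1) has the standard shape below; the objective and constraint functions are placeholders for whatever weight, stress, displacement, or side constraints a given structural problem supplies.

```latex
% Standard statement of structural synthesis as nonlinear programming:
% x are the design variables, f a cost such as structural weight, and the g_j
% collect behavior constraints (stress, displacement, frequency) and side limits.
\min_{x \in \mathbb{R}^{n}} \; f(x)
\qquad \text{s.t.} \qquad g_{j}(x) \le 0, \quad j = 1, \dots, m .
```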
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method in solving gas well completion design problems. The use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Rankin, Charles C.
2006-01-01
This document summarizes the STructural Analysis of General Shells (STAGS) development effort, STAGS performance for selected demonstration problems, and STAGS application problems illustrating selected advanced features available in the STAGS Version 5.0. Each problem is discussed including selected background information and reference solutions when available. The modeling and solution approach for each problem is described and illustrated. Numerical results are presented and compared with reference solutions, test data, and/or results obtained from mesh refinement studies. These solutions provide an indication of the overall capabilities of the STAGS nonlinear finite element analysis tool and provide users with representative cases, including input files, to explore these capabilities that may then be tailored to other applications.
NASA Astrophysics Data System (ADS)
Voloshinov, V. V.
2018-03-01
In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
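The abstract does not state the condition explicitly; as a rough illustration of what a δ-relaxed Karush-Kuhn-Tucker condition typically looks like for a smooth problem min f(x) subject to g_i(x) ≤ 0, one common form (an assumption here, not necessarily the paper's exact statement) is:

```latex
% A delta-relaxed KKT condition (illustrative form; the paper's exact
% statement may differ): there exist multipliers \lambda_i \ge 0 such that
\[ \Bigl\| \nabla f(x_\delta) + \textstyle\sum_i \lambda_i \nabla g_i(x_\delta) \Bigr\| \le \delta ,
   \qquad g_i(x_\delta) \le \delta ,
   \qquad \bigl|\lambda_i\, g_i(x_\delta)\bigr| \le \delta \quad \forall i . \]
```

As δ → 0 these relaxed stationarity, feasibility, and complementary-slackness conditions collapse to the classical KKT theorem; consistent with the abstract, the multipliers themselves can be obtained by solving a small approximating quadratic programming problem.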
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of their complexity they have never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Analysis of crack propagation as an energy absorption mechanism in metal matrix composites
NASA Technical Reports Server (NTRS)
Adams, D. F.; Murphy, D. P.
1981-01-01
The crack initiation and crack propagation capability was extended to the previously developed generalized plane strain, finite element micromechanics analysis. Also, an axisymmetric analysis was developed, which contains all of the general features of the plane analysis, including elastoplastic material behavior, temperature-dependent material properties, and crack propagation. These analyses were used to generate various example problems demonstrating the inelastic response of, and crack initiation and propagation in, a boron/aluminum composite.
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
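As a rough illustration of an MEA-style estimator on a binary space, the sketch below thresholds posterior marginal probabilities at 1/(γ+1), the form a γ-centroid estimator takes when expected gain weights true positives by γ; treat the threshold, the parameter name, and the toy probabilities as assumptions rather than a restatement of the paper's full framework.

```python
# Hedged sketch of a gamma-centroid-style MEA estimator on a binary space:
# predict y_i = 1 when the marginal probability p_i exceeds 1/(gamma + 1).
# The probabilities below are made up for illustration.

def gamma_centroid(marginals, gamma=4.0):
    """Return a binary prediction vector from posterior marginals."""
    threshold = 1.0 / (gamma + 1.0)
    return [1 if p > threshold else 0 for p in marginals]

if __name__ == "__main__":
    p = [0.05, 0.30, 0.75, 0.15, 0.55]
    for g in (1.0, 4.0, 9.0):
        # Larger gamma favors sensitivity; smaller gamma favors PPV.
        print(g, gamma_centroid(p, gamma=g))
```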
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present six variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Sg, Prem Kumar; G, Anil Kumar; Sp, Ramgopal; V, Venkata Srinivas; Dandona, Rakhi
2016-09-21
Data on mental health among orphaned children in India are scanty. We compared generalized anxiety, conduct and peer relationship problems and their associated risk factors among children orphaned by HIV/AIDS and those orphaned for other reasons in the Indian city of Hyderabad. Four hundred orphaned children aged 12 to 16 years residing in orphanages in Hyderabad were sampled, half being AIDS orphans (COA) and the rest orphaned due to other reasons (COO). Interviews were done using standardized scales to assess generalized anxiety, conduct and peer relationship problems. Scores of >8, >4, and >5 were considered indicators of generalized anxiety, conduct problems and peer relationship problems, respectively. Variations in the intensity of these three conditions due to possible factors, including co-existing depression, were assessed using multiple classification analysis (MCA). A total of 396 (99.3 %) orphans participated, of whom 199 (50.3 %) were COA. The mean generalized anxiety, conduct and peer relationship problem scores were 11.1 (SD 5.2), 3.8 (SD 2.5) and 3.8 (SD 2.5) for COA; and 7.6 (SD 4), 2.6 (SD 2) and 2.3 (SD 1.8) for COO, respectively. Among COA, the prevalence of a generalized anxiety score of >8 was 74.4 % (95 % CI 67.8-80.0 %), of a conduct problem score of >4 was 33.2 % (95 % CI 26.9-40.1 %), and of a peer relationship problem score of >5 was 27.6 % (95 % CI 21.8-34.3 %), with these being significantly lower in COO. In MCA, a higher mean depression score had the highest effect on the intensity of generalized anxiety, conduct and peer relationship problems (Beta 0.477, 0.379 and 0.453, respectively); being a COA and being a girl had the most impact on generalized anxiety (0.100 and 0.115, respectively). A significantly higher proportion of AIDS orphans deal with generalized anxiety, conduct and peer relationship problems as compared with other orphans, highlighting the need to address the poor mental health of orphans in India.
Cognitive Models for Integrating Testing and Instruction, Phase II. Methodology Program.
ERIC Educational Resources Information Center
Quellmalz, Edys S.; Shaha, Steven
The potential of a cognitive model task analysis scheme (CMS) that specifies features of test problems shown by research to affect performance is explored. CMS describes the general skill area and the generic task or problem type. It elaborates features of the problem situation and required responses found by research to influence performance.…
Supporting Valid Decision Making: Uses and Misuses of Assessment Data within the Context of RtI
ERIC Educational Resources Information Center
Ball, Carrie R.; Christ, Theodore J.
2012-01-01
Within an RtI problem-solving context, assessment and decision making generally center around the tasks of problem identification, problem analysis, progress monitoring, and program evaluation. We use this framework to discuss the current state of the literature regarding curriculum based measurement, its technical properties, and its utility for…
Modeling the missile-launch tube problem in DYSCO
NASA Technical Reports Server (NTRS)
Berman, Alex; Gustavson, Bruce A.
1989-01-01
DYSCO is a versatile, general purpose dynamic analysis program which assembles equations and solves dynamics problems. The executive manages a library of technology modules which contain routines that compute the matrix coefficients of the second order ordinary differential equations of the components. The executive performs the coupling of the equations of the components and manages the solution of the coupled equations. Any new component representation may be added to the library if, given the state vector, a FORTRAN program can be written to compute M, C, K, and F. The problem described demonstrates the generality of this statement.
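A minimal sketch of the component idea the abstract describes: each component supplies mass, damping, and stiffness matrices and a force vector, and the executive assembles and integrates the coupled second-order equations M x'' + C x' + K x = F. The two-mass example and the simple fixed-step integrator are illustrative assumptions, not DYSCO's FORTRAN interface.

```python
# Hedged sketch: obtain M, C, K, F from a "component" routine and integrate
# M x'' + C x' + K x = F with a simple semi-implicit Euler scheme.
import numpy as np

def two_mass_component(t, x, v):
    """Illustrative component: two unit masses coupled by springs and dampers."""
    M = np.eye(2)
    K = np.array([[2.0, -1.0], [-1.0, 2.0]])
    C = 0.05 * K                      # light proportional damping
    F = np.array([np.sin(t), 0.0])    # forcing on the first mass
    return M, C, K, F

def integrate(component, x0, v0, dt=1e-3, steps=5000):
    x, v = np.array(x0, float), np.array(v0, float)
    for n in range(steps):
        M, C, K, F = component(n * dt, x, v)
        a = np.linalg.solve(M, F - C @ v - K @ x)   # acceleration from the ODE
        v = v + dt * a
        x = x + dt * v                               # semi-implicit Euler update
    return x, v

if __name__ == "__main__":
    print(integrate(two_mass_component, [0.0, 0.0], [0.0, 0.0]))
```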
Shape Optimization for Navier-Stokes Equations with Algebraic Turbulence Model: Existence Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulicek, Miroslav; Haslinger, Jaroslav; Malek, Josef
2009-10-15
We study a shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to an optimal control problem in which the control variable is the shape of the domain representing the header, and the state problem is represented by a generalized stationary Navier-Stokes system with nontrivial mixed boundary conditions. In this paper we prove the existence of solutions both to the generalized Navier-Stokes system and to the shape optimization problem.
Problem Solving Model for Science Learning
NASA Astrophysics Data System (ADS)
Alberida, H.; Lufri; Festiyed; Barlian, E.
2018-04-01
This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, and a literature review. The design phase includes product planning of a science-learning problem-solving model, which consists of syntax, reaction principle, social system, support system, instructional impact and support. The problem-solving model is implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation, and c) revising the prototype. The implementation stage was done through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 of SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang and SMP National Padang. Based on the development research, the syntax of the problem-solving model for science learning at junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM), which is discussed. The probabilistic BEM code (PBEM) is used to obtain structural response and sensitivity results with respect to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite elements, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible and therefore volume modeling is generally required.
Picture of All Solutions of Successive 2-Block Maxbet Problems
ERIC Educational Resources Information Center
Choulakian, Vartan
2011-01-01
The Maxbet method is a generalized principal components analysis of a data set, where the group structure of the variables is taken into account. Similarly, 3-block[12,13] partial Maxdiff method is a generalization of covariance analysis, where only the covariances between blocks (1, 2) and (1, 3) are taken into account. The aim of this paper is…
Job Redesign: An Analysis of an Intervention to Improve Job Characteristics
1989-09-01
Excerpt (front matter and introduction): This thesis centers on whether job redesign can improve job characteristics. Instruments referenced include the Job Diagnostic Survey, the Minnesota Satisfaction Questionnaire, and ad-hoc items pertaining to the issues of training, challenge, and the matrix…
Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.
2012-01-01
The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n=279; mean age=7.59 yrs) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems at start of 2nd grade and on calculations, word problems, and pre-algebraic knowledge at end of 3rd grade. Multilevel path analysis, controlling for instructional effects associated with the sequence of classrooms in which students were nested across grades 2–3, indicated arithmetic calculations and word problems are foundational to pre-algebraic knowledge. Also, results revealed direct contributions of nonverbal reasoning and oral language to pre-algebraic knowledge, beyond indirect effects that are mediated via arithmetic calculations and word problems. By contrast, attentive behavior, phonological processing, and processing speed contributed to pre-algebraic knowledge only indirectly via arithmetic calculations and word problems. PMID:22409764
User's Manual: Thermal Radiation Analysis System TRASYS 2
NASA Technical Reports Server (NTRS)
Jensen, C. L.
1981-01-01
A digital computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems is presented. When used in conjunction with a generalized thermal analysis program such as the systems improved numerical differencing analyzer program, any thermal problem that can be expressed in terms of a lumped parameter R-C thermal network can be solved. The function of TRASYS is twofold. It provides: (a) Internode radiation interchange data; and (b) Incident and absorbed heat rate data from environmental radiant heat sources. Data of both types is provided in a format directly usable by the thermal analyzer programs. The system allows the user to write his own executive or driver program which organizes and directs the program library routines toward solution of each specific problem in the most expeditious manner. The user also may write his own output routines, thus the system data output can directly interface with any thermal analyzer using the R-C network concept.
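A toy sketch of the lumped-parameter R-C thermal network idea referenced above: each node has a capacitance, nodes are linked by thermal resistances, and externally supplied absorbed heat rates (the kind of data TRASYS produces) enter as source terms. The three-node network and the explicit time stepping are illustrative assumptions, not the TRASYS or thermal-analyzer interface.

```python
# Hedged sketch: explicit integration of a lumped-parameter R-C thermal
# network, dT_i/dt = (1/C_i) * ( sum_j (T_j - T_i)/R_ij + Q_i ).
import numpy as np

def step_network(T, C, R, Q, dt):
    """Advance node temperatures by one explicit Euler step."""
    n = len(T)
    dT = np.zeros(n)
    for i in range(n):
        conduction = sum((T[j] - T[i]) / R[i][j]
                         for j in range(n) if R[i][j] is not None)
        dT[i] = (conduction + Q[i]) / C[i]
    return T + dt * dT

if __name__ == "__main__":
    T = np.array([300.0, 290.0, 280.0])          # node temperatures, K
    C = np.array([500.0, 800.0, 400.0])          # capacitances, J/K
    R = [[None, 2.0, None],                      # resistances between nodes, K/W
         [2.0, None, 5.0],
         [None, 5.0, None]]
    Q = np.array([50.0, 0.0, -20.0])             # absorbed/rejected heat rates, W
    for _ in range(1000):
        T = step_network(T, C, R, Q, dt=1.0)
    print(T)
```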
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
The Analysis of Seawater: A Laboratory-Centered Learning Project in General Chemistry.
ERIC Educational Resources Information Center
Selco, Jodye I.; Roberts, Julian L., Jr.; Wacks, Daniel B.
2003-01-01
Describes a sea-water analysis project that introduces qualitative and quantitative analysis methods and laboratory methods such as gravimetric analysis, potentiometric titration, ion-selective electrodes, and the use of calibration curves. Uses a problem-based cooperative teaching approach. (Contains 24 references.) (YDS)
Jordan, N C; Montani, T O
1997-01-01
This study examined problem-solving and number-fact skills in two subgroups of third-grade children with mathematics difficulties (MD): MD-specific (n = 12) and MD-general (n = 12). The MD-specific group had difficulties in mathematics but not in reading, and the MD-general group had difficulties in reading as well as in mathematics. A comparison group of nonimpaired children (n = 24) also was included. The findings showed that on both story and number-fact problems, the MD-specific group performed worse than the nonimpaired group in timed conditions but not in untimed conditions. The MD-general group, on the other hand, performed worse than the nonimpaired group, regardless of whether tasks were timed or not. An analysis of children's strategies in untimed conditions showed that both the MD-specific and the MD-general groups relied more on backup strategies than the nonimpaired group. However, children in the MD-specific group executed backup strategies more skillfully than children in the MD-general group, allowing them to achieve parity with children in the nonimpaired group when tasks were not timed. The findings suggest that children with specific MD have circumscribed deficits associated with fact retrieval, whereas children with general MD have more basic delays associated with problem conceptualization and execution of calculation procedures.
Azimian, Jalil; Piran, Pegah; Jahanihashemi, Hassan; Dehghankar, Leila
2017-04-01
Pressures in nursing can affect family life and marital relations, disrupt social life, increase work-family conflicts, and endanger people's general health. The aim of this study was to determine marital satisfaction and its relationship with the job stress and general health of nurses. This descriptive, cross-sectional study was done in 2015 in medical educational centers of Qazvin using the ENRICH marital satisfaction scale and General Health and Job Stress questionnaires completed by 123 nurses. Analysis was done with SPSS version 19 using descriptive and analytical statistics (Pearson correlation, t-test, ANOVA, Chi-square, regression line, multiple regression analysis). The findings showed that 64.4% of nurses had marital satisfaction. There was a significant relationship between age (p=0.03), job experience (p=0.01), age of spouse (p=0.01) and marital satisfaction. The results showed a significant relationship between marital satisfaction and general health (p<0.0001). Multiple regression analysis showed a significant relationship of depression (p=0.012) and anxiety (p=0.001) with marital satisfaction. Given the high levels of job stress, the impaired general health of nurses, and their low marital satisfaction, running health promotion programs and paying attention to these dimensions can help the work and family health of nurses.
The twilight of the training analysis system.
Kernberg, Otto F
2014-04-01
This paper briefly reviews challenges to psychoanalysis at this time, including those derived from both external, societal origins and internal psychoanalytic problems. It focuses attention on serious conflicts around psychoanalytic education, and refers to the training analysis system as a central problem determining fundamental constraints on present-day psychoanalytic education. These constraints are examined in some detail, and the general advantages and disadvantages of the training analysis system are outlined. The effects of all these dynamics on the administrative organization of the American Psychoanalytic Association are explored, and a proposal for a fundamental reorganization of our educational system to resolve the correspondent problems is outlined.
A Fast Fourier transform stochastic analysis of the contaminant transport problem
Deng, F.W.; Cushman, J.H.; Delleur, J.W.
1993-01-01
A three-dimensional stochastic analysis of the contaminant transport problem is developed in the spirit of Naff (1990). The new derivation is more general and simpler than previous analyses. The fast Fourier transform is used extensively to obtain numerical estimates of the mean concentration and various spatial moments. Data from both the Borden and Cape Cod experiments are used to test the methodology. Results are comparable to results obtained by other methods, and to the experiments themselves.
Artificial equilibrium points for a generalized sail in the elliptic restricted three-body problem
NASA Astrophysics Data System (ADS)
Aliasi, Generoso; Mengali, Giovanni; Quarta, Alessandro A.
2012-10-01
Different types of propulsion systems with continuous and purely radial thrust, whose modulus depends on the distance from a massive body, may be conveniently described within a single mathematical model by means of the concept of generalized sail. This paper discusses the existence and stability of artificial equilibrium points maintained by a generalized sail within an elliptic restricted three-body problem. Similar to the classical case in the absence of thrust, a generalized sail guarantees the existence of equilibrium points belonging only to the orbital plane of the two primaries. The geometrical loci of existing artificial equilibrium points are shown to coincide with those obtained for the circular three body problem when a non-uniformly rotating and pulsating coordinate system is chosen to describe the spacecraft motion. However, the generalized sail has to provide a periodically variable acceleration to maintain a given artificial equilibrium point. A linear stability analysis of the artificial equilibrium points is provided by means of the Floquet theory.
Analysis Tools for CFD Multigrid Solvers
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Diskin, Boris
2004-01-01
Analysis tools are needed to guide the development and evaluate the performance of multigrid solvers for the fluid flow equations. Classical analysis tools, such as local mode analysis, often fail to accurately predict performance. Two-grid analysis tools, herein referred to as Idealized Coarse Grid and Idealized Relaxation iterations, have been developed and evaluated within a pilot multigrid solver. These new tools are applicable to general systems of equations and/or discretizations and point to problem areas within an existing multigrid solver. Idealized Relaxation and Idealized Coarse Grid are applied in developing textbook-efficient multigrid solvers for incompressible stagnation flow problems.
On the tautology of the matching law in consumer behavior analysis.
Curry, Bruce; Foxall, Gordon R; Sigurdsson, Valdimar
2010-05-01
Matching analysis has often attracted the criticism that it is formally tautological and hence empirically unfalsifiable, a problem that particularly affects translational attempts to extend behavior analysis into new areas. An example is consumer behavior analysis where application of matching in natural settings requires the inference of ratio-based relationships between amount purchased and amount spent. This gives rise to the argument that matching is an artifact of the way in which the alleged independent and dependent variables are defined and measured. We argue that the amount matching law would be tautological only in extreme circumstances (those in which prices or quantities move strictly in proportion); this is because of the presence of an error term in the matching function which arises from aggregation, particularly aggregation over brands. Cost matching is a viable complement of amount matching which avoids this tautology but a complete explanation of consumer choice requires a viable measure of amount matching also. This necessitates a more general solution to the problem of tautology in matching. In general, the fact that there remain doubts about the functional form of the matching equation itself implies the absence of a tautology. In proposing a general solution to the problem of assumed tautology in matching, the paper notes the experiences of matching researchers in another translation field, sports behavior. Copyright (c) 2009 Elsevier B.V. All rights reserved.
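For readers unfamiliar with the ratio relationship at issue, the generalized matching law is commonly written in log-ratio form; the version below is the standard Baum formulation, shown here only as background for the amount- and cost-matching discussion, not as the specific consumer-behavior equations the paper analyzes.

```latex
% Generalized matching law (standard log-ratio form, shown for background):
\[ \log\frac{B_1}{B_2} \;=\; a \,\log\frac{R_1}{R_2} \;+\; \log b , \]
```

where B1 and B2 are the responses allocated to the two alternatives (e.g., amounts purchased), R1 and R2 are the obtained reinforcers (e.g., amounts spent), a is the sensitivity exponent, and b is a bias term. The tautology question concerns whether the measured B and R ratios are free to vary independently, which the abstract argues they are except in the extreme case where prices or quantities move strictly in proportion.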
Some Observations on Cost-Effectiveness Analysis in Education.
ERIC Educational Resources Information Center
Geske, Terry G.
1979-01-01
The general nature of cost-effectiveness analysis is discussed, analytical frameworks for conducting cost-effectiveness studies are described, and some of the problems inherent in measuring educational costs and in assessing program effectiveness are addressed. (Author/IRT)
NASA Astrophysics Data System (ADS)
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Hong, Siyu
2018-07-01
In this paper, a generalized Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy in inhomogeneities of media described by variable coefficients is investigated, which includes some important nonlinear evolution equations as special cases, for example, the celebrated Korteweg-de Vries equation modeling waves on shallow water surfaces. To be specific, the known AKNS spectral problem and its time evolution equation are first generalized by embedding a finite number of differentiable and time-dependent functions. Starting from the generalized AKNS spectral problem and its generalized time evolution equation, a generalized AKNS hierarchy with variable coefficients is then derived. Furthermore, based on a systematic analysis on the time dependence of related scattering data of the generalized AKNS spectral problem, exact solutions of the generalized AKNS hierarchy are formulated through the inverse scattering transform method. In the case of reflectionless potentials, the obtained exact solutions are reduced to n-soliton solutions. It is graphically shown that the dynamical evolutions of such soliton solutions are influenced by not only the time-dependent coefficients but also the related scattering data in the process of propagations.
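For orientation, the classical (constant-coefficient) AKNS spectral problem that is being generalized takes the 2x2 form below; the variable-coefficient generalization described in the abstract embeds time-dependent functions into this pair, so the matrices shown here should be read as the standard starting point rather than the paper's generalized system.

```latex
% Classical AKNS spectral problem and its time evolution (standard form):
\[ \phi_x =
   \begin{pmatrix} -\,\mathrm{i}\lambda & q(x,t) \\ r(x,t) & \mathrm{i}\lambda \end{pmatrix} \phi ,
   \qquad
   \phi_t =
   \begin{pmatrix} A & B \\ C & -A \end{pmatrix} \phi , \]
```

where λ is the spectral parameter, q and r are the potentials, and A, B, C are functions of q, r, and λ whose expansion in λ selects a particular member of the hierarchy (for instance, the KdV reduction mentioned in the abstract).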
Hart, Sara A.; Petrill, Stephen A.; Thompson, Lee A.; Plomin, Robert
2009-01-01
The goal of this first major report from the Western Reserve Reading Project Math component is to explore the etiology of the relationship among tester-administered measures of mathematics ability, reading ability, and general cognitive ability. Data are available on 314 pairs of monozygotic and same-sex dizygotic twins analyzed across 5 waves of assessment. Univariate analyses provide a range of estimates of genetic (h2 = .00 –.63) and shared (c2 = .15–.52) environmental influences across math calculation, fluency, and problem solving measures. Multivariate analyses indicate genetic overlap between math problem solving with general cognitive ability and reading decoding, whereas math fluency shares significant genetic overlap with reading fluency and general cognitive ability. Further, math fluency has unique genetic influences. In general, math ability has shared environmental overlap with general cognitive ability and decoding. These results indicate that aspects of math that include problem solving have different genetic and environmental influences than math calculation. Moreover, math fluency, a timed measure of calculation, is the only measured math ability with unique genetic influences. PMID:20157630
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Generalized fictitious methods for fluid-structure interactions: Analysis and simulations
NASA Astrophysics Data System (ADS)
Yu, Yue; Baek, Hyoungsu; Karniadakis, George Em
2013-07-01
We present a new fictitious pressure method for fluid-structure interaction (FSI) problems in incompressible flow by generalizing the fictitious mass and damping methods we published previously in [1]. The fictitious pressure method involves modification of the fluid solver whereas the fictitious mass and damping methods modify the structure solver. We analyze all fictitious methods for simplified problems and obtain explicit expressions for the optimal reduction factor (convergence rate index) at the FSI interface [2]. This analysis also demonstrates an apparent similarity of fictitious methods to the FSI approach based on Robin boundary conditions, which have been found to be very effective in FSI problems. We implement all methods, including the semi-implicit Robin based coupling method, in the context of spectral element discretization, which is more sensitive to temporal instabilities than low-order methods. However, the methods we present here are simple and general, and hence applicable to FSI based on any other spatial discretization. In numerical tests, we verify the selection of optimal values for the fictitious parameters for simplified problems and for vortex-induced vibrations (VIV) even at zero mass ratio ("for-ever-resonance"). We also develop an empirical a posteriori analysis for complex geometries and apply it to 3D patient-specific flexible brain arteries with aneurysms for very large deformations. We demonstrate that the fictitious pressure method enhances stability and convergence, and is comparable or better in most cases to the Robin approach or the other fictitious methods.
Chronic Diseases in the Pediatric Age Group. Matrix No. 7.
ERIC Educational Resources Information Center
Katz, Michael
This paper briefly outlines current problems associated with chronic diseases in children and youth and provides indications for the types of future research and analysis needed to facilitate the development of solutions. In general, these problems are associated with the following: malignancies, hereditary anemias, cystic fibrosis, other chronic…
ERIC Educational Resources Information Center
Ding, Meixia; Li, Xiaobao
2010-01-01
This study examines presentations of the distributive property (DP) in two widely used U.S. elementary text series and one main Chinese text series along three dimensions: problem contexts, typical problem types within each problem context, and variability in using the DP. In general, the two U.S. texts were found to resemble each other but to…
Statistical energy analysis computer program, user's guide
NASA Technical Reports Server (NTRS)
Trudell, R. W.; Yano, L. I.
1981-01-01
A high-frequency random vibration analysis method (the statistical energy analysis (SEA) method) is examined. The SEA method accomplishes high-frequency prediction for arbitrary structural configurations. A general SEA computer program is described. A summary of SEA theory, example problems of SEA program application, and a complete program listing are presented.
ERIC Educational Resources Information Center
Crossno, S. K.; And Others
1996-01-01
Presents experiments involving the analysis of commercial products such as carbonated beverages and antacids that illustrate the principles of acid-base reactions and present interesting problems in stoichiometry for students. (JRH)
NASA Astrophysics Data System (ADS)
Tisdell, C. C.
2017-08-01
Solution methods for exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to Azevedo and Valentino that has the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
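As one concrete illustration of the 'exact method' idea, the Gompertz equation mentioned above can be handled with an integrating factor and a potential function; the particular factor used below is a natural choice and is offered as a worked sketch, not necessarily the derivation given in the note itself.

```latex
% Gompertz equation treated via an integrating factor (worked sketch):
\[ \frac{dN}{dt} = r N \ln\frac{K}{N}
   \quad\Longleftrightarrow\quad
   -\,r N \ln\frac{K}{N}\,dt + dN = 0 . \]
% Multiplying by \mu(N) = 1/\bigl(N\ln(K/N)\bigr), valid for 0 < N < K, gives
\[ -\,r\,dt + \frac{dN}{N\ln(K/N)} = 0 , \]
% which is exact, with potential
\[ F(t,N) = -\,r t - \ln\!\Bigl(\ln\frac{K}{N}\Bigr) = \text{const}
   \quad\Longrightarrow\quad
   N(t) = K \exp\!\bigl(-A e^{-rt}\bigr), \qquad A = \ln\frac{K}{N(0)} . \]
```

The same closed form is usually obtained by the substitution u = ln(K/N); the point of the exact-equation route is that no substitution or linearization is needed.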
Mathematical modeling of spinning elastic bodies for modal analysis.
NASA Technical Reports Server (NTRS)
Likins, P. W.; Barbera, F. J.; Baddeley, V.
1973-01-01
The problem of modal analysis of an elastic appendage on a rotating base is examined to establish the relative advantages of various mathematical models of elastic structures and to extract general inferences concerning the magnitude and character of the influence of spin on the natural frequencies and mode shapes of rotating structures. In realization of the first objective, it is concluded that except for a small class of very special cases the elastic continuum model is devoid of useful results, while for constant nominal spin rate the distributed-mass finite-element model is quite generally tractable, since in the latter case the governing equations are always linear, constant-coefficient, ordinary differential equations. Although with both of these alternatives the details of the formulation generally obscure the essence of the problem and permit very little engineering insight to be gained without extensive computation, this difficulty is not encountered when dealing with simple concentrated mass models.
Automatic movie skimming with general tempo analysis
NASA Astrophysics Data System (ADS)
Lee, Shih-Hung; Yeh, Chia-Hung; Kuo, C. C. J.
2003-11-01
Story units are extracted by general tempo analysis, including the tempos of audio and visual information, in this research. Although many schemes have been proposed to successfully segment video data into shots using basic low-level features, how to group shots into meaningful units called story units is still a challenging problem. By focusing on a certain type of video such as sports or news, we can explore models with specific application domain knowledge. For movie content, many heuristic rules based on audiovisual clues have been proposed with limited success. We propose a method to extract story units using general tempo analysis. Experimental results are given to demonstrate the feasibility and efficiency of the proposed technique.
JPL Test Effectiveness Analysis
NASA Technical Reports Server (NTRS)
Shreck, Stephanie; Sharratt, Stephen; Smith, Joseph F.; Strong, Edward
2008-01-01
1) The pilot study provided meaningful conclusions that are generally consistent with the earlier Test Effectiveness work done between 1992 and 1994: a) Analysis of pre-launch problem/failure reports is consistent with the earlier work. b) Analysis of post-launch early mission anomaly reports indicates that there are more software issues in newer missions, and that the no-test category for identification of post-launch failures is more significant than in the earlier analysis. 2) Future work includes understanding how differences between missions affect these analyses: a) There are large variations in the number of problem reports and issues documented by the different projects/missions. b) Some missions do not have any reported environmental test anomalies, even though environmental tests were performed. 3) Each project/mission has different standards and conventions for filling out the PFR forms; the industry may wish to address this issue: a) Existing problem reporting forms are intended to document and track problems, failures, and issues for the projects, to ensure high quality. b) Existing problem reporting forms are not intended for data mining.
Program Helps To Determine Chemical-Reaction Mechanisms
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Radhakrishnan, K.
1995-01-01
General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.
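A rough, much-reduced sketch of the kind of computation such a code performs: integrate a small isothermal gas-phase mechanism and estimate the sensitivity of a species concentration to a rate constant by finite differences. The two-step A -> B -> C mechanism, the rate values, and the SciPy-based integration are illustrative assumptions with no connection to LSENS's actual FORTRAN implementation or its direct sensitivity equations.

```python
# Hedged sketch: kinetics of A -> B -> C and a finite-difference estimate of
# the sensitivity d[C](t_end)/dk1. Mechanism and rate constants are made up.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def final_c(k1, k2=0.5, y0=(1.0, 0.0, 0.0), t_end=5.0):
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k1, k2), rtol=1e-8, atol=1e-10)
    return sol.y[2, -1]

if __name__ == "__main__":
    k1 = 1.0
    eps = 1e-4
    # Central-difference sensitivity of the final [C] with respect to k1.
    sens = (final_c(k1 + eps) - final_c(k1 - eps)) / (2 * eps)
    print("[C](t_end) =", final_c(k1), "  d[C]/dk1 ~=", sens)
```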
Data-driven advice for applying machine learning to bioinformatics problems
Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.
2017-01-01
As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881
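A small sketch in the spirit of the analysis described above, comparing a few scikit-learn classifiers by cross-validation on a bundled dataset; the chosen algorithms, dataset, and tiny hyperparameter grid are stand-ins for the paper's 13 algorithms and 165 problems, not its actual pipeline.

```python
# Hedged sketch: cross-validated comparison of a few classifiers, with a
# small hyperparameter grid for one of them (nested cross-validation).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GridSearchCV(          # tune one model as an example
        GradientBoostingClassifier(random_state=0),
        {"learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
        cv=3,
    ),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name:18s} {scores.mean():.3f} +/- {scores.std():.3f}")
```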
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
2013-01-01
Background Refugees are a particularly vulnerable group in relation to the development of mental illness and many may have been subjected to torture or other traumatic experiences. General practitioners are gatekeepers for access to several parts of the psychiatric system and knowledge of their patients’ refugee background is crucial to secure adequate care. The aim of this study is to investigate how general practitioners experience providing care to refugees with mental health problems. Methods The study was conducted as part of an EU project on European Best Practices in Access, Quality and Appropriateness of Health Services for Immigrants in Europe (EUGATE). Semi-structured interviews were carried out with nine general practitioners in the vicinity of Copenhagen purposively selected from areas with a high proportion of immigrants. The analysis of the interviews is inspired by qualitative content analysis. Results One of the main themes identified in the analysis is communication. This includes the use of professional interpreters and that communication entails more than sharing a common language. Quality of care is another theme that emerges and includes awareness of possible trauma history, limited possibilities for refugees to participate in certain treatments due to language barriers and feelings of hopelessness in the general practitioners. The general practitioners may also choose different referral pathways for refugees and they report that their patients lack understanding regarding the differences between psychological problems and physical symptoms. Conclusion General practitioners experience that providing care to refugees differs from providing care for patients from the majority population. The different strategies employed by the general practitioners in the health care treatment of refugees may be the result of the great diversity in the organisation of general practice in Denmark and the lack of a national strategy in the health care management of refugees. The findings from this study suggest that the development of conversational models for general practitioners including points to be aware of in the treatment of refugee patients may serve as a support in the management of refugee patients in primary care. PMID:23356401
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Modular thermal analyzer routine, volume 1
NASA Technical Reports Server (NTRS)
Oren, J. A.; Phillips, M. A.; Williams, D. R.
1972-01-01
The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those requiring complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of the computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent of that required for the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than that of the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.
NASA Astrophysics Data System (ADS)
Śloderbach, Zdzisław
2016-05-01
This paper reports the results of a study into global and local conditions of uniqueness and the criteria excluding the possibility of bifurcation of the equilibrium state for small strains. The conditions and criteria are derived on the basis of an analysis of the problem of uniqueness of a solution involving the basic incremental boundary problem of coupled generalized thermo-elasto-plasticity. This work forms a follow-up of previous research (Śloderbach in Bifurcations criteria for equilibrium states in generalized thermoplasticity, IFTR Reports, 1980, Arch Mech 3(35):337-349, 351-367, 1983), but contains a new derivation of global and local criteria excluding a possibility of bifurcation of an equilibrium state regarding a comparison body dependent on the admissible fields of stress rate. The thermal elasto-plastic coupling effects, non-associated laws of plastic flow and influence of plastic strains on thermoplastic properties of a body were taken into account in this work. Thus, the mathematical problem considered here is not a self-conjugated problem.
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules as combinations of Boolean covariates that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and, respectively, a sparse generalized algebraic normal form of the variables from the observation data, and we evaluate their performance numerically.
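A toy sketch of one of the two directions just described: greedily building a sparse disjunctive-normal-form (an OR of short AND-terms) description of a Boolean response from observation data. The scoring rule (cover as many remaining positive rows as possible while excluding all negative rows) and the data layout are assumptions for illustration and are simpler than the algorithms the paper actually formulates.

```python
# Hedged sketch: greedy synthesis of a sparse DNF (an OR of short AND-terms)
# that covers the positive rows of a Boolean data set without covering any
# negative row. Limits and scoring rule are illustrative assumptions.
from itertools import combinations

def term_matches(term, row):
    """term: tuple of (column_index, required_value) literals."""
    return all(row[i] == v for i, v in term)

def greedy_dnf(rows, y, max_literals=2, max_terms=5):
    positives = {i for i, label in enumerate(y) if label}
    negatives = [rows[i] for i, label in enumerate(y) if not label]
    n_cols = len(rows[0])
    dnf = []
    while positives and len(dnf) < max_terms:
        best_term, best_cover = None, set()
        for k in range(1, max_literals + 1):
            for cols in combinations(range(n_cols), k):
                for bits in range(2 ** k):
                    vals = tuple((bits >> j) & 1 for j in range(k))
                    term = tuple(zip(cols, vals))
                    if any(term_matches(term, row) for row in negatives):
                        continue                     # would be a spurious rule
                    cover = {i for i in positives if term_matches(term, rows[i])}
                    if len(cover) > len(best_cover):
                        best_term, best_cover = term, cover
        if best_term is None:
            break
        dnf.append(best_term)
        positives -= best_cover
    return dnf

if __name__ == "__main__":
    rows = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0), (1, 1, 1)]
    y = [0, 1, 1, 0, 1]
    print(greedy_dnf(rows, y))   # -> [((0, 1),)], i.e. the single rule "x0 == 1"
```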
2017-03-23
Fragmentary record: …solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or… A method flag designates the method by which the changed/new assignment problem instance is solved (methodFlag = 0: SMAWarmstart returns a matching)… of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified…
Improving Learning Performance Through Rational Resource Allocation
NASA Technical Reports Server (NTRS)
Gratch, J.; Chien, S.; DeJong, G.
1994-01-01
This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.
NASA Technical Reports Server (NTRS)
Johnson, F. T.
1980-01-01
A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths that vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.
Mannarini, Stefania; Balottin, Laura; Toldo, Irene; Gatta, Michela
2016-10-01
The study, conducted on Italian preadolescents aged 11 to 13 belonging to the general population, aims to investigate the relationship between emotional functioning, namely alexithymia, and the risk of developing behavioral and emotional problems measured using the Strengths and Difficulties Questionnaire. The latent class analysis approach made it possible to identify two latent variables, accounting for internalizing problems (emotional symptoms and difficulties in emotional awareness) and for externalizing problems (conduct problems and hyperactivity, problematic relationships with peers, poor prosocial behaviors and externally oriented thinking). The two latent variables featured two latent classes: difficulty in dealing with problems, and strength to face problems, the latter being representative of most of the healthy participants, with specific gender differences. Along with the analysis of psychopathological behaviors, the study of resilience and strengths can prove to be a key step in developing valuable preventive approaches to tackle psychiatric disorders. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Improving Generalization of Academic Skills: Commentary on the Special Issue
ERIC Educational Resources Information Center
Skinner, Christopher H.; Daly, Edward J., III
2010-01-01
Behavior analysts have long been interested in developing and promoting the use of effective generalization strategies for behavioral interventions. Perhaps because research on academic performance has lagged behind in the field of applied behavior analysis, far less research on this topic has been conducted for academic performance problems. The…
NASA Astrophysics Data System (ADS)
Vorontsova, Elena; Vorontsov, Andrey; Drozdenko, Yuriy
2017-11-01
The article analyzes problems in maintaining the environmental safety of mining enterprises. The aim of the work was to formulate proposals whose implementation, in the authors' opinion, could raise the level of environmental safety of the mining industry and ultimately ensure environmentally oriented growth of the Russian economy.
The Multidimensional Structure of University Absenteeism: An Exploratory Study
ERIC Educational Resources Information Center
López-Bonilla, Jesús Manuel; López-Bonilla, Luis Miguel
2015-01-01
Absenteeism has been a common and widespread problem in universities for several years. This problem has become a permanent feature in academic studies in general, yet it has received scant empirical research attention. This work focuses on the analysis of the factors that determine university absenteeism. It evaluates a series of…
ERIC Educational Resources Information Center
Algozzine, Bob; Newton, J. Stephen; Horner, Robert H.; Todd, Anne W.; Algozzine, Kate
2012-01-01
Problem solving is fundamental to psychoeducational assessment practices and generally grounded in activities related to identifying problems, developing and refining hypotheses, generating solutions, developing and implementing actions, and evaluating outcomes. While the process is central to response-to-intervention practices as well, little…
The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.
ERIC Educational Resources Information Center
Dunivant, Noel
The results of six major projects are discussed, including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
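As a concrete instance of the standard two-terminal case that such multi-terminal methods generalize (a textbook example, not taken from the paper above), the logarithm maps the upper half-plane onto an infinite strip, turning two electrodes on the real axis into the plates of a parallel-plate capacitor:

\[
w = \ln z, \qquad \{\, 0 < \arg z < \pi \,\} \longmapsto \{\, 0 < \operatorname{Im} w < \pi \,\}, \qquad \phi(z) = \frac{V}{\pi}\,\arg z,
\]

so the potential is \(0\) on the grounded electrode along the positive real axis and \(V\) on the electrode along the negative real axis, and the equipotentials in the mapped strip are those of a uniform capacitor.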
Microgravity isolation system design: A modern control synthesis framework
NASA Technical Reports Server (NTRS)
Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.
1994-01-01
Manned orbiters will require active vibration isolation for acceleration-sensitive microgravity science experiments. Since umbilicals are highly desirable or even indispensable for many experiments, and since their presence greatly affects the complexity of the isolation problem, they should be considered in control synthesis. In this paper a general framework is presented for applying extended H2 synthesis methods to the three-dimensional microgravity isolation problem. The methodology integrates control and state frequency weighting and input and output disturbance accommodation techniques into the basic H2 synthesis approach. The various system models needed for design and analysis are also presented. The paper concludes with a discussion of a general design philosophy for the microgravity vibration isolation problem.
2005-12-01
Abbreviations from the source report: EPFM, elastic-plastic fracture mechanics; FCG, fatigue crack growth; FEA, finite element analysis; FKN, ANSYS FEA command for contact pair stiffness; FTOLN... Current TMF research is too general for thermal gradient applications. Moreover, the nature of a cyclically heated, localized region of higher... When separating this problem into the general engineering issues that are germane to the application, one can find much published research that is...
Generalized continued fractions and ergodic theory
NASA Astrophysics Data System (ADS)
Pustyl'nikov, L. D.
2003-02-01
In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
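The classical construction that this theory generalizes can be sketched in a few lines of Python; the ergodic, multidimensional and infinite-dimensional generalizations described in the abstract are not reproduced here.

```python
from math import floor, sqrt

def continued_fraction(x, n_terms=10):
    """Classical expansion x = a0 + 1/(a1 + 1/(a2 + ...))."""
    terms = []
    for _ in range(n_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1.0 / frac
    return terms

def convergent(terms):
    """Collapse a finite expansion back into a rational approximation."""
    value = float(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1.0 / value
    return value

cf = continued_fraction(sqrt(2), 8)   # sqrt(2) = [1; 2, 2, 2, ...]
print(cf, convergent(cf))
```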
Charles, Janice; Harrison, Christopher M; Britt, Helena
2011-11-01
The aim of this study was to examine changes over four decades in children's psychological problems managed in Australian general practice and to describe recent management of these problems. Analysis of GP encounters with children, using data from the BEACH study, an on-going, cross-sectional, national survey of general practice, provides contemporary results. Comparisons with two related studies, 1970-1971 (from published reports) and 1990-1991 (secondary analysis), describe changes over time. Changes over time: psychological problems accounted for 2% of all children's problems managed in 1971, 1.3% in 1990-1991 and 2.6% in 2008-2009. In 1971, non-organic enuresis accounted for 30% of children's psychological problems but only 2.7% in 2008-2009. Insomnia showed a similar pattern. Between 1990-1991 and 2008-2009, ADHD increased from 0.8% to 14.7%, and from 2000-2001 to 2008-2009, autism spectrum disorders rose from 4.9% to 11%. Current practice: the most common psychological problems managed for children aged less than 18 years were anxiety, depression, intellectual impairment and ADHD. Among children aged 0-5 years, sleep disturbance and intellectual impairment were the main problems; for 6-11 year olds, anxiety and ADHD; and for 12-17 year olds, depression. Boys were significantly more likely to be managed for intellectual impairment, ADHD and autism spectrum disorders than were girls, who were more likely to be managed for depression. The medication rate was low at 19 per 100 psychological problems, although higher for depression and ADHD. Referrals were given at a high rate. Counselling was also provided often, except in the management of ADHD. Access to the three studies allowed consideration of trends over a forty-year period, showing the development of newly defined conditions which have replaced childhood diagnoses of past decades. The results demonstrate that GP involvement in children's mental health care management has grown significantly over the past 20 years.
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and several cryptosystems are based on it. Assuming an oracle for the shortest vector problem of a lattice, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that weights are chosen from different intervals and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
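A minimal sketch of the quantity at stake. The density below follows the usual Lagarias-Odlyzko convention, d = n / log2(max weight); that convention, the interval bounds and the instance sizes are assumptions used here for illustration only.

```python
import math
import random

def density(weights):
    """Density of a subset sum instance: n / log2(max weight)
    (the usual convention for low-density attacks)."""
    return len(weights) / math.log2(max(weights))

# Weights from a single interval (the usual knapsack-cryptosystem assumption)
uniform_weights = [random.randrange(1, 2**100) for _ in range(50)]

# Weights from two different intervals, the situation studied in the paper
mixed_weights = ([random.randrange(1, 2**60) for _ in range(25)] +
                 [random.randrange(2**90, 2**100) for _ in range(25)])

print("single-interval density:", density(uniform_weights))
print("mixed-interval density: ", density(mixed_weights))
```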
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.; Berdnarcyk, Brett A.; Arnold, Steven M.; Collier, Craig S.
2009-01-01
This preliminary report demonstrates the capabilities of the recently developed software implementation that links the Generalized Method of Cells to explicit finite element analysis by extending a previous development which tied the generalized method of cells to implicit finite elements. The multiscale framework, which uses explicit finite elements at the global scale and the generalized method of cells at the microscale, is detailed. This implementation is suitable for both dynamic mechanics problems and static problems exhibiting drastic and sudden changes in material properties, which often encounter convergence issues with commercial implicit solvers. Progressive failure analysis of stiffened and un-stiffened fiber-reinforced laminates subjected to normal blast pressure loads was performed and is used to demonstrate the capabilities of this framework. The focus of this report is to document the development of the software implementation; thus, no comparison between the results of the models and experimental data is drawn. However, the validity of the results is assessed qualitatively through the observation of failure paths, stress contours, and the distribution of system energies.
NASA Astrophysics Data System (ADS)
Berk, N. F.
2014-03-01
We present a general approach to analyzing elastic scattering for those situations where the incident beam is prepared as an incoherent ensemble of wave packets of a given arbitrary shape. Although wave packets, in general, are not stationary solutions of the Schrödinger equation, the analysis of elastic scattering data treats the scattering as a stationary-state problem. We thus must gate the wave packet, coherently distorting its shape in a manner consistent with the elastic condition. The resulting gated scattering amplitudes (e.g., reflection coefficients) thus are weighted coherent sums of the constituent plane-wave scattering amplitudes, with the weights determined by the shape of the incident wave packet as "filtered" by energy gating. We develop the gating formalism in general and apply it to the problem of neutron scattering from ruled gratings described by Majkrzak et al. in a companion paper. The required exact solution of the associated problem of plane-wave reflection from gratings also is derived.
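Schematically, and only as a paraphrase of the abstract (the exact weights are derived in the paper), the gated reflection coefficient takes the form of a coherent, packet-weighted sum over plane-wave amplitudes,

\[
r_{\mathrm{gated}}(k_0) \;\propto\; \int dk \; g(k; k_0)\, \tilde{\psi}(k)\, r(k),
\]

where \(\tilde{\psi}(k)\) denotes the momentum-space shape of the incident wave packet, \(g(k; k_0)\) the energy-gating filter enforcing the elastic condition at nominal wavenumber \(k_0\), and \(r(k)\) the plane-wave reflection coefficients; these symbols are introduced here for illustration and are not the paper's notation.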
Single pilot IFR accident data analysis
NASA Technical Reports Server (NTRS)
Harris, D. F.
1983-01-01
The aircraft accident data recorded by the National Transportation Safety Board (NTSB) for 1964-1979 were analyzed to determine what problems exist in the general aviation (GA) single pilot instrument flight rules (SPIFR) environment. A previous study conducted in 1978 for the years 1964-1975 provided a basis for comparison. This effort was generally limited to SPIFR pilot error landing phase accidents but includes some SPIFR takeoff and enroute accident analysis as well as some dual pilot IFR accident analysis for comparison. Analysis was performed for 554 accidents, of which 39% (216) occurred during the years 1976-1979.
Child dental fear and general emotional problems: a pilot study.
Krikken, J B; ten Cate, J M; Veerkamp, J S J
2010-12-01
The aim was to investigate the relation between general emotional and behavioural problems of the child and dental anxiety and dental behavioural management problems. Dental treatment involves many potentially unpleasant stimuli, which all may lead to the development of dental anxiety and behavioural management problems (BMP). It is still unclear why some children get anxious in the dental situation while others, with a comparable dental history, do not. Besides the latent inhibition theory, it is suggested that this can be explained by differences in child rearing and personality traits. The sample consisted of 50 children (4-12 years old) and their parents. Parents filled out the Child Fear Survey Schedule Dental Subscale (CFSS-DS) and the Child Behaviour Checklist (CBCL) on behalf of their child. Child behaviour during consecutive dental treatments was assessed using the Venham scale. Thirty-nine children (21 boys), with a mean CFSS score of 40.4, were included in the analysis. Children aged 4 and 5 years who had sleeping problems, attention problems and aggressive behaviour, as scored by parents on the CBCL, displayed more disruptive behaviour during dental treatment. Children with emotionally reactive and attention problems were more anxious. In this pilot study, a possible relation between general emotional and behavioural problems of young children and dental anxiety was shown, as was a relation between emotional and behavioural problems and dental behavioural management problems. Because of the small number of subjects in our study, further research will be needed to confirm these results.
Knowledge representation for commonality
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1990-01-01
Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.
Mason, W Alex; January, Stacy-Ann A; Chmelka, Mary B; Parra, Gilbert R; Savolainen, Jukka; Miettunen, Jouko; Järvelin, Marjo-Riitta; Taanila, Anja; Moilanen, Irma
2016-07-01
Research indicates that risk factors cluster in the most vulnerable youth, increasing their susceptibility to adverse developmental outcomes. However, most studies of cumulative risk are cross-sectional or short-term longitudinal, and have been based on data from the United States or the United Kingdom. Using data from the Northern Finland Birth Cohort 1986 Study (NFBC1986), we examined cumulative contextual risk (CCR) at birth as a predictor of adolescent substance use and co-occurring conduct problems and risky sex to determine the degree to which CCR predicts specific outcomes over and above its effect on general problem behavior, while testing for moderation of associations by gender. Analyses of survey data from 6963 participants of the NFBC1986 followed from the prenatal/birth period into adolescence were conducted using structural equation modeling. CCR had long-term positive associations with first-order substance use, conduct problems, and risky sex factors, and, in a separate analysis, with a second-order general problem behavior factor. Further analyses showed that there was a positive specific effect of CCR on risky sex, over and above general problem behavior, for girls only. This study, conducted within the Finnish context, showed that CCR at birth had long-term general and specific predictive associations with substance use and co-occurring problem behaviors in adolescence; effects on risky sex were stronger for girls. Results are consistent with the hypothesis that early exposure to CCR can have lasting adverse consequences, suggesting the need for early identification and intervention efforts for vulnerable children. Copyright © 2016 Elsevier Ltd. All rights reserved.
Topological analysis of the motion of an ellipsoid on a smooth plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivochkin, M Yu
2008-06-30
The problem of the motion of a dynamically and geometrically symmetric heavy ellipsoid on a smooth horizontal plane is investigated. The problem is integrable and can be considered a generalization of the problem of motion of a heavy rigid body with a fixed point in the Lagrangian case. The Smale bifurcation diagrams are constructed. Surgeries of tori are investigated using methods developed by Fomenko and his students. Bibliography: 9 titles.
McCormick, Jessica; Delfabbro, Paul; Denson, Linley A
2012-12-01
The aim of this study was to conduct an empirical investigation of the validity of Jacobs' (in J Gambl Behav 2:15-31, 1986) general theory of addictions in relation to gambling problems associated with electronic gaming machines (EGM). Regular EGM gamblers (n = 190) completed a series of standardised measures relating to psychological and physiological vulnerability, substance use, dissociative experiences, early childhood trauma and abuse, and problem gambling (the Problem Gambling Severity Index). Statistical analysis using structural equation modelling revealed clear relationships between childhood trauma and life stressors and psychological vulnerability, dissociative-like experiences and problem gambling. These findings confirm and extend a previous model validated by Gupta and Derevensky (in J Gambl Stud 14: 17-49, 1998) using an adolescent population. The significance of these findings is discussed for existing pathway models of problem gambling, for Jacobs' theory, and for clinicians engaged in assessment and intervention.
The Mutual Relationship Between Men's Drinking and Depression: A 4-Year Longitudinal Analysis.
Lee, Soo Bi; Chung, Sulki; Lee, HaeKook; Seo, Jeong Seok
2018-03-17
The purpose of the current study was to examine the longitudinal reciprocal relationship between depression and drinking among male adults from the general population. This study used a panel dataset from the Korean Welfare Panel (from 2011 to 2014). The subjects were 2511 male adults aged between 20 and 65 years. Based on the Korean Version of the Alcohol Use Disorders Identification Test (AUDIT-K) scores, 2191 subjects were categorized as the control group (AUDIT-K < 12) and 320 subjects were categorized as the problem drinking group (AUDIT-K ≥ 12). An autoregressive cross-lagged modelling analysis was performed to investigate the mutual relationship between problem drinking and depression measured consecutively over time. The results indicated that alcohol drinking and depression were stable over time. In the control group, there was no significant causal relationship between problem drinking and depression, while in the problem drinking group, drinking in the previous year significantly influenced depression in the following second, third and fourth years. This study compared normal and problem drinkers and examined the 4-year mutual relationship between depression and drinking. No longitudinal interaction between drinking and depression occurred in normal drinkers, while drinking intensified depression over time in problem drinkers. This study found that problem drinking was a risk factor for the development of depression. Therefore, more attention should be given to problem alcohol use in the general population and to the evaluation of past alcohol use history in patients with depressive disorders.
Davidsen, Annette Sofie
2009-06-01
General practitioners (GPs) treat more than 90% of common mental disorders. However, the content of their interventions remains undefined. The present study aimed to explore GPs' processes of understanding the patients with emotional problems. The study was qualitative using semi-structured interviews with 14 general practitioners sampled purposively. Observation was done in the surgeries of four of the GPs. Analysis of the interviews was made by Interpretative Phenomenological Analysis (IPA). Observation notes were analysed from a hermeneutic-phenomenological perspective, inspired by IPA. GPs had very different approaches to patients with emotional problems. Physical symptoms were the usual reason for consulting the GP. Understanding patients' perception of the meaning of their bodily symptoms in their complex life-situation was considered important by some of the participants. Arriving at this understanding often occurred through the narrative delivered in different narrative styles mirroring the patients' mental state. Awareness of relational factors and self-awareness and self-reflexivity on the part of the GP influenced this process. Other participants did not enter this process of understanding patients' emotional problems. The concept of mentalization could be used to describe GPs' processes of understanding their patients when making psychosocial interventions and could form an important ingredient in a general practice theory in this field. Only some participants had a mentalizing approach. The study calls attention to the advantage of training this capacity for promoting professional treatment of patients and a professional dialogue across sector borders.
Application of variational and Galerkin equations to linear and nonlinear finite element analysis
NASA Technical Reports Server (NTRS)
Yu, Y.-Y.
1974-01-01
The paper discusses the application of the variational equation to nonlinear finite element analysis. The problem of beam vibration with large deflection is considered. The variational equation is shown to be flexible in both the solution of a general problem and in the finite element formulation. Difficulties are shown to arise when Galerkin's equations are used in the consideration of the finite element formulation of two-dimensional linear elasticity and of the linear classical beam.
Pragmatic failure, mind style and characterisation in fiction about autism
2014-01-01
This article presents an analysis of different types of pragmatic failure in the interactional behaviour of the ‘autistic’ protagonists of three recent novels. Three main types of pragmatic failure occur across all three novels: problems with informativeness and relevance in conversational contributions; problems with face management resulting in unintentional impolite behaviours; and problems with the interpretation of figurative language. These problems are salient and frequent enough to contribute to the projection of distinctive mind styles, and more generally to the characterisation of the protagonists as individuals with communication and socialisation difficulties that are likely to both reflect and reinforce general perceptions of autism-spectrum disorders. It is also argued that pragmatic failure contributes to the potential defamiliarisation of ‘normal’ communication, which is presented as being fraught with obscurity, ambiguity and insincerity. PMID:29710882
ERIC Educational Resources Information Center
Genthon, Michele; Joscelyn, Mary K., Ed.
Chief academic officers at 1,053 institutions of higher education across the United States were surveyed about the barriers to improving teaching and learning. Using factor analysis, responses were reduced to nine general problem areas. In order of importance from most important to least important, the problems identified were: financial support,…
ERIC Educational Resources Information Center
Busacca, Margherita L.; Anderson, Angelika; Moore, Dennis W.
2015-01-01
This review evaluates self-management literature targeting problem behaviors of primary school students in general education settings. Thirty-one single-case design studies met inclusion criteria, of which 16 demonstrated adequate methodological rigor, according to What Works Clearinghouse (WWC) design standards. Visual analysis and WWC…
What Public Media Reveals about MOOCs: A Systematic Analysis of News Reports
ERIC Educational Resources Information Center
Kovanovic, Vitomir; Joksimovic, Srecko; Gaševic, Dragan; Siemens, George; Hatala, Marek
2015-01-01
One of the striking differences between massive open online courses (MOOCs) and previous innovations in the education technology field is the unprecedented interest and involvement of the general public. As MOOCs address pressing problems in higher education and the broader educational practice, awareness of the general public debate around MOOCs…
ERIC Educational Resources Information Center
FERSTER, C.B.
These experiments with verbal behavior were carried out as an extension and adaptation of general laboratory principles developed with animals. The experiments covered three areas. The first was an application of general principles of verbal behavior, largely based on Skinner's analysis, to the problems of teaching a second language. Actual…
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
A chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis is very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, along with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over the conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving
NASA Astrophysics Data System (ADS)
Rakkapao, S.; Prasitpong, S.
2018-03-01
This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.
Functional analysis screening for multiple topographies of problem behavior.
Bell, Marlesha C; Fahmie, Tara A
2018-04-23
The current study evaluated a screening procedure for multiple topographies of problem behavior in the context of an ongoing functional analysis. Experimenters analyzed the function of a topography of primary concern while collecting data on topographies of secondary concern. We used visual analysis to predict the function of secondary topographies and a subsequent functional analysis to test those predictions. Results showed that a general function was accurately predicted for five of six (83%) secondary topographies. A specific function was predicted and supported for a subset of these topographies. The experimenters discuss the implication of these results for clinicians who have limited time for functional assessment. © 2018 Society for the Experimental Analysis of Behavior.
13 CFR 315.16 - Adjustment proposal requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... analysis of the Firm's problems, strengths and weaknesses and an assessment of its prospects for recovery... generally consists of knowledge-based services such as market penetration studies, customized business...
Atmospheric planetary wave response to external forcing
NASA Technical Reports Server (NTRS)
Stevens, D. E.; Reiter, E. R.
1985-01-01
The tools of observational analysis, complex general circulation modeling, and simpler modeling approaches were combined in order to attack problems on the largest spatial scales of the earth's atmosphere. Two different models were developed and applied. The first is a two level, global spectral model which was designed primarily to test the effects of north-south sea surface temperature anomaly (SSTA) gradients between the equatorial and midlatitude north Pacific. The model is nonlinear, contains both radiation and a moisture budget with associated precipitation and surface evaporation, and utilizes a linear balance dynamical framework. Supporting observational analysis of atmospheric planetary waves is briefly summarized. More extensive general circulation models have also been used to consider the problem of the atmosphere's response, especially in the horizontal propagation of planetary scale waves, to SSTA.
Structural testing for static failure, flutter and other scary things
NASA Technical Reports Server (NTRS)
Ricketts, R. H.
1983-01-01
Ground test and flight test methods are described that may be used to highlight potential structural problems that occur on aircraft. Primary interest is focused on light-weight general aviation airplanes. The structural problems described include static strength failure, aileron reversal, static divergence, and flutter. An example of each of the problems is discussed to illustrate how the data acquired during the tests may be used to predict the occurrence of the structural problem. While some rules of thumb for the prediction of structural problems are given, the report is not intended to be used explicitly as a structural analysis handbook.
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The program is written in FORTRAN 5 for the Univac 1108 computer.
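A minimal modern sketch of the same idea, in Python rather than FORTRAN 5: the model y = a*exp(b*x) is expanded to first order in a Taylor series about the current parameter estimate, and the resulting linear least-squares problem is solved at each iteration (a Gauss-Newton step). The model form, starting values, and synthetic data are illustrative assumptions, not taken from the report.

```python
import numpy as np

def fit_exponential(x, y, n_iter=20):
    """Fit y ~ a * exp(b * x) by repeated linearization (Gauss-Newton)."""
    # Start from the log-linear fit log(y) ~ log(a) + b*x (requires y > 0)
    b, log_a = np.polyfit(x, np.log(y), 1)
    a = np.exp(log_a)
    for _ in range(n_iter):
        f = a * np.exp(b * x)
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]
    return a, b

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 40)
y = 2.5 * np.exp(1.3 * x) + rng.normal(scale=0.05, size=x.size)
print(fit_exponential(x, y))   # close to (2.5, 1.3)
```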
Merging Two Futures Concepts: Issues Management and Policy Impact Analysis.
ERIC Educational Resources Information Center
Renfro, William L.; Morrison, James L.
1982-01-01
Describes a workshop held during the 1982 World Future Society's Fourth General Assembly on the combined application of issues management and policy impact analysis. The workshop participants applied futures research, forecasting, goal-setting, and policy development techniques to future problems in educational policy. (AM)
General Nature of Multicollinearity in Multiple Regression Analysis.
ERIC Educational Resources Information Center
Liu, Richard
1981-01-01
Discusses multiple regression, a very popular statistical technique in the field of education. One of the basic assumptions in regression analysis requires that independent variables in the equation should not be highly correlated. The problem of multicollinearity and some of the solutions to it are discussed. (Author)
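One common way to quantify the problem, shown here only as an illustrative sketch (the abstract does not prescribe a particular diagnostic), is the variance inflation factor VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of the predictor matrix X (n samples x p predictors).
    Values far above ~10 are a common rule-of-thumb warning of multicollinearity."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
print(variance_inflation_factors(np.column_stack([x1, x2, x3])))
```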
Hintermair, Manfred
2013-01-01
In this study, behavioral problems of deaf and hard-of-hearing (D/HH) school-aged children are discussed in the context of executive functioning and communicative competence. Teachers assessed the executive functions of a sample of 214 D/HH students from general schools and schools for the deaf, using a German version of the Behavior Rating Inventory of Executive Functions (BRIEF-D). This was complemented by a questionnaire that measured communicative competence and behavioral problems (German version of the Strengths and Difficulties Questionnaire; SDQ-D). The results in nearly all the scales show a significantly higher problem rate for executive functions in the group of D/HH students compared with a normative sample of hearing children. In the D/HH group, students at general schools had better scores on most scales than students at schools for the deaf. Regression analysis reveals the importance of executive functions and communicative competence for behavioral problems. The relevance of the findings for pedagogical work is discussed. A specific focus on competencies such as self-efficacy or self-control in educational concepts for D/HH students seems to be necessary in addition to extending language competencies.
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felício B.
2017-12-01
The generalized or extended finite element method (G/XFEM) models the crack by enriching functions of partition of unity type with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types. Thus, one can use numerically built (global-local) enrichment functions to obtain a better approximation procedure. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem with the main crack. This approach is presented in detail by solving three different linear elastic fracture mechanics problems: two plane stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions: analytical solutions, numerical solutions using the standard G/XFEM method and ABAQUS, and results from the literature.
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and hence propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed methods in comparison with state-of-the-art methods.
Lightweight aircraft engines, the potential and problems for use of automotive fuels
NASA Technical Reports Server (NTRS)
Patterson, D. J.
1983-01-01
A comprehensive data survey and analysis for evaluating the use of automotive fuels as a substitute for aviation grade fuel in piston-type general aviation aircraft engines is presented. Historically known problems and potential problems with fuels were reviewed for their possible impact in an aircraft operational environment. The report reviews areas such as fuel specification requirements, combustion knock, preignition, vapor lock, spark plug fouling, additives for fuel and oil, and storage stability.
Conduct problems trajectories and psychosocial outcomes: a systematic review and meta-analysis.
Bevilacqua, Leonardo; Hale, Daniel; Barker, Edward D; Viner, Russell
2017-10-06
There is increasing evidence that youth who follow the early onset persistent (EOP), adolescent-onset (AO) and childhood-limited (CL) trajectories of conduct problems show varying patterns of health, mental health, educational, and social outcomes in adulthood. However, there has been no systematic review and meta-analysis on outcomes associated with different conduct problems trajectories. We systematically reviewed the literature of longitudinal studies considering outcomes of three conduct problems trajectories: EOP, AO, and CL compared with individuals with low levels of conduct problems (low). We performed a series of meta-analyses comparing each trajectory to the low group for eight different outcomes in early adulthood or later. Thirteen studies met our inclusion criteria. Outcomes were mental health (depression), cannabis use, alcohol use, self-reported aggression, official records of antisocial behaviour, poor general health, poor education, and poor employment. Overall, EOP individuals showed significant higher risk of poor outcome followed by AO individuals, CL individuals, and finally participants in the low group. All conduct problems trajectories showed higher risk of poor psychosocial outcomes compared to the low group, but the magnitude of risk differed across trajectories, with a general trend for the EOP to perform significantly worse, followed by the AO and CL. Early intervention is recommended across domains to maximise likelihood of desistance from antisocial behaviour and improvement on several psychosocial outcomes.
Educational Level, Underachievement, and General Mental Health Problems in 10,866 Adolescents.
Tempelaar, Wanda M; de Vos, Nelleke; Plevier, Carolien M; van Gastel, Willemijn A; Termorshuizen, Fabian; MacCabe, James H; Boks, Marco P M
2017-08-01
Previous research suggests that cognitive functioning is associated with the risk of several adult psychiatric disorders. In this study we investigated whether adolescents who perform worse than expected at secondary school are at a higher risk for general mental health problems. In a cross-sectional survey comprising 10,866 Dutch adolescents aged 13 to 16 years, underachievement at secondary school was defined as the discrepancy between predicted school grade and actual grade 1 or 3 years later. Mental health problems were assessed using the Strengths and Difficulties Questionnaire. We investigated the association of underachievement with mental health problems using logistic regression, adjusting for potential confounders. Underachievement was associated with general psychopathology in pupils aged 13 to 14 years (odds ratio [OR], 1.86; 95% confidence interval [CI], 1.47-2.37) and in pupils aged 15 to 16 years (OR, 2.05; 95% CI, 1.67-2.52) in a multivariate analysis including sociodemographic factors. The association between underachievement and mental health problems was attenuated when school factors such as teacher advice and interaction between underachievement and teacher advice were added, but underachievement remained significantly associated with mental health problems in adolescents in the higher educational tracks (pupils aged 13-14 years: OR, 2.22; 95% CI, 1.07-4.60 and OR, 2.41; 95% CI, 1.10-5.30, age 15-16 years: OR, 2.63; 95% CI, 1.38-5.03). In the multivariate analysis including the interaction between underachievement and teacher advice, a significant interaction effect occurs between underachievement and teacher advice in the higher tracks. Values of OR and CI are given for each significant interaction term. In the younger age group (pupils aged 13-14 years) this results in 2 sets of OR and CI. This association was most pronounced for the hyperactivity subscale of the Strengths and Difficulties Questionnaire. Underachievement at secondary school is associated with general mental health problems, especially with hyperactivity symptoms, in pupils who started at high educational tracks. Copyright © 2017 Academic Pediatric Association. All rights reserved.
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) general three-dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.
Tang, Catherine So-Kum; Wu, Anise M S
2010-12-01
A multiple mediation model was proposed to integrate core concepts of the social axioms framework and the social cognitive theory in order to understand gambling behavior. It was hypothesized that the influence of general fate control belief on problem gambling and negative mood would be mediated by gambling-specific beliefs. Data from 773 Chinese college recreational gamblers were collected. The bootstrapping procedure was used to test the multiple mediation hypotheses. Significant indirect effects of fate control belief on problem gambling and negative mood through two gambling-specific mediators were found. Gambling expectancy bias was a more salient mediator than gambling self-efficacy. Fate control belief was also found to have a significant direct effect on negative mood. In general, a high level of general fate control belief was related to greater gambling expectancy bias and lower self-efficacy in resisting gambling, which were in turn related to problem gambling and negative mood. Limitations and implications of the study were discussed.
Flux-Based Finite Volume representations for general thermal problems
NASA Technical Reports Server (NTRS)
Mohan, Ram V.; Tamma, Kumar K.
1993-01-01
Flux-Based Finite Volume (FV) element representations for general thermal problems are given in conjunction with a generalized trapezoidal gamma-T family of algorithms, formulated in the spirit of what we term Lax-Wendroff-based FV formulations. The new flux-based representations offer an improved physical interpretation of the problem along with computationally convenient and attractive features. The space and time discretizations emanate from a conservation form of the governing equation for thermal problems and, in conjunction with the flux-based element representations, give rise to physically improved and locally conservative numerical formulations. The present representations combine improved local conservation properties, improved physical representation and attractive computational features; they are based on a 2D bilinear FV element and can be extended to other cases. Time discretization based on a gamma-T family of algorithms, in the spirit of Lax-Wendroff-based FV formulations, is employed. Numerical examples involving linear/nonlinear steady and transient situations are shown to demonstrate the applicability of the present representations for thermal analysis.
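To illustrate the locally conservative, flux-based idea in the simplest possible setting (a 1D explicit sketch, not the paper's 2D bilinear FV element or its gamma-T time family), each cell is updated from the difference of the fluxes through its two faces, so the total energy is conserved exactly with insulated boundaries.

```python
import numpy as np

def heat_fv_1d(T, alpha, dx, dt, n_steps):
    """Explicit flux-based finite-volume update for 1D heat conduction."""
    T = T.astype(float)
    for _ in range(n_steps):
        flux = -alpha * np.diff(T) / dx              # interior face fluxes
        flux = np.concatenate([[0.0], flux, [0.0]])  # insulated boundaries
        T -= dt / dx * np.diff(flux)                 # cell update from face fluxes
    return T

T0 = np.zeros(50)
T0[20:30] = 1.0
T_end = heat_fv_1d(T0, alpha=1.0, dx=0.02, dt=1e-4, n_steps=200)
print(T0.sum(), T_end.sum())   # total "energy" is conserved
```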
The Effective-One-Body Approach to the General Relativistic Two Body Problem
NASA Astrophysics Data System (ADS)
Damour, Thibault; Nagar, Alessandro
The two-body problem in General Relativity has been the subject of many analytical investigations. After reviewing some of the methods used to tackle this problem (and, more generally, the N-body problem), we focus on a new, recently introduced approach to the motion and radiation of (comparable mass) binary systems: the Effective One Body (EOB) formalism. We review the basic elements of this formalism, and discuss some of its recent developments. Several recent comparisons between EOB predictions and Numerical Relativity (NR) simulations have shown the aptitude of the EOB formalism to provide accurate descriptions of the dynamics and radiation of various binary systems (comprising black holes or neutron stars) in regimes that are inaccessible to other analytical approaches (such as the last orbits and the merger of comparable mass black holes). In synergy with NR simulations, post-Newtonian (PN) theory and Gravitational Self-Force (GSF) computations, the EOB formalism is likely to provide an efficient way of computing the very many accurate template waveforms that are needed for Gravitational Wave (GW) data analysis purposes.
Father Involvement and Behavior Problems among Preadolescents at Risk of Maltreatment
Yoon, Susan; Bellamy, Jennifer L.; Kim, Wonhee; Yoon, Dalhee
2018-01-01
Although there is a well-established connection between father involvement and children’s positive behavioral development in general, this relation has been understudied in more vulnerable and high-risk populations. The aims of this study were to examine how the quantity (i.e., the amount of shared activities) and quality (i.e., perceived quality of the father-child relationship) of father involvement are differently related to internalizing and externalizing behavior problems among preadolescents at risk of maltreatment and test if these associations are moderated by father type and child maltreatment. A secondary data analysis was conducted using data from the Longitudinal Studies of Child Abuse and Neglect (LONGSCAN). Generalized estimating equations analysis was performed on a sample of 499 preadolescents aged 12 years. The results indicated that higher quality of father involvement was associated with lower levels of internalizing and externalizing behavior problems whereas greater quantity of father involvement was associated with higher levels of internalizing and externalizing behavior problems. The positive association between the quantity of father involvement and behavior problems was stronger in adolescents who were physically abused by their father. The association between father involvement and behavior problems did not differ by the type of father co-residing in the home. The findings suggest that policies and interventions aimed at improving the quality of fathers’ relationships and involvement with their children may be helpful in reducing behavior problems in adolescents at risk of maltreatment. PMID:29491703
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem) the authors are able to find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation and therefore cannot be applied to this case.
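For the classical case the abstract starts from, a reduced-rank linear approximation can be written down directly; the sketch below truncates the SVD of the full-rank least-squares fit, and the pseudo-inverse used by lstsq sidesteps an explicitly invertible input autocorrelation matrix. It is only an illustration of the problem being solved, not of the paper's neural-network constructions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least-squares map W with rank(W) <= rank such that Y ~ X @ W."""
    W_full, *_ = np.linalg.lstsq(X, Y, rcond=None)       # full-rank LS solution
    U, s, Vt = np.linalg.svd(X @ W_full, full_matrices=False)
    Y_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # rank-r fitted values
    W_low, *_ = np.linalg.lstsq(X, Y_low, rcond=None)    # refit the low-rank map
    return W_low

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
Y = X @ rng.normal(size=(8, 5)) + 0.1 * rng.normal(size=(200, 5))
W2 = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(W2))   # 2
```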
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
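For contrast, the conventional L2 (vector) Fisher criterion can be solved in closed form as a generalized eigenvalue problem, as sketched below; the paper's L1-2DLDA replaces this eigensolution with an iterative scheme on image matrices, which is not reproduced here. The small ridge term is an assumption added to keep the within-class scatter positive definite.

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, n_components=1):
    """Fisher directions from the generalized eigenproblem S_b w = lambda S_w w."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)           # between-class scatter
    vals, vecs = eigh(Sb, Sw + 1e-8 * np.eye(d))      # generalized eigenproblem
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]
```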
ERIC Educational Resources Information Center
Dudikova, Larysa
2017-01-01
The materials presented in this article are the result of a documentary-bibliographic study, which is based on the use of methods of analysis, synthesis, comparison and generalization. The results of the study have shown that the problem of professional ethics and culture of health care professionals is of significant interest. Problems of ethics,…
ERIC Educational Resources Information Center
Young, Edna Carter; Thompson, Cynthia K.
1987-01-01
The effects of treatment on errors in consonant clusters and in ambisyllabic consonants were investigated in two adults with histories of developmental phonological problems. Results indicated that treatment, consisting of a sound-referenced rebus approach, affected change in production of trained words as well as generalization to untrained words…
Analysis of 4th Grade Students' Problem Solving Skills in Terms of Several Variables
ERIC Educational Resources Information Center
Sungur, Gülcan; Bal, Pervin Nedim
2016-01-01
The aim of this study is to examine whether primary school students' problem-solving levels differ according to some demographic variables. The research is a descriptive study based on the general survey model and was carried out with quantitative research techniques. The sample of the study consisted of 587 primary school students in Grade 4. The…
3D engineered fiberboard : engineering analysis of a new building product
John F. Hunt; Jerrold E. Winandy
2003-01-01
In many forests across the United States, the high forest fuel loadings are contributing to our recent forest fire problems. Many fire-prone timber stands are generally far from traditional timber markets or the timber is not economically valuable enough to cover the costs of removal. To help address this problem, the USDA Forest Products Laboratory has developed a...
Wilson, Justin; Dai, Manhong; Jakupovic, Elvis; Watson, Stanley; Meng, Fan
2007-01-01
Modern video cards and game consoles typically have much better performance to price ratios than that of general purpose CPUs. The parallel processing capabilities of game hardware are well-suited for high throughput biomedical data analysis. Our initial results suggest that game hardware is a cost-effective platform for some computationally demanding bioinformatics problems.
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.; Shi, Y.
1991-01-01
The development of a comprehensive fluid-structure interaction capability within a boundary element computer code is described. This new capability is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach. A number of detailed numerical examples are included at the end of these two sections to validate the formulations and to emphasize both the accuracy and generality of the computer code. A brief review of the recent applicable boundary element literature is included for completeness. The fluid-structure interaction facility is discussed. Once again, several examples are provided to highlight this unique capability. A collection of potential boundary element applications that have been uncovered as a result of work related to the present grant is given. For most of those problems, satisfactory analysis techniques do not currently exist.
Structural Analysis of Covariance and Correlation Matrices.
ERIC Educational Resources Information Center
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…
An Analysis of the Baking Occupation.
ERIC Educational Resources Information Center
Boyadjid, Thomas A; Paoletti, Donald J.
The general purpose of the occupational analysis is to provide workable, basic information dealing with the many and varied duties performed in the baking occupation. Such tasks as choosing ingredients and the actual baking process are logical primary concerns, but also explored are the safety and sanitation factors and management problems in a…
Analysis techniques for multivariate root loci. [a tool in linear control systems
NASA Technical Reports Server (NTRS)
Thompson, P. M.; Stein, G.; Laub, A. J.
1980-01-01
Analysis techniques are developed for the multivariable root locus and the multivariable optimal root locus. The generalized eigenvalue problem is used to compute angles and sensitivities for both types of loci, and an algorithm is presented that determines the asymptotic properties of the optimal root locus.
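The generalized eigenvalue problem referred to here, det(A - lambda*B) = 0, is directly available in standard linear algebra libraries. A minimal sketch with illustrative placeholder matrices (not the loci studied in the report):

```python
import numpy as np
from scipy.linalg import eig

# Generalized eigenvalue problem A v = lambda B v, the computation underlying
# the multivariable root locus calculations (A and B are placeholders here).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
B = np.eye(2)

eigvals, eigvecs = eig(A, B)   # solves det(A - lambda*B) = 0
print(np.sort(eigvals))        # pole locations for this illustrative A, B
```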
Studies of Horst's Procedure for Binary Data Analysis.
ERIC Educational Resources Information Center
Gray, William M.; Hofmann, Richard J.
Most responses to educational and psychological test items may be represented in binary form. However, such dichotomously scored items present special problems when an analysis of correlational interrelationships among the items is attempted. Two general methods of analyzing binary data are proposed by Horst to partial out the effects of…
Approaches to the Analysis of School Costs, an Introduction.
ERIC Educational Resources Information Center
Payzant, Thomas
A review and general discussion of quantitative and qualitative techniques for the analysis of economic problems outside of education is presented to help educators discover new tools for planning, allocating, and evaluating educational resources. The pamphlet covers some major components of cost accounting, cost effectiveness, cost-benefit…
ERIC Educational Resources Information Center
Maines, Laina L.; Bruch, Martha D.
2012-01-01
General chemistry students often have difficulty writing balanced equations and performing stoichiometry calculations for precipitation reactions, in part because of difficulty understanding the symbolic notation used to represent chemical reactions. We have developed a problem-based experiment to improve student learning of these concepts, and…
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in engineering research, but RSM exhibits prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), referred to as the Neuro-Response Surface Method (NRSM). The optimization over the generated response surface is performed by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.
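The neuro-response-surface idea can be sketched with a generic backpropagation network as the surrogate. In the sketch below the objective is a hypothetical stand-in for the time-consuming CFD/FE analysis, scikit-learn's MLPRegressor stands in for the paper's BPANN, and the NSGA-II search itself is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_performance(x):
    # Hypothetical stand-in for a time-consuming performance analysis code.
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))   # sampled design variables
y_train = expensive_performance(X_train)

# Backpropagation network used as the response surface (the NRSM idea).
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# A multi-objective optimizer such as NSGA-II would now query the cheap
# surrogate instead of the expensive analysis at every candidate design.
X_test = rng.uniform(-1.0, 1.0, size=(5, 2))
print(surrogate.predict(X_test))
print(expensive_performance(X_test))
```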
MAC/GMC 4.0 User's Manual: Example Problem Manual. Volume 3
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
This document is the third volume in the three volume set of User's Manuals for the Micromechanics Analysis Code with Generalized Method of Cells Version 4.0 (MAC/GMC 4.0). Volume 1 is the Theory Manual, Volume 2 is the Keywords Manual, and this document is the Example Problems Manual. MAC/GMC 4.0 is a composite material and laminate analysis software program developed at the NASA Glenn Research Center. It is based on the generalized method of cells (GMC) micromechanics theory, which provides access to the local stress and strain fields in the composite material. This access grants GMC the ability to accommodate arbitrary local models for inelastic material behavior and various types of damage and failure analysis. MAC/GMC 4.0 has been built around GMC to provide the theory with a user-friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermo-mechanical loading, generation of output results, and selection of architectures to represent the composite material, have been automated in MAC/GMC 4.0. Finally, classical lamination theory has been implemented within MAC/GMC 4.0 wherein GMC is used to model the composite material response of each ply. Consequently, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. This volume provides in-depth descriptions of 43 example problems, which were specially designed to highlight many of the most important capabilities of the code. The actual input files associated with each example problem are distributed with the MAC/GMC 4.0 software; thus providing the user with a convenient starting point for their own specialized problems of interest.
Disorganized attachment and inhibitory capacity: predicting externalizing problem behaviors.
Bohlin, Gunilla; Eninger, Lilianne; Brocki, Karin Cecilia; Thorell, Lisa B
2012-04-01
The aim of the present study was to investigate whether attachment insecurity, focusing on disorganized attachment, and the executive function (EF) component of inhibition, assessed at age 5, were longitudinally related to general externalizing problem behaviors as well as to specific symptoms of ADHD and Autism spectrum disorder (ASD), and callous-unemotional (CU) traits. General externalizing problem behaviors were also measured at age 5 to allow for a developmental analysis. Outcome variables were rated by parents and teachers. The sample consisted of 65 children with an oversampling of children with high levels of externalizing behaviors. Attachment was evaluated using a story stem attachment doll play procedure. Inhibition was measured using four different tasks. The results showed that both disorganized attachment and poor inhibition were longitudinally related to all outcome variables. Controlling for initial level of externalizing problem behavior, poor inhibition predicted ADHD symptoms and externalizing problem behaviors, independent of disorganized attachment, whereas for ASD symptoms no predictive relations remained. Disorganized attachment independently predicted CU traits.
Health-related quality of life of children with newly diagnosed specific learning disability.
Karande, Sunil; Bhosrekar, Kirankumar; Kulkarni, Madhuri; Thakker, Arpita
2009-06-01
The objective of this study was to measure health-related quality of life (HRQL) of children with newly diagnosed specific learning disability (SpLD) using the Child Health Questionnaire-Parent Form 50. We detected clinically significant deficits (effect size > or = -0.5) in 9 out of 12 domains: limitations in family activities, emotional impact on parents, social limitations as a result of emotional-behavioral problems, time impact on parents, general behavior, physical functioning, social limitations as a result of physical health, general health perceptions and mental health; and in both summary scores (psychosocial > physical). Multivariate analysis revealed having > or = 1 non-academic problem(s) (p < 0.0001), attention-deficit hyperactivity disorder (p = 0.005) or first-born status (p = 0.009) predicted a poor psychosocial summary score; and having > or =1 non-academic problem(s) (p = 0.006) or first-born status (p = 0.035) predicted a poor physical summary score. HRQL is significantly compromised in children having newly diagnosed SpLD.
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
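A minimal two-grid cycle for the 1D Poisson problem illustrates the setting being analyzed. This is my own illustrative sketch, with weighted Jacobi smoothing, injection restriction and linear prolongation as assumed components; the report's analysis concerns the 2D unit square and k-grid V-cycles.

```python
import numpy as np

def jacobi(u, f, h, nsweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(nsweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2     # residual
    rc = r[::2].copy()                                            # restriction (injection)
    n_c = rc.size
    # Solve the coarse problem (rediscretized -u'' with spacing 2h) exactly.
    Ac = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
          - np.diag(np.ones(n_c - 3), -1)) / (2 * h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    return jacobi(u + e, f, h, 3)

n = 65                                 # fine-grid points, including boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)       # exact solution is sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # reaches discretization-error level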
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
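A minimal sketch of the GIST iteration follows. It uses the l1 penalty, whose proximal operator is soft-thresholding, purely as a stand-in for the non-convex penalties (e.g. MCP, SCAD) treated in the paper, together with a Barzilai-Borwein initialized step size and a simple sufficient-decrease line search; the data are synthetic.

```python
import numpy as np

def gist_lasso(A, b, lam, iters=200):
    """GIST-style iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    The paper's GIST targets non-convex penalties whose proximal operators
    are also closed-form; soft-thresholding (l1) is used here only to
    illustrate the prox step, BB initialization, and line search.
    """
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    t = 1.0
    for _ in range(iters):
        x_old, g_old = x.copy(), g.copy()
        while True:
            # Proximal (soft-thresholding) step with step size 1/t.
            z = x_old - g_old / t
            x = np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)
            f_new = 0.5 * np.sum((A @ x - b)**2) + lam * np.sum(np.abs(x))
            f_old = 0.5 * np.sum((A @ x_old - b)**2) + lam * np.sum(np.abs(x_old))
            # Accept when the objective decreases sufficiently, else shrink the step.
            if f_new <= f_old - 1e-4 * t / 2 * np.sum((x - x_old)**2):
                break
            t *= 2.0
        g = A.T @ (A @ x - b)
        s, y = x - x_old, g - g_old
        # Barzilai-Borwein rule initializes the next step size.
        t = max(np.dot(s, y) / max(np.dot(s, s), 1e-12), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(gist_lasso(A, b, lam=0.5)[:8], 3))   # first 5 entries recovered, rest ~0
```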
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration
NASA Astrophysics Data System (ADS)
Kovtunenko, V. A.
2006-10-01
Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, which result in a constrained minimization problem describing equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing curvilinear (nonplanar) cracks to be considered. Utilizing primal-dual methods of shape sensitivity analysis, we obtain the general formula for the shape derivative of the potential energy, which describes an energy-release rate for curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.
NASA Astrophysics Data System (ADS)
Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.
This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, J-integral, and modified crack-closure integral. All of the procedures involve only one analysis step and are incorporated in the post-processor of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a closed-form solution problem under mixed-mode conditions. The accuracy of the described methods is then discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth
NASA Astrophysics Data System (ADS)
Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana
2017-10-01
In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in the solution of this problem is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides imagery of medium and high spatial resolution and supports hyperspectral measurements, with spacecraft carrying tens or hundreds of spectral channels. To process the images, the apparatus of discrete orthogonal transforms, namely wavelet transforms, is used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. Methods used include Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. The novelty of the work lies in investigating the processing of archival satellite images, in particular signal filtering, from the point of view of an ill-posed problem, and in determining the regularization parameters for the discrete orthogonal transformations.
Racial classification in the evolutionary sciences: a comparative analysis.
Billinger, Michael S
2007-01-01
Human racial classification has long been a problem for the discipline of anthropology, but much of the criticism of the race concept has focused on its social and political connotations. The central argument of this paper is that race is not a specifically human problem, but one that exists in evolutionary thought in general. This paper looks at various disciplinary approaches to racial or subspecies classification, extending its focus beyond the anthropological race concept by providing a comparative analysis of the use of racial classification in evolutionary biology, genetics, and anthropology.
Content-addressable read/write memories for image analysis
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Savage, C. D.
1982-01-01
The commonly encountered image analysis problems of region labeling and clustering are found to be cases of a search-and-rename problem which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, allowing the time per operation to be reduced to a constant. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.
General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations
NASA Astrophysics Data System (ADS)
Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen
2007-09-01
Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958]. Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is a general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. In this paper, we shall also discuss another general oblique derivative problem for generalized Chaplygin equations.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is a main factor, compared with the credibility level, for system efficiency. These results can effectively support reasonable irrigation water resources management and agricultural production.
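The linear fractional programming core of such a model can be illustrated with the classical Charnes-Cooper transformation, which converts a ratio objective into an ordinary linear program. The numbers below are hypothetical, and the fuzzy credibility constraints of the GFCCFP formulation are not represented in this sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Linear fractional program:  max (c'x + c0) / (d'x + d0)
#                             s.t. A x <= b, x >= 0, with d'x + d0 > 0.
# Charnes-Cooper substitution y = t*x, t = 1/(d'x + d0) gives an LP in (y, t):
#   max c'y + c0*t   s.t.  A y - b t <= 0,  d'y + d0*t = 1,  y >= 0, t >= 0.
c = np.array([3.0, 1.0]); c0 = 0.0      # hypothetical "benefit" numerator
d = np.array([1.0, 2.0]); d0 = 1.0      # hypothetical "resource" denominator
A = np.array([[1.0, 1.0]]); b = np.array([4.0])

n = c.size
obj = -np.concatenate([c, [c0]])                       # linprog minimizes
A_ub = np.hstack([A, -b.reshape(-1, 1)])
A_eq = np.concatenate([d, [d0]]).reshape(1, -1)
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(b.size),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
print("optimal x:", y / t, "optimal ratio:", -res.fun)   # x = (4, 0), ratio = 2.4 here
```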
Relative contributions of three descriptive methods: implications for behavioral assessment.
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods-the ABC method, the conditional probability method, and the conditional and background probability method-to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.
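The conditional and background probability comparison described above reduces to simple frequency ratios over the observation records. A minimal sketch over synthetic, hypothetical ABC-style data (attention is used as the illustrative consequence; a real descriptive analysis would use coded session observations):

```python
import numpy as np

# Each row: (antecedent_attention_removed, problem_behavior, attention_delivered).
# Synthetic illustration only.
records = np.array([
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])
behavior = records[:, 1] == 1

# Conditional probability of attention following problem behavior ...
p_att_given_beh = records[behavior, 2].mean()
# ... compared with the background (unconditional) probability of attention.
p_att_background = records[:, 2].mean()
print(p_att_given_beh, p_att_background)
```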
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Travel Medicine Encounters of Australian General Practice Trainees-A Cross-Sectional Study.
Morgan, Simon; Henderson, Kim M; Tapley, Amanda; Scott, John; van Driel, Mieke L; Spike, Neil A; McArthur, Lawrie A; Davey, Andrew R; Catzikiris, Nigel F; Magin, Parker J
2015-01-01
Travel medicine is a common and challenging area of clinical practice and practitioners need up-to-date knowledge and experience in a range of areas. Australian general practitioners (GPs) play a significant role in the delivery of travel medicine advice. We aimed to describe the rate and nature of travel medicine consultations, including both the clinical and educational aspects of the consultations. A cross-sectional analysis from an ongoing cohort study of GP trainees' clinical consultations was performed. Trainees contemporaneously recorded demographic, clinical, and educational details of consecutive patient consultations. Proportions of all problems/diagnoses managed in these consultations that were coded "travel-related" and "travel advice" were both calculated with 95% confidence intervals (CIs). Associations of a problem/diagnosis being "travel-related" or "travel advice" were tested using simple logistic regression within the generalized estimating equations (GEE) framework. A total of 856 trainees contributed data on 169,307 problems from 108,759 consultations (2010-2014). Travel-related and travel advice problems were managed at a rate of 1.1 and 0.5 problems per 100 encounters, respectively. Significant positive associations of travel-related problems were younger trainee and patient age; new patient to the trainee and practice; privately billing, larger, urban, and higher socioeconomic status practices; and involvement of the practice nurse. Trainees sought in-consultation information and generated learning goals in 34.7 and 20.8% of travel advice problems, respectively, significantly more than in non-travel advice problems. Significant positive associations of travel advice problems were seeking in-consultation information, generation of learning goals, longer consultation duration, and more problems managed. Our findings reinforce the importance of focused training in travel medicine for GP trainees and adequate exposure to patients in the practice setting. In addition, our findings have implications more broadly for the delivery of travel medicine in general practice. © 2015 International Society of Travel Medicine.
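The "simple logistic regression within the generalized estimating equations (GEE) framework" described above can be reproduced in outline with statsmodels. The data below are synthetic stand-ins for the trainee consultation records, and the variable names are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: each row is one problem/diagnosis managed by a trainee.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "trainee_id": rng.integers(0, 50, n),     # clustering unit (the GEE "group")
    "patient_age": rng.normal(40, 15, n),
    "new_patient": rng.integers(0, 2, n),
})
logit = -3.0 - 0.02 * (df["patient_age"] - 40) + 0.8 * df["new_patient"]
df["travel_related"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Logistic regression within the GEE framework, clustering on trainee.
X = sm.add_constant(df[["patient_age", "new_patient"]])
model = sm.GEE(df["travel_related"].astype(int), X, groups=df["trainee_id"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```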
Developments in boundary element methods - 2
NASA Astrophysics Data System (ADS)
Banerjee, P. K.; Shaw, R. P.
This book is a continuation of the effort to demonstrate the power and versatility of boundary element methods which began in Volume 1 of this series. While Volume 1 was designed to introduce the reader to a selected range of problems in engineering for which the method has been shown to be efficient, the present volume has been restricted to time-dependent problems in engineering. Boundary element formulation for melting and solidification problems is considered along with transient flow through porous elastic media, applications of boundary element methods to problems of water waves, and problems of general viscous flow. Attention is given to time-dependent inelastic deformation of metals by boundary element methods, the determination of eigenvalues by boundary element methods, transient stress analysis of tunnels and caverns of arbitrary shape due to traveling waves, an analysis of hydrodynamic loads by boundary element methods, and acoustic emissions from submerged structures.
Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas
NASA Technical Reports Server (NTRS)
Baty, Roy S.; Farassat, Fereidoun; Hargreaves, John
2007-01-01
Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservations laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.
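For reference, the classical Rankine-Hugoniot conditions recovered by the analysis express conservation of mass, momentum, and energy across the shock; the notation below is the standard gas dynamics convention (subscripts 1 and 2 denote the upstream and downstream states), not necessarily that of the report itself.

```latex
\begin{aligned}
\rho_1 u_1 &= \rho_2 u_2, \\
p_1 + \rho_1 u_1^{2} &= p_2 + \rho_2 u_2^{2}, \\
h_1 + \tfrac{1}{2} u_1^{2} &= h_2 + \tfrac{1}{2} u_2^{2},
\end{aligned}
\qquad \text{with enthalpy } h = e + \frac{p}{\rho}.
```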
Development of guidelines for the definition of the relavant information content in data classes
NASA Technical Reports Server (NTRS)
Schmitt, E.
1973-01-01
The problem of experiment design is defined as an information system consisting of information source, measurement unit, environmental disturbances, data handling and storage, and the mathematical analysis and usage of data. Based on today's concept of effective computability, general guidelines for the definition of the relevant information content in data classes are derived. The lack of a universally applicable information theory and corresponding mathematical or system structure restricts the solvable problem classes to a small set. It is expected that a new relativity theory of information, generally described by a universal algebra of relations, will lead to new mathematical models and system structures capable of modeling any well defined practical problem isomorphic to an equivalence relation at any corresponding level of abstractness.
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced to the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds.
Lu, Haishu; Wang, Zhihua
2017-01-01
This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.
Design sensitivity analysis of nonlinear structural response
NASA Technical Reports Server (NTRS)
Cardoso, J. B.; Arora, J. S.
1987-01-01
A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
Covariance expressions for eigenvalue and eigenvector problems
NASA Astrophysics Data System (ADS)
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
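For a simple eigenvalue of a real matrix, a standard closed form for the Jacobian is d(lambda)/dA_ij = w_i v_j / (w'v), with v and w the right and left eigenvectors. The sketch below, on an illustrative matrix, checks this formula against forward finite differencing, in the spirit of the validation described above; the thesis's own expressions may differ in form and generality.

```python
import numpy as np

def eigenvalue_jacobian(A, k):
    """d(lambda_k)/dA_ij = w_i * v_j / (w^T v) for a simple eigenvalue (real case),
    where v and w are the right and left eigenvectors of A."""
    lam, V = np.linalg.eig(A)
    lamL, W = np.linalg.eig(A.T)   # left eigenvectors of A are right eigenvectors of A^T
    w = W[:, np.argmin(np.abs(lamL - lam[k]))]
    v = V[:, k]
    return np.outer(w, v) / (w @ v), lam[k]

A = np.array([[4.0, 1.0, 0.0],
              [0.5, 3.0, 0.2],
              [0.0, 0.3, 1.0]])
J, lam0 = eigenvalue_jacobian(A, k=0)

# Forward finite-difference check on each matrix entry.
eps, J_fd = 1e-6, np.zeros_like(A)
for i in range(3):
    for j in range(3):
        Ap = A.copy(); Ap[i, j] += eps
        lam_p = np.linalg.eigvals(Ap)
        J_fd[i, j] = (lam_p[np.argmin(np.abs(lam_p - lam0))] - lam0).real / eps
print(np.max(np.abs(J - J_fd)))   # agreement to roughly finite-difference accuracy
```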
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
Frequency sensitivities are the basis for a linear approximation used to compute the change in the natural frequencies with respect to the variables under consideration. The general problem statement for a nonlinear constrained optimization problem is: minimize the objective function f(x), subject to constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, Jaroslav, E-mail: hasling@karlin.mff.cuni.cz; Stebel, Jan, E-mail: stebel@math.cas.cz
2011-04-15
We study the shape optimization problem for the paper machine headbox which distributes a mixture of water and wood fibers in the paper making process. The aim is to find a shape which a priori ensures the given velocity profile on the outlet part. The mathematical formulation leads to the optimal control problem in which the control variable is the shape of the domain representing the header, the state problem is represented by the generalized Navier-Stokes system with nontrivial boundary conditions. This paper deals with numerical aspects of the problem.
Magrabi, Farah; Liaw, Siaw Teng; Arachi, Diana; Runciman, William; Coiera, Enrico; Kidd, Michael R
2016-11-01
To identify the categories of problems with information technology (IT) which affect patient safety in general practice. General practitioners (GPs) reported incidents online or by telephone between May 2012 and November 2013. Incidents were reviewed against an existing classification for problems associated with IT and the clinical process impacted. 87 GPs across Australia. Types of problems, consequences and clinical processes. GPs reported 90 incidents involving IT which had an observable impact on the delivery of care, including actual patient harm as well as near miss events. Practice systems and medications were the most affected clinical processes. Problems with IT disrupted clinical workflow, wasted time and caused frustration. Issues with user interfaces, routine updates to software packages and drug databases, and the migration of records from one package to another generated clinical errors that were unique to IT; some could affect many patients at once. Human factors issues gave rise to some errors that have always existed with paper records but are more likely to occur and cause harm with IT. Such errors were linked to slips in concentration, multitasking, distractions and interruptions. Problems with patient identification and hybrid records generated errors that were in principle no different to paper records. Problems associated with IT include perennial risks with paper records, but additional disruptions in workflow and hazards for patients unique to IT, occasionally affecting multiple patients. Surveillance for such hazards may have general utility, but particularly in the context of migrating historical records to new systems and software updates to existing systems. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Concurrent electromagnetic scattering analysis
NASA Technical Reports Server (NTRS)
Patterson, Jean E.; Cwik, Tom; Ferraro, Robert D.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Parker, Jay
1989-01-01
The computational power of the hypercube parallel computing architecture is applied to the solution of large-scale electromagnetic scattering and radiation problems. Three analysis codes have been implemented. A Hypercube Electromagnetic Interactive Analysis Workstation was developed to aid in the design and analysis of metallic structures such as antennas and to facilitate the use of these analysis codes. The workstation provides a general user environment for specification of the structure to be analyzed and graphical representations of the results.
Development of Composite Materials with High Passive Damping Properties
2006-05-15
Damping behavior was characterized using frequency response function analysis, and sound transmission through sandwich panels was studied using statistical energy analysis (SEA). Topics covered include modal density, finite element models of the face sheets and the core, and the analysis of damping in sandwich materials. Finite element models are generally only efficient for problems at low and middle frequencies.
Magin, Parker J; Adams, Jon; Sibbritt, David W; Joy, Elyssa; Ireland, Malcolm C
2005-10-03
To establish the prevalence and characteristics of occupational violence in Australian urban general practice, and examine practitioner correlates of violence. Cross-sectional questionnaire survey mailed to all members (n = 1085) of three urban divisions of general practice in New South Wales in August and September 2004. The three divisions were chosen to provide a range of socioeconomic status (SES) demographics. Occupational violence towards general practitioners during the previous 12 months. 528 GPs returned questionnaires (49% response rate). Of these, 63.7% had experienced violence in the previous year. The most common forms of violence were "low level" violence - verbal abuse (42.1%), property damage/theft (28.6%) and threats (23.1%). A smaller proportion of GPs had experienced "high level" violence, such as sexual harassment (9.3%) and physical abuse (2.7%). On univariate analysis, violence was significantly more likely towards female GPs (P < 0.001), less experienced GPs (P = 0.003) and GPs working in a lower SES status area (P < 0.001), and among practice populations encompassing greater social disadvantage (P = 0.006), mental health problems (P < 0.001), and drug- and alcohol-related problems (P < 0.001). Experience of violence was greater for younger GPs (P = 0.005) and those providing after-hours care (P = 0.033 for after-hours home visits). On multivariate analysis, a significant association persisted between high level violence and lower SES area (odds ratio [OR], 2.86), being female (OR, 5.87), having practice populations with more drug-related problems (OR, 5.77), and providing home visits during business hours (OR, 4.76). More experienced GPs encountered less violence (OR, 0.77) for every additional 5 years of practice. Occupational violence is a considerable problem in Australian urban general practice. Formal education programs in preventing and managing violence would be appropriate for GPs and doctors-in-training.
Is Word-Problem Solving a Form of Text Comprehension?
Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.; Wang, Amber Y.
2015-01-01
This study’s hypotheses were that (a) word-problem (WP) solving is a form of text comprehension that involves language comprehension processes, working memory, and reasoning, but (b) WP solving differs from other forms of text comprehension by requiring WP-specific language comprehension as well as general language comprehension. At the start of the 2nd grade, children (n = 206; on average, 7 years, 6 months) were assessed on general language comprehension, working memory, nonlinguistic reasoning, processing speed (a control variable), and foundational skill (arithmetic for WPs; word reading for text comprehension). In spring, they were assessed on WP-specific language comprehension, WPs, and text comprehension. Path analytic mediation analysis indicated that effects of general language comprehension on text comprehension were entirely direct, whereas effects of general language comprehension on WPs were partially mediated by WP-specific language. By contrast, effects of working memory and reasoning operated in parallel ways for both outcomes. PMID:25866461
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales
NASA Astrophysics Data System (ADS)
Karsznia, Izabela
2014-12-01
The paper covers one of the most important problems concerning context-sensitive settlement selection for the purpose of small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, hence the problem is an important and innovative challenge. It is also crucial from the practical point of view, as it is necessary to develop appropriate generalization algorithms for the General Geographic Objects Database, which is the essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas - Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection that would be useful in automation processes and would preserve the specific character of the generalized objects.
Ninth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1980-01-01
The general application of finite element methodology and the specific application of NASTRAN to a wide variety of static and dynamic structural problems are addressed. Comparisons with other approaches and new methods of analysis with NASTRAN are included.
Beyond the specific child. What is 'a child's case' in general practice?
Hølge-Hazelton, Bibi; Tulinius, Charlotte
2010-01-01
Too many abused and neglected children are being overlooked by GPs and other professionals who are in contact with the families. Some suggestions for a definition of 'a child in need' have been given, but the functionality of these definitions has not been tested in general practice. To describe the problems presented by GPs as cases with children in need during supervision, and from there to suggest an empirically-based definition of a child in need in general practice. A mixed-method evaluation design was used. Twenty-one GPs in Denmark participated in supervision groups concerning cases with children in need in general practice. The data were analysed via field notes and video recordings; by categorising cases according to sex, ethnicity, and developmental stage; thematically, using the GPs' own descriptions; and in a theoretically supported style. Analysis of the data led to the suggested definition of a case concerning 'a child in need' in general practice as one that directly or indirectly involves problems with a specific child, an as-yet unborn child, or one or both parents of a family, currently or potentially threatening the wellbeing of the family or the child. Based on this analysis, one suggestion as to why some abused and neglected children are overlooked in general practice is that GPs often have to navigate difficult indirect consultations, where there is a high risk of losing the overview.
Cope, Anwen L; Wood, Fiona; Francis, Nick A; Chestnutt, Ivor G
2015-01-01
Objectives This study aimed to produce an account of the attitudes of general practitioners (GPs) towards the management of dental conditions in general practice, and sought to explore how GPs use antibiotics in the treatment of dental problems. Design Qualitative study employing semistructured telephone interviews and thematic analysis. Participants 17 purposively sampled GPs working in Wales, of which 9 were male. The median number of years since graduation was 21. Maximum variation sampling techniques were used to ensure participants represented different Rural–Urban localities, worked in communities with varying levels of deprivation, and had differing lengths of practising career. Results Most GPs reported regularly managing dental problems, with more socioeconomically deprived patients being particularly prone to consult. Participants recognised that dental problems are not optimally managed in general practice, but had sympathy with patients experiencing dental pain who reported difficulty obtaining an emergency dental consultation. Many GPs considered antibiotics an acceptable first-line treatment for acute dental problems and reported that patients often attended expecting to receive antibiotics. GPs who reported that their usual practice was to prescribe antibiotics were more likely to prioritise patients’ immediate needs, whereas clinicians who reported rarely prescribing often did so to encourage patients to consult a dental professional. Conclusions The presentation of patients with dental problems presents challenges to GPs who report concerns about their ability to manage such conditions. Despite this, many reported frequently prescribing antibiotics for patients with dental conditions. This may contribute to both patient morbidity and the emergence of antimicrobial resistance. This research has identified the need for quantitative data on general practice consultations for dental problems and qualitative research exploring patient perspectives on reasons for consulting. The findings of these studies will inform the design of an intervention to support patients in accessing appropriate care when experiencing dental problems. PMID:26428331
Numerical analysis of the asymptotic two-point boundary value solution for N-body trajectories.
NASA Technical Reports Server (NTRS)
Lancaster, J. E.; Allemann, R. A.
1972-01-01
Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical boundary value solution applicable to a broad class of trajectory problems. In addition, the earlier first-order solutions have been extended to second-order to determine if improved accuracy is possible. Comparisons between the asymptotic solution and numerical integration for several lunar and interplanetary trajectories show that the asymptotic solution is generally quite accurate. Also, since no iterations are required, a solution to the boundary value problem is obtained in a fraction of the time required for numerically integrated solutions.
[Psychosocial adjustment in children with a cleft lip and/or palate].
Hoek, Ineke H C; Kraaimaat, Floris W; Admiraal, Ronald J C; Kuijpers-Jagtman, Anne Marie; Verhaak, Christianne M
2009-01-01
To gain insight into the psychosocial health of children aged 9 to 12 years with a cleft lip and/or palate; to determine the relation between their health and the nature and severity of the cleft as well as other individual characteristics. Descriptive, cross-sectional study. Questionnaires completed by parents, teachers and children were used to obtain information about the psychosocial health, nature and severity of the cleft lip and/or palate, and individual characteristics of 80 children. The interrelationship between these parameters was assessed using chi-square tests, single-factor analysis of variance and correlational analysis. In general, the psychosocial health of children with a cleft lip and/or palate did not differ from that of the norm groups. Parents of children with a cleft lip/and or palate reported more withdrawn or depressive behaviour in their child than parents from the norm groups. Children with a cleft lip and/or palate exhibited less rule-breaking behaviour. Teachers reported relatively more social problems. One-third of the children had learning problems. A better psychosocial health was associated with fewer speech problems but not with a more or less abnormal physical appearance. Self-image showed a negative correlation with psychosocial health problems, while learning problems showed a positive correlation. In general, the psychosocial health of children with a cleft lip and/or palate does not differ from children without this condition. However, children with a cleft lip and/or palate do exhibit more learning problems.
On Target Localization Using Combined RSS and AoA Measurements
Beko, Marko; Dinis, Rui
2018-01-01
This work revises existing solutions for a problem of target localization in wireless sensor networks (WSNs) utilizing integrated measurements, namely received signal strength (RSS) and angle of arrival (AoA). The problem of RSS/AoA-based target localization has recently become very popular in the research community, owing to its great applicability potential and relatively low implementation cost. Therefore, a comprehensive study of the state-of-the-art (SoA) solutions and their detailed analysis is presented here. The work begins by considering the SoA approaches based on convex relaxation techniques (more computationally complex in general), and then turns to less computationally complex approaches, such as those based on the generalized trust region sub-problems framework and linear least squares. A detailed analysis of the computational complexity of each solution is also reviewed, together with an extensive set of simulation results. Finally, the main conclusions are summarized, and a set of aspects and trends that might be interesting for future research in this area is identified. PMID:29671832
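The hybrid measurements reviewed are conventionally modeled as follows, using the standard log-distance path-loss and bearing models; the notation is assumed here for illustration (x is the unknown target location, a_i the i-th anchor, and n_i, m_i zero-mean noise terms) and is not taken from the survey itself.

```latex
\begin{aligned}
P_i &= P_0 - 10\,\gamma \log_{10}\!\frac{\lVert x - a_i\rVert}{d_0} + n_i
&&\text{(RSS, with path-loss exponent } \gamma\text{)},\\
\phi_i &= \operatorname{atan2}\!\left(x_2 - a_{i,2},\; x_1 - a_{i,1}\right) + m_i
&&\text{(AoA, azimuth)}.
\end{aligned}
```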
From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E
NASA Astrophysics Data System (ADS)
Berthier, Denis
In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing and we comment this result in the Sudoku case.
Calhoun, Susan L.; Fernandez-Mendoza, Julio; Vgontzas, Alexandros N.; Mayes, Susan D.; Tsaoussoglou, Marina; Rodriguez-Muñoz, Alfredo; Bixler, Edward O.
2012-01-01
Study Objectives: Although excessive daytime sleepiness (EDS) is a common problem in children, with prevalence estimates of 15%, few studies have investigated the sequelae of EDS in young children. We investigated the association of EDS with objective neurocognitive measures and parent reported learning, attention/hyperactivity, and conduct problems in a large general population sample of children. Design: Cross-sectional. Setting: Population based. Participants: 508 children from The Penn State Child Cohort. Interventions: N/A. Measurements and Results: Children underwent a 9-h polysomnogram, comprehensive neurocognitive testing, and parent rating scales. Children were divided into 2 groups: those with and without parent-reported EDS. Structural equation modeling was used to examine whether processing speed and working memory performance would mediate the relationship between EDS and learning, attention/hyperactivity, and conduct problems. Logistic regression models suggest that parent-reported learning, attention/hyperactivity, and conduct problems, as well as objective measurement of processing speed and working memory, are significant sequelae of EDS, even when controlling for AHI and objective markers of sleep. Path analysis demonstrates that processing speed and working memory performance are strong mediators of the association of EDS with learning and attention/hyperactivity problems, and, to a slightly lesser degree, mediators of the association of EDS with conduct problems. Conclusions: This study suggests that in a large general population sample of young children, parent-reported EDS is associated with neurobehavioral (learning, attention/hyperactivity, conduct) problems and poorer performance in processing speed and working memory. Impairment due to EDS in daytime cognitive and behavioral functioning can have a significant impact on children's development. Citation: Calhoun SL; Fernandez-Mendoza J; Vgontzas AN; Mayes SD; Tsaoussoglou M; Rodriguez-Muñoz A; Bixler EO. Learning, attention/hyperactivity, and conduct problems as sequelae of excessive daytime sleepiness in a general population study of young children. SLEEP 2012;35(5):627-632. PMID:22547888
Parallelizing Timed Petri Net simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1993-01-01
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
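As an illustration of the multigrid idea discussed above (not code from the Proteus study), the following minimal sketch applies a two-grid correction cycle to a 1-D Poisson problem: a few weighted-Jacobi smoothing sweeps damp the high-frequency error, the residual is transferred to a coarser grid where the correction is solved exactly, and the interpolated correction accelerates convergence of the fine-grid iterate. All function names, grid sizes, and parameter choices are illustrative assumptions.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi relaxation for -u'' = f on a uniform 1-D grid (Dirichlet BCs)."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    """Transfer the residual to the coarse grid (simple injection, for brevity)."""
    return r[::2].copy()

def prolong(e_coarse, n_fine):
    """Linear interpolation of the coarse-grid correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    return e

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                      # pre-smoothing damps high-frequency error
    r_c = restrict(residual(u, f, h))        # residual on the coarse grid
    n_c = r_c.size
    A_c = (np.diag(2*np.ones(n_c)) - np.diag(np.ones(n_c-1), 1)
           - np.diag(np.ones(n_c-1), -1)) / (2*h)**2
    A_c[0, :], A_c[-1, :] = 0, 0             # keep boundary values fixed
    A_c[0, 0], A_c[-1, -1] = 1, 1
    e_c = np.linalg.solve(A_c, r_c)          # exact coarse-grid correction
    u += prolong(e_c, u.size)                # correct the fine-grid iterate
    return smooth(u, f, h)                   # post-smoothing

n = 129                                      # fine grid (odd point count for coarsening)
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)             # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

In a full multigrid solver the coarse problem is itself treated recursively (a V- or W-cycle); the reduction in fine-grid work per converged solution is where CPU-time savings of the kind quoted in the abstract come from.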
Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
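The paper's generalized operators are not reproduced here; as a point of reference, the classical integer-order theorems that the abstract says are recovered as a special case can be written as follows (standard statements, not taken from the paper):

```latex
% Classical (integer-order) Gauss divergence theorem over a domain \Omega with
% outward unit normal \mathbf{n}, and Green's theorem in the plane; the paper's
% generalized theorems reduce to these forms when the parameters take integer values.
\[
\int_{\Omega} \nabla \cdot \mathbf{F} \, dV \;=\; \oint_{\partial \Omega} \mathbf{F} \cdot \mathbf{n} \, dS ,
\qquad
\oint_{\partial \Omega} \left( P\,dx + Q\,dy \right) \;=\;
\int_{\Omega} \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA .
\]
```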
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
Pohontsch, N; Träder, J-M; Scherer, M; Deck, R
2013-10-01
Interface problems in medical rehabilitation are a consequence of problems with communication and cooperation and a lack of information and transparency. Different stakeholders have been trying to solve these problems for years or even decades. Following a series of deficit-oriented studies, we tried to develop recommendations for possible solutions to important interface problems together with the people affected, based on a qualitative analysis of the main problem areas. Ten separate group discussions with rehabilitation patients, general practitioners and specialists in private practice, representatives of the federal pension fund and statutory health insurance, as well as clinicians from rehabilitation clinics, and 3 mixed group discussions (all of the aforementioned groups excluding rehabilitation patients) were conducted. These group discussions served to prepare a half-day final conference. All meetings were recorded and either content analyzed or summarized in protocols. The results are recommendations on strategies to reduce interface problems in medical rehabilitation: development of a rehabilitation information website for insurees and for general practitioners and specialists in private practice; changes in forms, applications, and notifications; and advanced training for general practitioners and specialists in private practice and support in detecting rehabilitation need. Due to the divided structures of care provision and increasing specialization, overcoming interface problems is one of the main challenges in the provision of medical rehabilitation. It can be met if the different stakeholders approach each other without prejudice, share rather than demarcate competencies, and are willing to strike new paths. Our recommendations represent a first step toward reaching this goal. © Georg Thieme Verlag KG Stuttgart · New York.
A survey of automated methods for sensemaking support
NASA Astrophysics Data System (ADS)
Llinas, James
2014-05-01
Complex, dynamic problems in general present a challenge for the design of analysis support systems and tools largely because there is limited reliable a priori procedural knowledge descriptive of the dynamic processes in the environment. Problem domains that are non-cooperative or adversarial introduce added difficulties involving suboptimal observational data and/or data containing the effects of deception or covertness. The fundamental nature of analysis in these environments is based on composite approaches involving mining or foraging over the evidence, discovery and learning processes, and the synthesis of fragmented hypotheses; together, these can be labeled as sensemaking procedures. This paper reviews and analyzes the features, benefits, and limitations of a variety of automated techniques that offer possible support to sensemaking processes in these problem domains.
Analysis of lane change crashes
DOT National Transportation Integrated Search
2003-03-01
This report defines the problem of lane change crashes in the United States (U.S.) based on data from the 1999 National Automotive Sampling System/General Estimates System (GES) crash database of the National Highway Traffic Safety Administration. Th...
Nomadism as a Man-Environment System
ERIC Educational Resources Information Center
Rapoport, Amos
1978-01-01
Concepts derived from general man-environment system (MES) models are applied to the specific problem of nomadic sedentarization. The analysis focuses on the manner in which residential mobility may function as a central element in nomadic cultures. (Author/MA)
General background on modeling and specifics of modeling vapor intrusion are given. Three classical model applications are described and related to the problem of petroleum vapor intrusion. These indicate the need for model calibration and uncertainty analysis. Evaluation of Bi...
RELATIVE CONTRIBUTIONS OF THREE DESCRIPTIVE METHODS: IMPLICATIONS FOR BEHAVIORAL ASSESSMENT
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods—the ABC method, the conditional probability method, and the conditional and background probability method—to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations. PMID:19949536
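As a hedged, purely illustrative sketch (not the study's procedure or data), the conditional probability method named above can be thought of as comparing the probability of a putative consequence given the behavior with the background probability of that consequence across observation intervals; the record fields and numbers below are hypothetical.

```python
# Toy ABC-style records: each interval notes whether the target behavior occurred
# and whether a candidate consequence (here, attention) followed.
records = [
    {"behavior": True,  "attention": True},
    {"behavior": True,  "attention": False},
    {"behavior": True,  "attention": True},
    {"behavior": False, "attention": False},
    {"behavior": False, "attention": True},
    {"behavior": False, "attention": False},
]

n_behavior = sum(r["behavior"] for r in records)
n_total = len(records)

# Conditional probability: P(consequence | behavior)
p_conditional = sum(r["attention"] for r in records if r["behavior"]) / n_behavior

# Background probability: P(consequence) regardless of behavior
p_background = sum(r["attention"] for r in records) / n_total

print(f"P(attention | behavior) = {p_conditional:.2f}")
print(f"P(attention)            = {p_background:.2f}")
# A conditional probability well above background is taken as descriptive evidence
# of a possible contingency between behavior and consequence.
```

The study's finding is precisely that such descriptive contingencies, however they are computed, often fail to match the function identified by an experimental functional analysis.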
Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik
2014-09-01
A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
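For orientation, the NMB computation the abstract calls easy is just a linear weighting of incremental health effects against incremental costs at an assumed willingness-to-pay threshold; the sketch below uses hypothetical numbers and is not taken from the infertility case study.

```python
# Net monetary benefit of a new treatment relative to a comparator:
#   NMB = lambda * delta_effect - delta_cost
# where lambda is the assumed societal willingness-to-pay per unit of health gain.
# All values below are purely illustrative.
wtp_threshold = 30_000.0   # willingness-to-pay per unit of health gain (hypothetical)
delta_effect = 0.12        # incremental health gain
delta_cost = 2_500.0       # incremental cost

nmb = wtp_threshold * delta_effect - delta_cost
print(f"NMB = {nmb:.0f}")  # a positive NMB favors reimbursement under this single threshold
```

SMAA relaxes exactly this structure: it allows more than two criteria and non-constant marginal rates of substitution instead of a single fixed threshold.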
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Wang, Congjian; Wang, Yaqi
Rattlesnake and MAMMOTH are the designated TREAT analysis tools currently being developed at the Idaho National Laboratory. Concurrent with development of the multi-physics, multi-scale capabilities, sensitivity analysis and uncertainty quantification (SA/UQ) capabilities are required for predictive modeling of the TREAT reactor. For steady-state SA/UQ, which is essential for setting initial conditions for the transients, generalized perturbation theory (GPT) will be used. This work describes the implementation of a PETSc-based solver for the generalized adjoint equations that constitute an inhomogeneous, rank-deficient problem. The standard approach is to use an outer iteration strategy with repeated removal of the fundamental mode contamination. The described GPT algorithm directly solves the GPT equations without the need of an outer iteration procedure by using Krylov subspaces that are orthogonal to the operator's nullspace. Three test problems are solved and provide sufficient verification for Rattlesnake's GPT capability. We conclude with a preliminary example evaluating the impact of the Boron distribution in the TREAT reactor using perturbation theory.
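The numpy sketch below is not the Rattlesnake/PETSc implementation; under stated assumptions it only illustrates the linear-algebra idea of solving an inhomogeneous, rank-deficient system while staying orthogonal to a known nullspace vector, which is the analogue of avoiding fundamental-mode contamination without an outer iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric operator with a known one-dimensional nullspace spanned by phi.
n = 8
B = rng.standard_normal((n, n))
A_full = B @ B.T + n * np.eye(n)        # symmetric positive definite
phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)
P = np.eye(n) - np.outer(phi, phi)      # projector onto the orthogonal complement of phi
A = P @ A_full @ P                      # singular operator: A @ phi = 0

# The source of the generalized adjoint-like problem must be orthogonal to the
# nullspace for the problem to be solvable; project the raw source accordingly.
s = P @ rng.standard_normal(n)

# Solve within the orthogonal complement: the minimum-norm least-squares solution
# has no component along phi (no "fundamental mode" contamination).
x, *_ = np.linalg.lstsq(A, s, rcond=None)
x = P @ x                               # enforce orthogonality against round-off

print("residual norm      :", np.linalg.norm(A @ x - s))
print("component along phi:", abs(phi @ x))
```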
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
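As a generic, hedged illustration of the parameter-adjustment step described above (not the paper's hydraulic-mechanical example), the sketch below tunes two key parameters of a toy single-degree-of-freedom model so that the difference between measured and predicted response is minimized; all values and names are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# "Measured" response: free decay of a single-DOF oscillator with true parameters.
t = np.linspace(0.0, 5.0, 200)

def response(params, t):
    zeta, wn = params                       # damping ratio and natural frequency
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

true_params = np.array([0.05, 6.0])
measured = response(true_params, t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

# Model verification step: adjust the key parameters of the candidate model so that
# the difference between measured and predicted behavior is minimized.
def residuals(params):
    return response(params, t) - measured

fit = least_squares(residuals, x0=np.array([0.10, 5.0]),
                    bounds=([1e-3, 1.0], [0.99, 20.0]))
print("estimated damping ratio and natural frequency:", fit.x)
```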
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
The objective of this project was to develop a general CFD framework for multifidelity simulations to target multiscale problems but also resilience in … Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. Related publication: Karniadakis, "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamic Research, vol. 47, 051402, 2015.
ERIC Educational Resources Information Center
Coursen, David
Estimates of the precise costs of school vandalism vary widely, but the seriousness of the problem is beyond dispute. It is possible to get a general idea of the nature and motivation of most vandals and, in doing so, begin to understand the problem and devise solutions for it. There are two basic approaches to vandalism prevention. Currently, as…
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.
1989-01-01
The progress made toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-Orbit engine hot section components is reported. The convective viscous integral formulation was derived and implemented in the general purpose computer program GP-BEST. The new convective kernel functions, in turn, necessitated the development of refined integration techniques. As a result, however, since the physics of the problem is embedded in these kernels, boundary element solutions can now be obtained at very high Reynolds number. Flow around obstacles can be solved approximately with an efficient linearized boundary-only analysis or, more exactly, by including all of the nonlinearities present in the neighborhood of the obstacle. The other major accomplishment was the development of a comprehensive fluid-structure interaction capability within GP-BEST. This new facility is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code (GP-BEST) can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach.
Computer vision for microscopy diagnosis of malaria.
Tek, F Boray; Dempster, Andrew G; Kale, Izzet
2009-07-13
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
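A minimal sketch of the general pattern recognition framework named in the abstract (acquisition, pre-processing, segmentation, classification) is given below; it runs on synthetic data, the class names and thresholds are hypothetical, and it is not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Image acquisition (here: a synthetic grey-level "thin film" image) ---
image = rng.normal(0.2, 0.05, size=(64, 64))
image[20:28, 20:28] += 0.5          # bright blob standing in for a stained object

# --- Pre-processing: simple contrast normalisation ---
image = (image - image.min()) / (image.max() - image.min())

# --- Segmentation: global threshold separating candidate objects from background ---
mask = image > 0.6

# --- Feature extraction on the segmented region ---
features = np.array([mask.sum(),                                   # area
                     image[mask].mean() if mask.any() else 0.0])   # mean intensity

# --- Pattern classification: nearest-centroid over two hypothetical classes ---
centroids = {"parasite": np.array([60.0, 0.9]),
             "artefact": np.array([15.0, 0.7])}
label = min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))
print("segmented pixels:", int(mask.sum()), "-> classified as:", label)
```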
Interlaminar stresses in composite laminates: A perturbation analysis
NASA Technical Reports Server (NTRS)
Hsu, P. W.; Herakovich, C. T.
1976-01-01
A general method of solution for an elastic balanced symmetric composite laminate subject to a uniaxial extension was developed based upon a perturbation analysis of a limiting free body containing an interfacial plane. The solution satisfies more physical requirements and boundary conditions than previous investigations, and predicts smooth continuous interlaminar stresses with no instabilities. It determines the finite maximum intensity for the interlaminar normal stress in all laminates, provides mathematical evidence for the singular stresses in angle-ply laminates, suggests the need for the experimental determination of an important problem parameter, and introduces a viable means for solving related problems of practical interest.
General aviation aircraft interior noise problem: Some suggested solutions
NASA Technical Reports Server (NTRS)
Roskam, J.; Navaneethan, R.
1984-01-01
Laboratory investigation of sound transmission through panels and the use of modern data analysis techniques applied to actual aircraft are used to determine methods to reduce general aviation interior noise. The experimental noise reduction characteristics of stiffened flat and curved panels with damping treatment are discussed. The experimental results of double-wall panels used in the general aviation industry are given. The effects of skin panel material, fiberglass insulation and trim panel material on the noise reduction characteristics of double-wall panels are investigated. With few modifications, the classical sound transmission theory can be used to design the interior noise control treatment of aircraft. Acoustic intensity and analysis procedures are included.
NASA Technical Reports Server (NTRS)
Bittker, David A.
1996-01-01
A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.
NASA Technical Reports Server (NTRS)
Basili, V. R.; Zelkowitz, M. V.
1978-01-01
In a brief evaluation of software-related considerations, it is found that suitable approaches for software development depend to a large degree on the characteristics of the particular project involved. An analysis is conducted of development problems in an environment in which ground support software is produced for spacecraft control. The amount of work involved is in the range from 6 to 10 man-years. Attention is given to a general project summary, a programmer/analyst survey, a component summary, a component status report, a resource summary, a change report, a computer program run analysis, aspects of data collection on a smaller scale, progress forecasting, problems of overhead, and error analysis.
Anger Expression Types and Interpersonal Problems in Nurses.
Han, Aekyung; Won, Jongsoon; Kim, Oksoo; Lee, Sang E
2015-06-01
The purpose of this study was to investigate the anger expression types in nurses and to analyze the differences between the anger expression types and interpersonal problems. The data were collected from 149 nurses working in general hospitals with 300 beds or more in Seoul or Gyeonggi province, Korea. For anger expression type, the anger expression scale from the Korean State-Trait Anger Expression Inventory was used. For interpersonal problems, the short form of the Korean Inventory of Interpersonal Problems Circumplex Scales was used. Data were analyzed using descriptive statistics, cluster analysis, multivariate analysis of variance, and Duncan's multiple comparisons test. Three anger expression types in nurses were found: low-anger expression, anger-in, and anger-in/control type. From the results of multivariate analysis of variance, there were significant differences between anger expression types and interpersonal problems (Wilks lambda F = 3.52, p < .001). Additionally, anger-in/control type was found to have the most difficulty with interpersonal problems by Duncan's post hoc test (p < .050). Based on this research, the development of an anger expression intervention program for nurses is recommended to establish the means of expressing the suppressed emotions, which would help the nurses experience less interpersonal problems. Copyright © 2015. Published by Elsevier B.V.
Effect Size Measure and Analysis of Single Subject Designs
ERIC Educational Resources Information Center
Society for Research on Educational Effectiveness, 2013
2013-01-01
One of the vexing problems in the analysis of SSD is in the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, the fitting of regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
Thermal analysis elements of liquefied gas storage tanks
NASA Astrophysics Data System (ADS)
Yanvarev, I. A.; Krupnikov, A. V.
2017-08-01
The task of achieving energy- and resource-efficient operation, both for oil-producing companies and for companies extracting and transporting natural gas, is closely tied to the development of liquefied petroleum gas technology. Improving the operating efficiency of liquefied-product storage facilities calls for structural, functional, and thermal analysis of tank farms, treated in the general case as complex dynamic thermal systems.
Heat-Energy Analysis for Solar Receivers
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1982-01-01
Heat-energy analysis program (HEAP) solves general heat-transfer problems, with some specific features that are "custom made" for analyzing solar receivers. Can be utilized not only to predict receiver performance under varying solar flux, ambient temperature and local heat-transfer rates but also to detect locations of hotspots and metallurgical difficulties and to predict performance sensitivity of neighboring component parameters.
Analysis of Discourse Accent and Discursive Practices I&W
2010-09-01
… events in a cultural memory. Episodic discourse encompasses general principles, concepts, symbols, and rituals used by actors to address problems in … formal training or expertise in critical discourse analysis. In addition, a proof-of-concept was conducted of an existing methodology for tracking …
Dimension Reduction for the Landau-de Gennes Model on Curved Nematic Thin Films
NASA Astrophysics Data System (ADS)
Golovaty, Dmitry; Montero, José Alberto; Sternberg, Peter
2017-12-01
We use the method of Γ-convergence to study the behavior of the Landau-de Gennes model for a nematic liquid crystalline film attached to a general fixed surface in the limit of vanishing thickness. This paper generalizes the approach in Golovaty et al. (J Nonlinear Sci 25(6):1431-1451, 2015) where we considered a similar problem for a planar surface. Since the anchoring energy dominates when the thickness of the film is small, it is essential to understand its influence on the structure of the minimizers of the limiting energy. In particular, the anchoring energy dictates the class of admissible competitors and the structure of the limiting problem. We assume general weak anchoring conditions on the top and the bottom surfaces of the film and strong Dirichlet boundary conditions on the lateral boundary of the film when the surface is not closed. We establish a general convergence result to an energy defined on the surface that involves a somewhat surprising remnant of the normal component of the tensor gradient. Then we exhibit one effect of curvature through an analysis of the behavior of minimizers to the limiting problem when the substrate is a frustum.
NASA Astrophysics Data System (ADS)
Domin, Daniel S.
1999-01-01
The science laboratory instructional environment is ideal for fostering the development of problem-solving, manipulative, and higher-order thinking skills: the skills needed by today's learner to compete in an ever increasing technology-based society. This paper reports the results of a content analysis of ten general chemistry laboratory manuals. Three experiments from each manual were examined for evidence of higher-order cognitive activities. Analysis was based upon the six major cognitive categories of Bloom's Taxonomy of Educational Objectives: knowledge, comprehension, application, analysis, synthesis, and evaluation. The results of this study show that the overwhelming majority of general chemistry laboratory manuals provide tasks that require the use of only the lower-order cognitive skills: knowledge, comprehension, and application. Two of the laboratory manuals were disparate in having activities that utilized higher-order cognition. I describe the instructional strategies used within these manuals to foster higher-order cognitive development.
An equivalent domain integral method for three-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1991-01-01
A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III representing the severity of the crack front in three modes of deformations. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, were analyzed, and results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.
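Written out, the mode decomposition referred to in the abstract is the standard additive split of the total J integral:

```latex
% Mode decomposition of the total J integral referenced in the abstract:
\[
J \;=\; J_{I} + J_{II} + J_{III},
\]
% where the terms measure the severity of the crack front in the opening,
% in-plane shear, and anti-plane shear deformation modes, respectively.
```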
General-purpose abductive algorithm for interpretation
NASA Astrophysics Data System (ADS)
Fox, Richard K.; Hartigan, Julie
1996-11-01
Abduction, inference to the best explanation, is an information-processing task that is useful for solving interpretation problems such as diagnosis, medical test analysis, legal reasoning, theory evaluation, and perception. The task is a generative one in which an explanation comprising domain hypotheses is assembled and used to account for given findings. The explanation is taken to be an interpretation as to why the findings have arisen within the given situation. Research in abduction has led to the development of a general-purpose computational strategy which has been demonstrated on all of the above types of problems. This abduction strategy can be performed in layers so that different types of knowledge can come together in deriving an explanation at different levels of description. Further, the abduction strategy is tractable and offers a very useful tradeoff between confidence in the explanation and completeness of the explanation. This paper will describe this computational strategy for abduction and demonstrate its usefulness towards perceptual problems by examining problem-solving systems in speech recognition and natural language understanding.
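As a hedged toy illustration (not the paper's layered strategy), explanation assembly can be sketched as a greedy, set-cover-like loop that repeatedly adds the hypothesis accounting for the most still-unexplained findings; the hypotheses, findings, and scores below are invented.

```python
# Each candidate hypothesis explains a set of findings and carries a plausibility score.
hypotheses = {
    "flu":     {"explains": {"fever", "cough"},      "plausibility": 0.7},
    "allergy": {"explains": {"cough", "itchy_eyes"}, "plausibility": 0.5},
    "measles": {"explains": {"fever", "rash"},       "plausibility": 0.2},
}
findings = {"fever", "cough", "rash"}

explanation, remaining = [], set(findings)

# Greedy assembly: repeatedly add the hypothesis that explains the most remaining
# findings, breaking ties by plausibility, until nothing new can be explained.
while remaining:
    best = max(hypotheses, key=lambda h: (len(hypotheses[h]["explains"] & remaining),
                                          hypotheses[h]["plausibility"]))
    gain = hypotheses[best]["explains"] & remaining
    if not gain:
        break                        # stop early: completeness traded for confidence
    explanation.append(best)
    remaining -= gain

print("explanation:", explanation)
print("unexplained findings:", remaining)
```

Stopping once no hypothesis adds coverage is one simple way to trade completeness of the explanation against confidence in it, in the spirit of the tradeoff mentioned above.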
An Emerging New Risk Analysis Science: Foundations and Implications.
Aven, Terje
2018-05-01
To solve real-life problems, such as those related to technology, health, security, or climate change, and make suitable decisions, risk is nearly always a main issue. Different types of sciences are often supporting the work, for example, statistics, natural sciences, and social sciences. Risk analysis approaches and methods are also commonly used, but risk analysis is not broadly accepted as a science in itself. A key problem is the lack of explanatory power and large uncertainties when assessing risk. This article presents an emerging new risk analysis science based on novel ideas and theories on risk analysis developed in recent years by the risk analysis community. It builds on a fundamental change in thinking, from the search for accurate predictions and risk estimates, to knowledge generation related to concepts, theories, frameworks, approaches, principles, methods, and models to understand, assess, characterize, communicate, and (in a broad sense) manage risk. Examples are used to illustrate the importance of this distinct/separate risk analysis science for solving risk problems, supporting science in general and other disciplines in particular. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
Schmidhuber, Jürgen
2013-01-01
Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay’s ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel’s sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013). PMID:23761771
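The sketch below is a drastically simplified, hedged rendering of the greedy loop described above, not Schmidhuber's implementation: the solver is a lookup table, tasks are input-output pairs drawn from a fixed external stream rather than invented by program search, and a "modification" adds a single table entry that must preserve the whole previously learned repertoire.

```python
# Toy rendering of the PowerPlay-style loop (all names and structures hypothetical).
solver = {}            # current problem solver
repertoire = []        # tasks provably solved so far

def solves(candidate_solver, task):
    x, y = task
    return candidate_solver.get(x) == y

def powerplay_step(task_stream):
    """Search for a (new task, solver modification) pair; accept it only if the
    modified solver solves every previously learned task plus the new one."""
    for task in task_stream:
        if solves(solver, task):
            continue                                # not novel: already solved
        x, y = task
        candidate = dict(solver)
        candidate[x] = y                            # simplest possible modification
        if all(solves(candidate, t) for t in repertoire) and solves(candidate, task):
            solver.clear()
            solver.update(candidate)
            repertoire.append(task)
            return task
    return None                                     # no acceptable pair found

# In the real framework tasks are invented by program search; here they come from a
# fixed external stream (squaring small integers) purely for brevity.
stream = [(n, n * n) for n in range(5)]
while (learned := powerplay_step(stream)) is not None:
    print("learned new task:", learned)
print("repertoire size:", len(repertoire))
```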
On-Orbit Range Set Applications
NASA Astrophysics Data System (ADS)
Holzinger, M.; Scheeres, D.
2011-09-01
History and methodology of Δv range set computation are briefly reviewed, followed by a short summary of the Δv optimal spacecraft servicing problem literature. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric Traveling Salesman Problem (TSP), which in general is an NP-hard problem. The Δv-TSP is argued to be quite similar to the Euclidean-TSP, for which approximate optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.
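To make the TSP connection concrete, the hedged sketch below builds a tour with the plain nearest-neighbour heuristic on random planar points standing in for a Δv metric; it is illustrative only and is not the placement or tour-design method of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 10.0, size=(8, 2))   # stand-in "orbital slots" in a 2-D plane
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

def nearest_neighbour_tour(d, start=0):
    """Greedy tour construction; for metric instances this gives a feasible
    (not optimal) tour, illustrating polynomial-time approximation of the TSP."""
    n = d.shape[0]
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: d[last, j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbour_tour(dist)
length = sum(dist[tour[i], tour[i + 1]] for i in range(len(tour) - 1)) + dist[tour[-1], tour[0]]
print("tour:", tour, "closed length:", round(length, 2))
```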
Nonequilibrium Statistical Operator Method and Generalized Kinetic Equations
NASA Astrophysics Data System (ADS)
Kuzemsky, A. L.
2018-01-01
We consider some principal problems of nonequilibrium statistical thermodynamics in the framework of the Zubarev nonequilibrium statistical operator approach. We present a brief comparative analysis of some approaches to describing irreversible processes based on the concept of nonequilibrium Gibbs ensembles and their applicability to describing nonequilibrium processes. We discuss the derivation of generalized kinetic equations for a system in a heat bath. We obtain and analyze a damped Schrödinger-type equation for a dynamical system in a heat bath. We study the dynamical behavior of a particle in a medium taking the dissipation effects into account. We consider the scattering problem for neutrons in a nonequilibrium medium and derive a generalized Van Hove formula. We show that the nonequilibrium statistical operator method is an effective, convenient tool for describing irreversible processes in condensed matter.
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indexes are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
Computer technologies and institutional memory
NASA Technical Reports Server (NTRS)
Bell, Christopher; Lachman, Roy
1989-01-01
NASA programs for manned space flight are in their 27th year. Scientists and engineers who worked continuously on the development of aerospace technology during that period are approaching retirement. The resulting loss to the organization will be considerable. Although this problem is general to the NASA community, the problem was explored in terms of the institutional memory and technical expertise of a single individual in the Man-Systems division. The main domain of the expert was spacecraft lighting, which became the subject area for analysis in these studies. The report starts with an analysis of the cumulative expertise and institutional memory of technical employees of organizations such as NASA. A set of solutions to this problem are examined and found inadequate. Two solutions were investigated at length: hypertext and expert systems. Illustrative examples were provided of hypertext and expert system representation of spacecraft lighting. These computer technologies can be used to ameliorate the problem of the loss of invaluable personnel.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range over which such a simplification is acceptable. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances computed from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear. The only condition is the accuracy of the observations, which cannot be too low. Generally, this is not a problem with present geodetic instruments.
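The comparison described above can be reproduced in miniature with the hedged sketch below: first-order (Gaussian) propagation of variance versus Monte Carlo sampling for a simple non-linear function of observed coordinates (here a plane distance between two points); the coordinates and standard deviations are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coordinates of two points (metres) and their (uncorrelated) standard deviations.
x = np.array([100.0, 250.0, 400.0, 310.0])     # [x1, y1, x2, y2]
sigma = np.array([0.01, 0.01, 0.02, 0.02])

def distance(v):
    return np.hypot(v[2] - v[0], v[3] - v[1])

# First-order propagation of variance (Gaussian formula): sigma_d^2 = J diag(sigma^2) J^T
d0 = distance(x)
J = np.array([-(x[2] - x[0]), -(x[3] - x[1]), (x[2] - x[0]), (x[3] - x[1])]) / d0
sigma_linear = np.sqrt(np.sum((J * sigma) ** 2))

# Monte Carlo propagation: sample the observations and evaluate the non-linear function.
samples = x + sigma * rng.standard_normal((50_000, 4))
sigma_mc = np.std(np.apply_along_axis(distance, 1, samples))

print(f"first-order sigma : {sigma_linear:.5f} m")
print(f"Monte Carlo sigma : {sigma_mc:.5f} m")
```

With observation accuracies of this order the two results agree closely, in line with the paper's conclusion that the first-order law remains applicable for sufficiently accurate observations.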
Using a general problem-solving strategy to promote transfer.
Youssef-Shalala, Amina; Ayres, Paul; Schubert, Carina; Sweller, John
2014-09-01
Cognitive load theory was used to hypothesize that a general problem-solving strategy based on a make-as-many-moves-as-possible heuristic could facilitate problem solutions for transfer problems. In four experiments, school students were required to learn about a topic through practice with a general problem-solving strategy, through a conventional problem solving strategy or by studying worked examples. In Experiments 1 and 2 using junior high school students learning geometry, low knowledge students in the general problem-solving group scored significantly higher on near or far transfer tests than the conventional problem-solving group. In Experiment 3, an advantage for a general problem-solving group over a group presented worked examples was obtained on far transfer tests using the same curriculum materials, again presented to junior high school students. No differences between conditions were found in Experiments 1, 2, or 3 using test problems similar to the acquisition problems. Experiment 4 used senior high school students studying economics and found the general problem-solving group scored significantly higher than the conventional problem-solving group on both similar and transfer tests. It was concluded that the general problem-solving strategy was helpful for novices, but not for students that had access to domain-specific knowledge. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Research in geosciences policy
NASA Technical Reports Server (NTRS)
Brunner, Ronald D.
1992-01-01
The general task was to look beyond the adverse physical impacts and to define the policy problem. In order for policy actions to be effective, they must address the right policy problems, which will be different from and broader than the physical problems. We will work on defining the policy problems with a view to indicating how practical solutions might be implemented. In particular, public officials need advice on what should be said, and done, and for what purposes. That advice needs to be based on systematic analysis of: (1) the scholarly literature in the social sciences, and related disciplines; (2) the changing content of the policy debate at the center of attention; and (3) how citizens perceive and understand issues related to global change. We will conduct this analysis. Chapters 1 and 2 each report work on defining the policy problem and analyzing the scholarly literature. Chapters 3 and 4, respectively, address the policy debate and citizen viewpoints on issues related to global change.
Transpiration cooling of hypersonic blunt bodies with finite rate surface reactions
NASA Technical Reports Server (NTRS)
Henline, William D.
1989-01-01
The convective heat flux blockage achieved by transpiration cooling on blunt-body hypersonic vehicles is presented. The general problem of mass addition to laminar boundary layers is reviewed. Results of similarity analysis of the boundary layer problem are provided for surface heat flux with transpiration cooling. Detailed non-similar results are presented from the numerical program, BLIMPK. Comparisons are made with the similarity theory. The effects of surface catalysis are investigated.
A general multiscale framework for the emergent effective elastodynamics of metamaterials
NASA Astrophysics Data System (ADS)
Sridhar, A.; Kouznetsova, V. G.; Geers, M. G. D.
2018-02-01
This paper presents a general multiscale framework towards the computation of the emergent effective elastodynamics of heterogeneous materials, to be applied for the analysis of acoustic metamaterials and phononic crystals. The generality of the framework is exemplified by two key characteristics. First, the underlying formalism relies on the Floquet-Bloch theorem to derive a robust definition of scales and scale separation. Second, unlike most homogenization approaches that rely on a classical volume average, a generalized homogenization operator is defined with respect to a family of particular projection functions. This yields a generalized macro-scale continuum, instead of the classical Cauchy continuum. This enables (in a micromorphic sense) to homogenize the rich dispersive behavior resulting from both Bragg scattering and local resonance. For an arbitrary unit cell, the homogenization projection functions are constructed using the Floquet-Bloch eigenvectors obtained in the desired frequency regime at select high symmetry points, which effectively resolves the emergent phenomena dominating that regime. Furthermore, a generalized Hill-Mandel condition is proposed that ensures power consistency between the homogenized and full-scale model. A high-order spatio-temporal gradient expansion is used to localize the multiscale problem leading to a series of recursive unit cell problems giving the appropriate micro-mechanical corrections. The developed multiscale method is validated against standard numerical Bloch analysis of the dispersion spectra of example unit cells encompassing multiple high-order branches generated by local resonance and/or Bragg scattering.
NASA Astrophysics Data System (ADS)
Tohir, M.; Abidin, Z.; Dafik; Hobri
2018-04-01
Arithmetic is one of the topics in mathematics that deals with logic and a detailed process of generalizing formulas. Creativity and flexibility are needed in generalizing the formula of an arithmetic series. This research aimed at analyzing students' creative thinking skills in generalizing arithmetic series. The triangulation method and research-based learning were used in this research. The subjects were students of the Master Program of Mathematics Education in the Faculty of Teacher Training and Education at Jember University. The data were collected by giving assignments to the students. Data collection was done by giving the students an open problem-solving task and a documentation study, in which they arranged a generalization pattern based on the formula of a function dependent on i and of a function dependent on i and j. The students then finished the next problem-solving task, constructing arithmetic generalization patterns based on the function formula that depends on i and i + n and on the sum formula of the functions dependent on i and j of the arithmetic series compiled. The data analysis technique used in this study was the Miles and Huberman analysis model. Based on the data analysis of task 1, the levels of students' creative thinking skill were classified as follows: 22.22% of the students were categorized as "not creative", 38.89% as "less creative", 22.22% as "sufficiently creative", and 16.67% as "creative". By contrast, the data analysis of task 2 found that 22.22% of the students were categorized as "sufficiently creative", 44.44% as "creative", and 33.33% as "very creative". These results can serve as a basis for teaching references and for actualizing a better teaching model in order to increase students' creative thinking skills.
Dietrich, Andrea; Ormel, Johan; Buitelaar, Jan K; Verhulst, Frank C; Hoekstra, Pieter J; Hartman, Catharina A
2013-08-01
Anxiety and depressive problems have often been related to higher hypothalamic-pituitary-adrenal (HPA)-axis activity (basal morning cortisol levels and cortisol awakening response [CAR]) and externalizing problems to lower HPA-axis activity. However, associations appear weaker and more inconsistent than initially assumed. Previous studies from the Tracking Adolescents Individual Lives Study (TRAILS) suggested sex-differences in these relationships and differential associations with specific dimensions of depressive problems in a general population sample of children (10-12 years). Using the TRAILS population sample (n=1604), we tested hypotheses on the association between single day cortisol (basal morning levels and CAR) and specifically constructed dimensions of anxiety (cognitive versus somatic), depressive (cognitive-affective versus somatic), and externalizing problems (reactive versus proactive aggression), and explored the modifying role of sex. Moreover, we repeated analyses in an independent same-aged clinic-referred sample (n=357). Structural Equation Modeling was used to investigate the association between cortisol and higher- and lower-order (thus, broad and specific) problem dimensions based on self-reports in an integrated model. Overall, findings were consistent across the population and clinic-referred samples, as well as with the existing literature. Most support was found for higher cortisol (mainly CAR) in relation to depressive problems. However, in general, associations were weak in both samples. Therefore, the present results shed doubt on the relevance of single day cortisol measurements for problem behaviors in the milder range. Associations may be stronger in more severe or persistent psychopathology. Copyright © 2012 Elsevier Ltd. All rights reserved.
Analysis of the single-vehicle cyclic inventory routing problem
NASA Astrophysics Data System (ADS)
Aghezzaf, El-Houssaine; Zhong, Yiqing; Raa, Birger; Mateo, Manel
2012-11-01
The single-vehicle cyclic inventory routing problem (SV-CIRP) consists of a repetitive distribution of a product from a single depot to a selected subset of customers. For each customer, selected for replenishments, the supplier collects a corresponding fixed reward. The objective is to determine the subset of customers to replenish, the quantity of the product to be delivered to each and to design the vehicle route so that the resulting profit (difference between the total reward and the total logistical cost) is maximised while preventing stockouts at each of the selected customers. This problem appears often as a sub-problem in many logistical problems. In this article, the SV-CIRP is formulated as a mixed-integer program with a nonlinear objective function. After a thorough analysis of the structure of the problem and its features, an exact algorithm for its solution is proposed. This exact algorithm requires only solutions of linear mixed-integer programs. Values of a savings-based heuristic for this problem are compared to the optimal values obtained for a set of some test problems. In general, the gap may get as large as 25%, which justifies the effort to continue exploring and developing exact and approximation algorithms for the SV-CIRP.
Association of Problem Gambling with Type of Gambling Among Italian General Population.
Scalese, Marco; Bastiani, Luca; Salvadori, Stefano; Gori, Mercedes; Lewis, Isabella; Jarre, Paolo; Molinaro, Sabrina
2016-09-01
The origin of gambling disorders is uncertain; however, research has shown a tendency to focus on specific types of games as a potentially important risk factor. The principal aim of this study is to examine the relationships between types of gambling practices and gambling disorder. The data were extracted from IPSAD-Italia(®) 2010-2011 (Italian Population Survey on Alcohol and other Drugs), a survey of the Italian general population which collects socio-cultural information and information about the use of drugs, legal substances, and gambling habits. In order to identify the "problem gambler" we used the Problem Gambling Severity Index. Three groups are considered in this analysis: no-risk gamblers, low-risk gamblers, and moderate-risk/problem gamblers. Type of gambling practice was considered for two types of gambler: one-game players and multi-game players. Among multi-game players, 1.9% were considered problem gamblers, whereas only 0.6% of one-game players were problem gamblers (p < 0.001). The percentage of players who were low- and moderate-risk gamblers was approximately double among multi-game players, with 14.4% low-risk and 5.8% moderate-risk, compared with 7.7% low-risk and 2.5% moderate-risk among one-game players. Results of ordinal logistic regression analysis confirmed that a higher level of gambling severity was associated with multi-game players (OR = 2.23, p < 0.0001). Video-poker/slot-machines show the highest association with gambling severity among both one-game players and multi-game players, with ORs of 4.3 and 4.5, respectively. These findings suggest a popular perception of risk associated with this type of gambling for the development of gambling problems.
Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update
NASA Technical Reports Server (NTRS)
1971-01-01
Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.
2008-02-01
… combined thermal g effect and initial current field. The model is implemented using an ABAQUS user element subroutine and verified against the experimental … The proposed model is implemented with the ABAQUS general-purpose finite element program using the thermal-displacement analysis option. ABAQUS and other commercially available finite element codes do not have the capability to solve the general electromigration problem directly. Thermal …
Quantitative local analysis of nonlinear systems
NASA Astrophysics Data System (ADS)
Topcu, Ufuk
This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures to quantify systems' robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations to sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (due to local analysis rather than global) and their size grows rapidly with state/uncertainty space dimension. Our approach is based on exploiting system theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology incorporating simulation data in formal proof construction enabling more reliable and efficient search for robustness and performance certificates compared to the direct use of general purpose solvers. This technique is adapted both to region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent rather than parameter-dependent certificates. The conservatism is simply reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate proposed methods on several small/medium size examples in each chapter and apply each method to a benchmark example with an uncertain short period pitch axis model of an aircraft. Additional practical issues leading to a more rigorous basis for the proposed methodology as well as promising further research topics are also addressed. We show that stability of linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools stringently depends on the availability of scalable/hierarchical algorithms. As an initial step toward this direction, we propose a local small-gain theorem and apply it to stability region analysis in the presence of unmodeled dynamics.
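For reference, the Lyapunov-type characterization that such sum-of-squares relaxations start from can be stated in the following standard, generic form; this is textbook material, not the thesis's exact optimization problem.

```latex
% Standard Lyapunov-type region-of-attraction estimate that SOS programming relaxes.
% For \dot{x} = f(x) with f(0) = 0, find a polynomial V and a level c > 0 such that
\[
V(0)=0, \qquad V(x) > 0 \ \ (x \neq 0), \qquad
\nabla V(x)\cdot f(x) < 0 \ \ \text{for all } x \in \{x : V(x)\le c\}\setminus\{0\},
\]
% with the sublevel set \{x : V(x) \le c\} bounded. Then this sublevel set is an
% invariant subset of the region of attraction of the origin; the S-procedure turns
% the set containment into a sufficient sum-of-squares condition that a semidefinite
% programming solver can certify.
```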
Tenth NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1982-01-01
The development of the NASTRAN computer program, a general purpose finite element computer code for structural analysis, was discussed. The application and development of NASTRAN are presented in the following topics: improvements and enhancements; development of pre- and postprocessors; interactive review system; the use of harmonic expansions in magnetic field problems; improving a dynamic model with test data using Linwood; solution of axisymmetric fluid-structure interaction problems; large displacements and stability analysis of nonlinear propeller structures; prediction of bead area contact load at the tire-wheel interface; elastic-plastic analysis of an overloaded breech ring; finite element solution of torsion and other 2-D Poisson equations; new capability for elastic aircraft airloads; usage of substructuring analysis in the Get Away Special program; solving symmetric structures with nonsymmetric loads; and evaluation and reduction of errors induced by the Guyan transformation.
Fourteenth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1986-01-01
The proceedings of a colloquium are presented along with technical papers contributed during the conference. Reviewed are general applications of finite element methodology and the specific application of the NASA Structural Analysis System, NASTRAN, to a variety of static and dynamic structural problems.
Three-Dimensional Orthogonal Co-ordinates
ERIC Educational Resources Information Center
Astin, J.
1974-01-01
A systematic approach to general orthogonal co-ordinates, suitable for use near the end of a beginning vector analysis course, is presented. It introduces students to tensor quantities and shows how equations and quantities needed in classical problems can be determined. (Author/LS)
Stability and performance tradeoffs in bi-lateral telemanipulation
NASA Technical Reports Server (NTRS)
Hannaford, Blake
1989-01-01
Kinesthetic force feedback provides a measurable increase in remote manipulation system performance. Intensive computation time requirements or operation under conditions of time delay can cause serious stability problems in control-system design. Here, a simplified linear analysis of this stability problem is presented for the forward-flow generalized architecture, applying the hybrid two-port representation to express the loop gain of the traditional master-slave architecture, which can be subjected to similar analysis. The hybrid two-port representation is also used to express the effect, on the fidelity of manipulation or feel, of one design approach used to stabilize the forward-flow architecture. The results suggest that, when local force feedback at the slave side is used to reduce manipulator stability problems, a price is paid in terms of telemanipulation fidelity.
Ståhlnacke, Katri; Söderfeldt, Björn
2013-01-01
Dental materials are perceived as a health problem by some people, although scientists do not agree about possible causes of such problems. The aim of this paper was to gain a deeper knowledge and understanding of experiences from living with health problems attributed to dental materials. Topics addressed were the type of problem (both general and oral health), the perceived causes of the problems, their experienced effect on life, and the reception by health professionals. Persons who, in a previous large questionnaire study, had answered that they had experienced troubles from dental materials and had agreed to answer follow-up questions were contacted with a request to take part in an interview study. Eleven individual interviews were held. The interviews were transcribed verbatim and the material was analysed according to the Qualitative Content Analysis method. Meaning units were extracted and condensed into a number of codes, which were combined into subcategories, categories, and themes. Four themes were identified: 1) Long-term oral, mental, and somatic difficulties of varying character, caused by dental amalgam. 2) Problems treated mainly by replacement of dental material in fillings. 3) Powerful effects on life, mostly negative. 4) The reception by health professionals was generally good, but with elements of encounters where they felt treated with nonchalance and lack of respect. In conclusion, people who attributed their health difficulties to dental materials had a complex range of problems, and the perception was that amalgam/mercury was the cause of the troubles. The reception from health professionals was perceived as generally good, although with occasional negative experiences.
ERIC Educational Resources Information Center
Burke, John C.
2012-01-01
The objective of my dissertation is to create a general approach to evaluating IS/IT projects using Real Option Analysis (ROA). This is an important problem because an IT Project Portfolio (ITPP) can represent hundreds of projects, millions of dollars of investment and hundreds of thousands of employee hours. Therefore, any advance in the…
ANTECEDENT VERSUS CONSEQUENT EVENTS AS PREDICTORS OF PROBLEM BEHAVIOR
Camp, Erin M; Iwata, Brian A; Hammond, Jennifer L; Bloom, Sarah E
2009-01-01
Comparisons of results from descriptive and functional analyses of problem behavior generally have shown poor correspondence. Most descriptive analyses have focused on relations between consequent events and behavior, and it has been noted that attention is a common consequence for problem behavior even though it may not be a functional reinforcer. Because attention may be prescribed simply as a means of stopping serious problem behavior, it is possible that naturally occurring antecedent events (establishing operations) might be better predictors of problem behavior than consequences. We conducted descriptive and functional analyses of the problem behaviors of 7 participants. Conditional probabilities based on combined antecedent and consequent events showed correspondence with the functional analysis data for 4 of the 7 participants, but antecedent events were no better than consequent events in identifying the function of problem behavior. PMID:19949538
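As a rough illustration of how antecedent- and consequent-based conditional probabilities can be computed from coded descriptive-analysis records, the sketch below tabulates P(problem behavior | antecedent) and P(consequence | problem behavior) from a toy observation log. The column names and event codes are hypothetical and are not the coding scheme used in the study.

```python
import pandas as pd

# One row per observation interval (hypothetical coding scheme).
log = pd.DataFrame({
    "antecedent":  ["demand", "low_attention", "demand", "none", "low_attention", "demand"],
    "behavior":    [1, 1, 0, 0, 1, 1],                      # problem behavior observed in the interval?
    "consequence": ["escape", "attention", "none", "none", "attention", "escape"],
})

# P(problem behavior | antecedent condition)
p_beh_given_antecedent = log.groupby("antecedent")["behavior"].mean()
print(p_beh_given_antecedent)

# P(consequence | problem behavior): which event most often follows the behavior
p_cons_given_behavior = (log[log["behavior"] == 1]["consequence"]
                         .value_counts(normalize=True))
print(p_cons_given_behavior)
```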
Highly accurate adaptive finite element schemes for nonlinear hyperbolic problems
NASA Astrophysics Data System (ADS)
Oden, J. T.
1992-08-01
This document is a final report of research activities supported under General Contract DAAL03-89-K-0120 between the Army Research Office and the University of Texas at Austin from July 1, 1989 through June 30, 1992. The project supported several Ph.D. students over the contract period, two of which are scheduled to complete dissertations during the 1992-93 academic year. Research results produced during the course of this effort led to 6 journal articles, 5 research reports, 4 conference papers and presentations, 1 book chapter, and two dissertations (nearing completion). It is felt that several significant advances were made during the course of this project that should have an impact on the field of numerical analysis of wave phenomena. These include the development of high-order, adaptive, hp-finite element methods for elastodynamic calculations and high-order schemes for linear and nonlinear hyperbolic systems. Also, a theory of multi-stage Taylor-Galerkin schemes was developed and implemented in the analysis of several wave propagation problems, and was configured within a general hp-adaptive strategy for these types of problems. Further details on research results and on areas requiring additional study are given in the Appendix.
The pattern of anxiolytic and hypnotic management by Australian general practice trainees.
Holliday, Simon M; Morgan, Simon; Tapley, Amanda; Henderson, Kim M; Dunlop, Adrian J; van Driel, Mieke L; Spike, Neil A; McArthur, Lawrence A; Ball, Jean; Oldmeadow, Christopher J; Magin, Parker J
2017-03-01
Guidelines recommend anxiolytics and hypnotics (A/H) as second-line, short-term medications. We aimed to establish prevalence and associations of A/H prescribing by Australian general practice (GP) trainees. A cross-sectional analysis from a cohort study of vocational trainees from four GP Regional Training Providers during 2010-2013. General practice trainees act as independent practitioners (including for prescribing purposes) while having recourse to advice from a GP supervisor. Practice and trainee demographic data were collected as well as patient, clinical and educational data from 60 consecutive consultations of each trainee each training term. Analysis was at the level of individual problem managed, with the outcome factor being prescription of any anxiolytic or hypnotic. Overall, 645 registrars (response rate 94.0%) prescribed 68 582 medications in 69 621 consultations (with 112 890 problems managed). A/Hs were prescribed for 1.3% of problems managed and comprised 2.2% of all prescriptions. They were prescribed particularly for insomnia (28.2%) or anxiety (21.8%), but also for many 'off-label' indications. Significant associations of A/H prescriptions were: patient-level (greater age, Aboriginal and Torres Strait Islander status, English-speaking background, being new to the trainee but not to the practice); trainee-level (male) and consultation-level (longer duration, pre-existing problem, specialist referral not being made). Prescribing was significantly lower in one of the four Regional Training Providers. GP trainees, inconsistent with most guideline recommendations, prescribe A/Hs mainly as maintenance therapy to unfamiliar and older patients. Our results suggest that changes in management approaches are needed which may be facilitated by support for psychotherapeutic training. [Holliday SM, Morgan S, Tapley A, Henderson KM, Dunlop AJ, van Driel ML, Spike NA, McArthur LA, Ball J, Oldmeadow CJ, Magin PJ. The pattern of anxiolytic and hypnotic management by Australian general practice trainees. Drug Alcohol Rev 2017;36:261-269]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
Some historical relationships between science and technology with implications for behavior analysis
Moxley, Roy A.
1989-01-01
The relationship between science and technology is examined in terms of some implications for behavior analysis. Problems result when this relationship is seen as one in which science generally begets technology in a one-way, or hierarchical, relationship. These problems are not found when the relationship between science and technology is seen as two-way, or symmetrical, within a larger context of relationships. Some historical examples are presented. Collectively, these and other examples in the references weaken the case for a prevailing one-way, hierarchical relationship and strengthen the case for a two-way, symmetrical relationship. In addition to being more accurate historically, the symmetrical relationship is also more consistent with the principles of behavior analysis. PMID:22478016
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN was developed to aid the control system analyst by providing a self consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPAK for eigen-analysis and from LINPAK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements. There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to insure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
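Two of the capabilities listed above, the generalized eigenvalue problem and integrals involving the matrix exponential, can be reproduced with modern libraries. The sketch below is not SAMSAN code; it is a minimal NumPy/SciPy illustration on an arbitrary example system, using the standard augmented-matrix identity expm([[A, B], [0, 0]]·T) = [[e^(AT), ∫₀ᵀ e^(As) ds · B], [0, I]] for the exponential integral.

```python
import numpy as np
from scipy.linalg import eig, expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
M = np.array([[2.0, 0.0], [0.0, 1.0]])        # "mass"-type matrix for the generalized problem

# Generalized eigenvalue problem A v = lambda M v (without SAMSAN's balancing and grading).
evals, evecs = eig(A, M)
print("generalized eigenvalues:", evals)

# Matrix exponential and the integral G = int_0^T exp(A s) ds B via one augmented exponential.
T = 0.5
n, m = A.shape[0], B.shape[1]
aug = np.zeros((n + m, n + m))
aug[:n, :n] = A
aug[:n, n:] = B
E = expm(aug * T)
Phi = E[:n, :n]          # exp(A T)
G   = E[:n, n:]          # int_0^T exp(A s) ds @ B (useful for discretizing x' = Ax + Bu)
print("Phi =\n", Phi, "\nG =\n", G)
```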
The Prisoner Problem--A Generalization.
ERIC Educational Resources Information Center
Gannon, Gerald E.; Martelli, Mario U.
2000-01-01
Presents a generalization to the classic prisoner problem, which is inherently interesting and has a solution within the reach of most high school mathematics students. Suggests the problem as a way to emphasize to students the final step in a problem-solver's tool kit, considering possible generalizations when a particular problem has been…
Group decision-making techniques for natural resource management applications
Coughlan, Beth A.K.; Armour, Carl L.
1992-01-01
This report is an introduction to decision analysis and problem-solving techniques for professionals in natural resource management. Although these managers are often called upon to make complex decisions, their training in the natural sciences seldom provides exposure to the decision-making tools developed in management science. Our purpose is to begin to fill this gap. We present a general analysis of the pitfalls of group problem solving and suggestions for improved interactions, followed by the specific techniques. Selected techniques are illustrated. The material is easy to understand and apply without previous training or excessive study and is applicable to natural resource management issues.
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. A complete analysis of these formulas is also provided, including some new results about the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.
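For reference, the standard Legendre-Gauss-Lobatto nodes that the paper argues against can be computed as the endpoints ±1 together with the roots of P'_N, with weights w_j = 2 / (N(N+1) P_N(x_j)²). The sketch below computes them with NumPy; the improved generalized-Gauss nodes proposed in the paper are not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gauss_lobatto(N):
    """Legendre-Gauss-Lobatto nodes and weights with N+1 points on [-1, 1]."""
    PN = Legendre.basis(N)
    interior = PN.deriv().roots()                    # roots of P_N'
    x = np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w

x, w = gauss_lobatto(6)
print(np.sum(w))                  # quadrature weights sum to 2
print(np.dot(w, x**4), 2.0 / 5.0) # exact for polynomials up to degree 2N-1
```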
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled-diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimization concepts and the selection of the design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
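A greatly reduced sketch of the same design loop is shown below: a general-purpose constrained optimizer driving a stand-in "analysis package". The objective and constraint functions are arbitrary smooth placeholders for the geometry/inviscid-flow/boundary-layer programs of the paper, and SciPy's SLSQP is used in place of the conjugate-direction/feasible-direction procedure described there.

```python
import numpy as np
from scipy.optimize import minimize

def analysis(x):
    """Placeholder for the coupled geometry + inviscid flow + boundary-layer analysis.
    x = (max thickness, trailing-edge turning) in arbitrary normalized units."""
    thickness, turning = x
    loss = 0.02 + 0.5 * (thickness - 0.08) ** 2 + 0.3 * (turning - 0.6) ** 2  # surrogate loss coefficient
    diffusion = 0.35 + 0.8 * turning - 1.5 * thickness                        # surrogate diffusion factor
    return loss, diffusion

objective = lambda x: analysis(x)[0]
constraints = [
    {"type": "ineq", "fun": lambda x: 0.55 - analysis(x)[1]},  # diffusion factor <= 0.55
    {"type": "ineq", "fun": lambda x: x[0] - 0.05},            # minimum thickness (structural)
]

result = minimize(objective, x0=np.array([0.10, 0.50]), method="SLSQP",
                  bounds=[(0.03, 0.15), (0.2, 0.9)], constraints=constraints)
print(result.x, result.fun)
```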
Investigation of fatigue by Australian general practice registrars: a cross-sectional study.
Morgan, Simon; Henderson, Kim M; Tapley, Amanda; Thomson, Allison; Wilson, Jessica; Scott, John; Spike, Neil A; McArthur, Lawrie; van Driel, Mieke L; Magin, Parker J
2015-06-01
Fatigue is the most common undifferentiated problem presenting in general practice. Previous studies have shown that this presentation leads to multiple investigations. There is no published literature describing the management of patients with fatigue by general practice (GP) registrars. To document the investigation-ordering behaviour of GP registrars in managing patients with a new diagnosis of unexplained fatigue. This was a cross-sectional analysis of data from Registrar Clinical Encounters in Training (ReCEnT), an ongoing cohort study of GP registrars' consultations. We established the prevalence of new diagnoses of unexplained fatigue and associations with that diagnosis, the rate of test ordering and the number and types of investigations ordered. 644 registrars contributed data from 68 986 encounters. In 0.78% of patient encounters, a new diagnosis of unexplained fatigue was made. Pathology was ordered in 78.4% of these problems (versus 18.1% in non-fatigue problems), at a rate of 488 tests per 100 new fatigue problems. Our study suggests that unexplained fatigue elicits a non-rational approach to test ordering by registrars. These findings contribute to the understanding of GP registrar management of fatigue, and undifferentiated presentations more broadly, and suggest educational approaches to improve practice, including dealing with uncertainty.
NASA Astrophysics Data System (ADS)
Dvorak, R.; Henrard, J.
1993-06-01
Topics addressed include planetary theories, the Sitnikov problem, asteroids, resonance, general dynamical systems, and chaos and stability. Particular attention is given to recent progress in the theory and application of symplectic integrators, a computer-aided analysis of the Sitnikov problem, the chaotic behavior of trajectories for the asteroidal resonances, and the resonant motion in the restricted three-body problem. Also discussed are the second order long-period motion of Hyperion, meteorites from the asteroid 6 Hebe, and least squares parameter estimation in chaotic differential equations.
Normal-mode-based analysis of electron plasma waves with second-order Hermitian formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, J. J.; White, R. L.
The classic problem of the dynamic evolution and Landau damping of linear Langmuir electron waves in a collisionless plasma with Maxwellian background is cast as a second-order, self-adjoint problem with a continuum spectrum of real and positive squared frequencies. The corresponding complete basis of singular normal modes is obtained, along with their orthogonality relation. This yields easily the general expression of the time-reversal-invariant solution for any initial-value problem. Examples are then given for specific initial conditions that illustrate different behaviors of the Landau-damped macroscopic moments of the perturbations.
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1979-01-01
A computer program that can distinguish between different receiver designs and predict transient performance under variable solar flux, ambient temperature, etc., has a basic structure that fits a general heat transfer problem, but with specific features that are custom-made for solar receivers. The code is written in the MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver are included.
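The abstract does not reproduce the MBASIC code, but the kind of transient heat-transfer calculation it describes can be sketched as a lumped single-node energy balance marched in time. All parameter values below (capacitance, areas, coefficients, flux profile) are made-up placeholders, not taken from the report.

```python
import numpy as np

sigma = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
m_c   = 5.0e5              # receiver thermal capacitance, J/K (placeholder)
A_ap, A_loss = 1.0, 6.0    # aperture and loss areas, m^2 (placeholders)
alpha, eps, h = 0.95, 0.85, 10.0
T_amb = 300.0

def q_solar(t):            # concentrated flux on the aperture, W/m^2 (placeholder profile)
    return 6.0e5 * max(np.sin(np.pi * t / 3.6e4), 0.0)   # a 10-hour "day"

dt, T = 10.0, 300.0        # time step [s], initial temperature [K]
history = []
for k in range(int(3.6e4 / dt)):
    t = k * dt
    q_net = (alpha * A_ap * q_solar(t)
             - h * A_loss * (T - T_amb)
             - eps * sigma * A_loss * (T**4 - T_amb**4))
    T += dt * q_net / m_c                     # explicit Euler update of the energy balance
    history.append(T)
print("peak receiver temperature: %.1f K" % max(history))
```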
Normal-mode-based analysis of electron plasma waves with second-order Hermitian formalism
Ramos, J. J.; White, R. L.
2018-03-01
The classic problem of the dynamic evolution and Landau damping of linear Langmuir electron waves in a collisionless plasma with Maxwellian background is cast as a second-order, self-adjoint problem with a continuum spectrum of real and positive squared frequencies. The corresponding complete basis of singular normal modes is obtained, along with their orthogonality relation. This yields easily the general expression of the time-reversal-invariant solution for any initial-value problem. Examples are then given for specific initial conditions that illustrate different behaviors of the Landau-damped macroscopic moments of the perturbations.
Review and analysis of Virginia traffic law affecting bicycle safety.
DOT National Transportation Integrated Search
1980-01-01
In response to House Joint Resolution #105, passed during the 1980 session of the Virginia General Assembly, a study was made to assess the nature and scope of the bicycle-motor vehicle crash problem in the Commonwealth, to determine which provisions...
ERIC Educational Resources Information Center
Mathematics Teaching, 1973
1973-01-01
This column includes the description of a game involving addition and subtraction of integers represented by colored bricks, a general formula for an enlargement in the Cartesian plane, an analysis of the possibilities for certain games of board Solitaire, and a BASIC program for a recreational mathematics problem. (DT)
Basic research needed for stimulating the development of behavioral technologies
Mace, F. Charles
1994-01-01
The costs of disconnection between the basic and applied sectors of behavior analysis are reviewed, and some solutions to these problems are proposed. Central to these solutions are collaborations between basic and applied behavioral scientists in programmatic research that addresses the behavioral basis and solution of human behavior problems. This kind of collaboration parallels the deliberate interactions between basic and applied researchers that have proven to be so profitable in other scientific fields, such as medicine. Basic research questions of particular relevance to the development of behavioral technologies are posed in the following areas: response allocation, resistance to change, countercontrol, formation and differentiation/discrimination of stimulus and response classes, analysis of low-rate behavior, and rule-governed behavior. Three interrelated strategies to build connections between the basic and applied analysis of behavior are identified: (a) the development of nonhuman animal models of human behavior problems using operations that parallel plausible human circumstances, (b) replication of the modeled relations with human subjects in the operant laboratory, and (c) tests of the generality of the model with actual human problems in natural settings. PMID:16812734
SIG: a general-purpose signal processing program. User's manual. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, D.; Azevedo, S.
1985-05-09
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time-domain and frequency-domain signals. The manual contains a complete description of the SIG program from the user's standpoint. A brief exercise in using SIG is shown. Complete descriptions are given of each command in the SIG core. General information about the SIG structure, command processor, and graphics options is provided. An example usage of SIG for solving a problem is developed, and error message formats are briefly discussed. (LEW)
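The kind of time-domain/frequency-domain manipulation SIG performs can be illustrated with a short NumPy sketch. This is generic FFT-based filtering, not SIG's command set: transform a noisy record, zero everything above a cutoff, and transform back.

```python
import numpy as np

fs = 1000.0                                   # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)   # 50 Hz tone + noise

X = np.fft.rfft(x)                            # time domain -> frequency domain
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
X[freqs > 100.0] = 0.0                        # crude low-pass: discard content above 100 Hz
x_filtered = np.fft.irfft(X, n=t.size)        # frequency domain -> time domain

print("dominant frequency: %.1f Hz" % freqs[np.argmax(np.abs(np.fft.rfft(x_filtered)))])
```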
NASA Astrophysics Data System (ADS)
Sumin, M. I.
2015-06-01
A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.
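For readers unfamiliar with the classical Uzawa algorithm that the theorem generalizes, the sketch below applies plain dual ascent to a small equality-constrained quadratic program. The paper's regularized, perturbation-stable version is not reproduced; this is only the textbook iteration on a made-up problem.

```python
import numpy as np

# minimize 0.5 x'Qx + c'x  subject to  Ax = b
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam = np.zeros(1)          # dual variable (Lagrange multiplier estimate)
rho = 0.5                  # dual step size
for _ in range(200):
    x = np.linalg.solve(Q, -(c + A.T @ lam))   # primal step: minimize the Lagrangian in x
    lam = lam + rho * (A @ x - b)              # dual step: ascend on the constraint residual
print("x* =", x, " lambda* =", lam, " residual =", A @ x - b)
```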
NASA Astrophysics Data System (ADS)
Pan'kov, A. A.
1997-05-01
The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler special averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by the correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers take account of the probabilities for variation of the size and configuration of the inclusions using averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and they are compared with experimental data and reported solutions.
Coherent states, quantum gravity, and the Born-Oppenheimer approximation. I. General considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stottmeister, Alexander, E-mail: alexander.stottmeister@gravity.fau.de; Thiemann, Thomas, E-mail: thomas.thiemann@gravity.fau.de
2016-06-15
This article, as the first of three, aims at establishing the (time-dependent) Born-Oppenheimer approximation, in the sense of space adiabatic perturbation theory, for quantum systems constructed by techniques of the loop quantum gravity framework, especially the canonical formulation of the latter. The analysis presented here fits into a rather general framework and offers a solution to the problem of applying the usual Born-Oppenheimer ansatz for molecular (or structurally analogous) systems to more general quantum systems (e.g., spin-orbit models) by means of space adiabatic perturbation theory. The proposed solution is applied to a simple, finite dimensional model of interacting spin systems, which serves as a non-trivial, minimal model of the aforesaid problem. Furthermore, it is explained how the content of this article and its companion affect the possible extraction of quantum field theory on curved spacetime from loop quantum gravity (including matter fields).
More on Weinberg's no-go theorem in quantum gravity
NASA Astrophysics Data System (ADS)
Nagahama, Munehiro; Oda, Ichiro
2018-05-01
We complement Weinberg's no-go theorem on the cosmological constant problem in quantum gravity by generalizing it to the case of a scale-invariant theory. Our analysis makes use of the effective action and the BRST symmetry in a manifestly covariant quantum gravity instead of the classical Lagrangian density and the GL(4) symmetry in classical gravity. In this sense, our proof is very general since it does not depend on details of quantum gravity and holds true for general gravitational theories which are invariant under diffeomorphisms. As an application of our theorem, we comment on an idea that in the asymptotic safety scenario the functional renormalization flow drives a cosmological constant to zero, solving the cosmological constant problem without reference to fine tuning of parameters. Finally, we also comment on the possibility of extending the Weinberg theorem in quantum gravity to the case where the translational invariance is spontaneously broken.
Time-nonlocal kinetic equations, jerk and hyperjerk in plasmas and solar physics
NASA Astrophysics Data System (ADS)
El-Nabulsi, Rami Ahmad
2018-06-01
The simulation and analysis of nonlocal effects in fluids and plasmas is an inherently complicated problem due to the massive breadth of physics required to describe the nonlocal dynamics. This is a multi-physics problem that draws upon various miscellaneous fields, such as electromagnetism and statistical mechanics. In this paper we strive to focus on one narrow but motivating mathematical way: the derivation of nonlocal plasma-fluid equations from a generalized nonlocal Liouville derivative operator motivated from Suykens's nonlocal arguments. The paper aims to provide a guideline toward modeling nonlocal effects occurring in plasma-fluid systems by means of a generalized nonlocal Boltzmann equation. The generalized nonlocal equations of fluid dynamics are derived and their implications in plasma-fluid systems are addressed, discussed and analyzed. Three main topics were discussed: Landau damping in plasma electrodynamics, ideal MHD and solar wind. A number of features were revealed, analyzed and confronted with recent research results and observations.
Comment on "Horizontal aquifer movement in a Theis-Thiem confined system," by Donald C. Helm
Hsieh, Paul A.; Cooley, Richard L.
1995-01-01
In a recent paper, Helm [1994] presents an analysis of horizontal aquifer movement induced by groundwater withdrawal from a confined aquifer in which fluid and grains are incompressible. The analysis considers the aquifer in isolation (ignoring overlying and underlying strata) and assumes that the aquifer deforms purely in the horizontal direction (with no vertical movement). Helm's solution for grain displacement is obtained through introduction of a quantity known as the bulk flux, qb, defined as qb = n·vw + (1 - n)·vs, where n is porosity, vw is the velocity of water, and vs is the velocity of the solid grains. On the basis of the bulk flux concept, Helm develops an explanation for the driving force on the bulk material. It is our view that Helm's analysis is subject to four limitations. First, Helm's assumption of zero vertical displacement is not supported by field observations and could result in overestimation of radial displacement. Second, in ignoring the role of overlying and underlying strata, Helm's solution does not yield reliable estimates of aquifer deformation. Third, Helm's solution method works only for problems that involve one spatial coordinate (for example, x or r) but does not generally work for problems involving three-dimensional flow and deformation. Fourth, Helm's explanation of the driving force on the bulk material is faulty for general three-dimensional problems. The purpose of our comment is to discuss these four issues.
Hazards and occupational risk in hard coal mines - a critical analysis of legal requirements
NASA Astrophysics Data System (ADS)
Krause, Marcin
2017-11-01
This publication concerns the problems of occupational safety and health in hard coal mines, the basic elements of which are the mining hazards and the occupational risk. The work includes a comparative analysis of selected provisions of general and industry-specific law regarding the analysis of hazards and occupational risk assessment. Based on a critical analysis of legal requirements, basic assumptions regarding the practical guidelines for occupational risk assessment in underground coal mines have been proposed.
An Extended Microcomputer-Based Network Optimization Package.
1982-10-01
Keywords: network, generalized network, microcomputer, optimization, network with gains, linear programming. The network problem, in turn, can be viewed as a specialization of a linear programming problem having at most two non-zero entries in each...
The general solution to the classical problem of finite Euler Bernoulli beam
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Amba-Rao, C. L.
1977-01-01
An analytical solution is obtained for the problem of free and forced vibrations of a finite Euler Bernoulli beam with arbitrary (partially fixed) boundary conditions. The effects of linear viscous damping, Winkler foundation, constant axial tension, a concentrated mass, and an arbitrary forcing function are included in the analysis. No restriction is placed on the values of the parameters involved, and the solution presented here contains all cited previous solutions as special cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, D.K.
The theoretical and numerical background for the finite element computer program, TORO II, is presented in detail. TORO II is designed for the multi-dimensional analysis of nonlinear, electromagnetic field problems described by the quasi-static form of Maxwell's equations. A general description of the boundary value problems treated by the program is presented. The finite element formulation and the associated numerical methods used in TORO II are also outlined. Instructions for the use of the code are documented in SAND96-0903; examples of problems analyzed with the code are also provided in the user's manual. 24 refs., 8 figs.
Solanki, Neeraj; Kumar, Anuj; Awasthi, Neha; Kundu, Anjali; Mathur, Suveet; Bidhumadhav, Suresh
2016-06-01
Dental problems are an additional burden on children with special health care needs (CSHCN) because of the additional hospitalization pressure they face for the treatment of various serious medical problems. These patients have a higher incidence of dental caries due to the increased quantity of sugar involved in their drug therapies and lower salivary flow in the oral cavity. Such patients are difficult to treat with local anesthesia or inhaled sedatives. Single-sitting dental treatment is possible in these patients under general anesthesia. Therefore, we conducted this retrospective analysis of the oral health status of CSHCN receiving various dental treatments in a given population. A total of 200 CSHCN aged 14 years or less, reporting to the pediatric wing of the general hospital from 2005 to 2014, who underwent comprehensive dental treatment under general anesthesia were included in the study. Patients with a history of any additional systemic illness, any malignancy, any known drug allergy, or previous history of any dental treatment were excluded from the study. Complete mouth rehabilitation was done in these patients under general anesthesia following standard protocols. Data regarding the type, duration, and severity of each patient's disability were collected and analyzed. All the results were analyzed with Statistical Package for the Social Sciences (SPSS) software. Chi-square test, Student's t-test, and one-way analysis of variance were used to assess the level of significance. Statistically significant results were obtained when the subjects' decayed-missing-filled/decayed-extracted-filled teeth indices were analyzed by age group. A significant difference was observed only in cases where patients underwent complete crown placement, even when grouped by type of disability. When analyzing prevalence, statistically significant results were observed when patients were grouped by age. In CSHCN, dental pathologies and caries indices are increased regardless of the type or extent of disability. Children with special health care needs should be given special oral health care, and regular dental checkups should be conducted, as they are more prone to dental problems.
NASA Astrophysics Data System (ADS)
Yang, Bei; Qin, Qi-zhong; Han, Ling-li; Lin, Jing; Chen, Yu
2018-02-01
To investigate the relieving effects of hot spring balneotherapy on mental stress, sleep disorder, general health problems, and women's health problems in sub-healthy people, we recruited 500 volunteers in sub-health in Chongqing, and 362 volunteers completed the project, including 223 in the intervention group and 139 in the control group. The intervention group underwent hot spring balneotherapy for 5 months, while the control group did not. The two groups took questionnaire investigation (general data, mental stress, emotional status, sleep quality, general health problems, as well as some women's health problems) and physical examination (height, weight, waist circumference, blood pressure, blood lipid, blood sugar) 5 months before and after the intervention, respectively. After intervention, sleep disorder (difficulty in falling asleep ( P = 0.017); dreaminess, nightmare suffering, and restless sleep ( P = 0.013); easy awakening ( P = 0.003) and difficulty in falling into sleep again after awakening( P = 0.016); and mental stress ( P = 0.031) and problems of general health (head pain ( P = 0.026), joint pain( P = 0.009), leg or foot cramps ( P = 0.001), blurred vision ( P = 0.009)) were relieved significantly in the intervention group, as compared with the control group. While other indicators (fatigue, eye tiredness, limb numbness, constipation, skin allergy) and women's health problems (breast distending pain; dysmenorrhea, irregular menstruation) were relieved significantly in the self-comparison of the intervention group before and after intervention ( P < 0.05), but showed no statistically significant difference between two groups ( P > 0.05). All indications (except bad mood, low mood, and worry or irritability) in the intervention group significantly improved, with effect size from 0.096 to 1.302. Multiple logistic regression analysis showed that the frequency, length, and location of balneotherapy in the intervention group were the factors influencing emotion, sleep, and health condition ( P < 0.05). Relief of insomnia, fatigue, and leg or foot cramps was greater in old-age group than in young-aged group ( P < 0.05). Physical examination found that waist circumferences in women of various ages under 55 years were significantly reduced in the intervention group ( P < 0.05), while that in men did not significantly change ( P > 0.05). Spa therapy (balneotherapy) relieves mental stress, sleep disorder, general health, and reduces women's waist circumferences in sub-healthy people.
A Management Case Analysis of the Department of Defense Contractor Risk Assessment Guide Program
1990-12-01
Organization: 1. Past Government Organization Problems; 2. Current Government Organization; 3. The Contractor's Organization. ...Program and urged CODSIA to take a leadership role in encouraging its members to participate. The DCAA [Ref 11: p. 45] informed defense contractors in... conferences, and through the leadership of officials including the Under Secretary of the Navy for Acquisition, the DOD Comptroller General, and the DOD Inspector General.
Optimal steering for kinematic vehicles with applications to spatially distributed agents
NASA Astrophysics Data System (ADS)
Brown, Scott; Praeger, Cheryl E.; Giudici, Michael
While there is no universal method to address control problems involving networks of autonomous vehicles, there exist a few promising schemes that apply to different specific classes of problems, which have attracted the attention of many researchers from different fields. In particular, one way to extend techniques that address problems involving a single autonomous vehicle to those involving teams of autonomous vehicles is to use the concept of Voronoi diagram. The Voronoi diagram provides a spatial partition of the environment the team of vehicles operate in, where each element of this partition is associated with a unique vehicle from the team. The partition induces a graph abstraction of the operating space that is in an one-to-one correspondence with the network abstraction of the team of autonomous vehicles; a fact that can provide both conceptual and analytical advantages during mission planning and execution. In this dissertation, we propose the use of a new class of Voronoi-like partitioning schemes with respect to state-dependent proximity (pseudo-) metrics rather than the Euclidean distance or other generalized distance functions, which are typically used in the literature. An important nuance here is that, in contrast to the Euclidean distance, state-dependent metrics can succinctly capture system theoretic features of each vehicle from the team (e.g., vehicle kinematics), as well as the environment-vehicle interactions, which are induced, for example, by local winds/currents. We subsequently illustrate how the proposed concept of state-dependent Voronoi-like partition can induce local control schemes for problems involving networks of spatially distributed autonomous vehicles by examining a sequential pursuit problem of a maneuvering target by a group of pursuers distributed in the plane. The construction of generalized Voronoi diagrams with respect to state-dependent metrics poses some significant challenges. First, the generalized distance metric may be a function of the direction of motion of the vehicle (anisotropic pseudo-distance function) and/or may not be expressible in closed form. Second, such problems fall under the general class of partitioning problems for which the vehicles' dynamics must be taken into account. The topology of the vehicle's configuration space may be non-Euclidean, for example, it may be a manifold embedded in a Euclidean space. In other words, these problems may not be reducible to generalized Voronoi diagram problems for which efficient construction schemes, analytical and/or computational, exist in the literature. This research effort pursues three main objectives. First, we present the complete solution of different steering problems involving a single vehicle in the presence of motion constraints imposed by the maneuverability envelope of the vehicle and/or the presence of a drift field induced by winds/currents in its vicinity. The analysis of each steering problem involving a single vehicle provides us with a state-dependent generalized metric, such as the minimum time-to-go/come. We subsequently use these state-dependent generalized distance functions as the proximity metrics in the formulation of generalized Voronoi-like partitioning problems. The characterization of the solutions of these state-dependent Voronoi-like partitioning problems using either analytical or computational techniques constitutes the second main objective of this dissertation. 
The third objective of this research effort is to illustrate the use of the proposed concept of state-dependent Voronoi-like partition as a means for passing from control techniques that apply to problems involving a single vehicle to problems involving networks of spatially distributed autonomous vehicles. To this aim, we formulate the problem of sequential/relay pursuit of a maneuvering target by a group of spatially distributed pursuers and subsequently propose a distributed group pursuit strategy that directly derives from the solution of a state-dependent Voronoi-like partitioning problem. (Abstract shortened by UMI.)
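A minimal numerical illustration of a state-dependent Voronoi-like partition is given below. Instead of Euclidean distance, each point of a planar grid is assigned to the vehicle that can reach it in minimum time when every vehicle moves at constant speed v through a uniform drift field w (for v > |w| the time-to-go has a closed form). The speeds, drift, and vehicle positions are arbitrary choices for the sketch, not data from the dissertation, and the resulting anisotropic cells generally differ from the ordinary Voronoi diagram of the same generators.

```python
import numpy as np

v = 1.0                                  # vehicle speed
w = np.array([0.4, 0.1])                 # uniform drift (wind/current), |w| < v
generators = np.array([[-2.0, 0.0], [1.0, 1.5], [2.0, -1.0]])   # vehicle positions

def time_to_go(p, g):
    """Minimum time for a constant-speed vehicle starting at g to reach p under drift w:
    solve |p - g - w t| = v t for the positive root."""
    d = p - g
    dw = d @ w
    disc = dw**2 + (v**2 - w @ w) * (d @ d)
    return (np.sqrt(disc) - dw) / (v**2 - w @ w)

xs = np.linspace(-4, 4, 201)
ys = np.linspace(-4, 4, 201)
labels = np.empty((ys.size, xs.size), dtype=int)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        p = np.array([x, y])
        labels[i, j] = np.argmin([time_to_go(p, g) for g in generators])

# Each label value marks the "Voronoi-like" cell of one vehicle under the time-to-go metric.
print("cell sizes (grid points):", np.bincount(labels.ravel()))
```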
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1999-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing a subset of structural problems efficiently. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
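The response-surface idea referred to in the last sentences can be sketched in a few lines: sample the expensive analysis at a handful of designs, fit a quadratic polynomial surrogate by least squares, and interrogate the cheap surrogate instead. The "expensive analysis" below is a stand-in function, not PASCO, PANDA2, or NASTRAN output.

```python
import numpy as np

def expensive_analysis(x1, x2):
    """Placeholder for a costly structural analysis (e.g., buckling load of a panel)."""
    return 1.0 + 0.8 * x1 - 0.3 * x2 + 0.5 * x1 * x2 - 0.2 * x1**2 + 0.1 * np.sin(3 * x2)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))                   # 30 sampled designs
y = np.array([expensive_analysis(a, b) for a, b in X])

def basis(x1, x2):                                          # full quadratic basis in 2 variables
    return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

Phi = np.array([basis(a, b) for a, b in X])
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # least-squares quadratic response surface

surrogate = lambda x1, x2: basis(x1, x2) @ coeffs
print("true  :", expensive_analysis(0.3, -0.4))
print("fitted:", surrogate(0.3, -0.4))
```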
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1998-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for designing a subset of structural problems efficiently. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
Ring, Adele; Dowrick, Christopher; Humphris, Gerry; Salmon, Peter
2004-05-01
To identify the ways in which patients with medically unexplained symptoms present their problems and needs to general practitioners and to identify the forms of presentation that might lead general practitioners to feel pressurised to deliver somatic interventions. Qualitative analysis of audiorecorded consultations between patients and general practitioners. 7 general practices in Merseyside, England. 36 patients selected consecutively from 21 general practices, in whom doctors considered that patients' symptoms were medically unexplained. Inductive qualitative analysis of ways in which patients presented their symptoms to general practitioners. Although 34 patients received somatic interventions (27 received drug prescriptions, 12 underwent investigations, and four were referred), only 10 requested them. However, patients presented in other ways that had the potential to pressurise general practitioners, including: graphic and emotional language; complex patterns of symptoms that resisted explanation; description of emotional and social effects of symptoms; reference to other individuals as authority for the severity of symptoms; and biomedical explanations. Most patients with unexplained symptoms received somatic interventions from their general practitioners but had not requested them. Though such patients apparently seek to engage the general practitioner by conveying the reality of their suffering, general practitioners respond symptomatically.
DOT National Transportation Integrated Search
2012-11-01
When a bridge engineer encounters a design or analysis problem concerning a bridge substructure, that structure will commonly have a mixture of member types, some slender, and some squat. Slender members are generally governed by flexure, and normal ...
Guidelines for analyzing the capacity of d-regions with premature concrete deterioration of ASR/DEF.
DOT National Transportation Integrated Search
2015-03-01
When a bridge engineer encounters a design or analysis problem concerning a bridge substructure, : that structure will commonly have a mixture of member types, some slender, and some squat. : Slender members are generally governed by flexure, and nor...
A SAFE consortium position paper: Update on microbial safety of fresh produce
USDA-ARS?s Scientific Manuscript database
Surveys of fresh produce demonstrate potential to become contaminated with pathogenic microorganisms. The analysis of microbiological risk is generally divided into three categories: Risk Assessment identifies the factors that contribute to a problem; Risk Management identifies ways to solve a probl...
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, gmin = O(n·2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
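A brute-force version of the gap calculation can be carried out for very small instances to make the setup concrete. The sketch below builds H(s) = (1-s)·H_B + s·H_P for Number Partitioning, with H_B = -Σ σ_x^i and H_P = (Σ a_i σ_z^i)², fixes one spin to remove the global spin-flip degeneracy, and scans s for the minimum gap by exact diagonalization. The instance and the symmetry-fixing trick are illustrative choices; the exponential scaling limits this to toy sizes, and it is not the Green-function analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(1, 64, size=8).astype(float)      # a small Number Partitioning instance

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on(site, op, n):
    """Place a single-spin operator on `site` of an n-spin register (Kronecker product)."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n = len(a) - 1                                      # fix the last spin "up" to break the global flip symmetry
dim = 2 ** n
HB = -sum(op_on(i, sx, n) for i in range(n))        # transverse-field driver Hamiltonian
Sz = sum(a[i] * op_on(i, sz, n) for i in range(n)) + a[-1] * np.eye(dim)
HP = Sz @ Sz                                        # cost Hamiltonian: squared partition residue

gaps = []
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1.0 - s) * HB + s * HP)
    gaps.append(evals[1] - evals[0])
print("minimum excitation gap along the path: %.4f" % min(gaps))
```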
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, gmin = O(n·2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
Toward the automated analysis of plasma physics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mynick, H.E.
1989-04-01
A program (CALC) is described, which carries out nontrivial plasma physics calculations in a manner intended to emulate the approach of a human theorist. This includes the initial process of gathering the relevant equations from a plasma knowledge base, and then determining how to solve them. Solution of the sets of equations governing physics problems, which in general have a nonuniform, irregular structure not amenable to solution by standardized algorithmic procedures, is facilitated by an analysis of the structure of the equations and the relations among them. This often permits decompositions of the full problem into subproblems, and other simplifications in form, which render the resultant subsystems soluble by more standardized tools. CALC's operation is illustrated by a detailed description of its treatment of a sample plasma calculation. 5 refs., 3 figs.
Ruiz, Mark A; Douglas, Kevin S; Edens, John F; Nikolova, Natalia L; Lilienfeld, Scott O
2012-03-01
We undertook a secondary data analysis to study issues relevant to co-occurring mental health and substance disorder in a combined sample of offenders (N = 3,197). Using the Personality Assessment Inventory, we compared the frequency of depressive, traumatic stress, and personality disorder symptom elevations across offenders with and without substance problems, identified the extent to which co-occurring problems were accompanied by risk factors for suicide and aggression, and tested for gender differences. Offenders with substance problems were more likely than others to have increased mental health problems and risk factors for suicide or aggression. Women with substance problems, compared with men, had higher depression, traumatic stress, and borderline features, in addition to lower antisocial features. The frequency with which suicide and aggression risk factors were associated with mental health problems was generally similar across men and women. Measurement issues relevant to co-occurring disorder and risk assessment are discussed.
NASA Technical Reports Server (NTRS)
Hornberger, G. M.; Rastetter, E. B.
1982-01-01
A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
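The Monte-Carlo flavor of generalized sensitivity analysis can be sketched as follows: draw parameter sets at random, run the model, classify each run as exhibiting the problem-defining behavior or not, and rank parameters by how strongly their "behavior" and "non-behavior" distributions separate (here via a two-sample Kolmogorov-Smirnov statistic). The toy model, the behavior criterion, and the parameter ranges are invented for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def model(k1, k2, k3):
    """Toy ecosystem-style model: peak of a forced first-order response (placeholder)."""
    t = np.linspace(0.0, 10.0, 200)
    x = (k1 / k2) * (1.0 - np.exp(-k2 * t)) + 0.05 * k3 * np.sin(t)
    return x.max()

rng = np.random.default_rng(0)
names = ["k1", "k2", "k3"]
params = rng.uniform([0.1, 0.1, 0.0], [2.0, 2.0, 1.0], size=(5000, 3))
output = np.array([model(*p) for p in params])

behavior = output > 2.0                           # problem-defining behavior (arbitrary threshold)
for j, name in enumerate(names):
    stat, _ = ks_2samp(params[behavior, j], params[~behavior, j])
    print(f"{name}: KS separation = {stat:.3f}")  # larger separation => more influential parameter
```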
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczk, Maciej
The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.
Friedli, K; King, M B; Lloyd, M
2000-01-01
BACKGROUND: Counselling is currently adopted in many general practices, despite limited evidence of clinical and cost effectiveness. AIM: To compare direct and indirect costs of counsellors and general practitioners (GPs) in providing care to people with emotional problems. METHOD: We carried out a prospective, randomized controlled trial of non-directive counselling and routine general practice care in 14 general practices in north London. Counsellors adhered to a Rogerian model of counselling. The counselling sessions ranged from one to 12 sessions over 12 weeks. As reported elsewhere, there were no differences in clinical outcomes between the two groups. Therefore, we conducted a cost minimisation analysis. We present only the economic outcomes in this paper. Main outcome measures were cost data (service utilisation, travel, and work absence) at baseline, three months, and nine months. RESULTS: One hundred and thirty-six patients with emotional problems, mainly depression, took part. Seventy patients were randomised to the counsellors and 66 to the GPs. The average direct and indirect costs for the counsellor group were 162.09 Pounds more per patient after three months compared with costs for the GP group; however, over the following six months the counsellor group cost 87.00 Pounds less per patient than the GP group. Over the total nine-month period, the counsellor group remained more expensive per patient. CONCLUSIONS: Referral to counselling is no more clinically effective or expensive than GP care over a nine-month period in terms of direct plus indirect costs. However, further research is needed to establish indirect costs of introducing a counsellor into general practice. PMID:10897510
Sorsdahl, Katherine; Stein, Dan J; Myers, Bronwyn
2017-04-01
The Social Problem Solving Inventory-Revised Short-Form (SPSI-R:SF) has been used in several countries to identify problem-solving deficits among clinical and general populations in order to guide cognitive-behavioural interventions. Yet, very few studies have evaluated its psychometric properties. Three language versions of the questionnaire were administered to a general population sample comprising 1000 participants (771 English-, 178 Afrikaans- and 101 Xhosa-speakers). Of these participants, 210 were randomly selected to establish test-retest reliability (70 in each language). Principal component analysis was performed to examine the applicability of the factor structure of the original questionnaire to the South African data. Supplementary psychometric analyses were performed, including internal consistency and test-retest reliability. Collectively, results provide initial evidence of the reliability and validity of the SPSI-R:SF for the assessment of problem solving deficits in South Africa. Further studies that explore how the Afrikaans language version of the SPSI-R:SF can be improved and that establish the predictive validity of scores on the SPSI-R:SF are needed. © 2015 International Union of Psychological Science.
Functional analysis and treatment of problem behavior evoked by noise.
McCord, B E; Iwata, B A; Galensky, T L; Ellingson, S A; Thomson, R J
2001-01-01
We conducted a four-part investigation to develop methods for assessing and treating problem behavior evoked by noise. In Phase 1, 7 participants with developmental disabilities who were described as being hypersensitive to specific noises were exposed to a series of noises under controlled conditions. Results for 2 of the participants verified that noise was apparently an aversive event. In Phase 2, results of functional analyses indicated that these 2 participants' problem behaviors were maintained by escape from noise. In Phase 3, preference assessments were conducted to identify reinforcers that might be used during treatment. Finally, in Phase 4, the 2 participants' problem behaviors were successfully treated with extinction, stimulus fading, and a differential-reinforcement-of-other-behavior (DRO) contingency (only 1 participant required DRO). Treatment effects for both participants generalized to their home environments and were maintained during a follow-up assessment. Procedures and results were discussed in terms of their relevance to the systematic assessment of noise as an establishing operation (EO) and, more generally, to the identification of idiosyncratic EO influences on behavior. PMID:11800184
Health-Related Quality of Life of the General German Population in 2015: Results from the EQ-5D-5L.
Huber, Manuel B; Felix, Julia; Vogelmann, Martin; Leidl, Reiner
2017-04-16
The EQ-5D-5L is a widely used generic instrument to measure health-related quality of life. This study evaluates health perception in a representative sample of the general German population from 2015. To compare results over time, a component analysis technique was used that separates changes in the description and valuation of health states. The whole sample and also subgroups, stratified by sociodemographic parameters as well as disease affliction, were analyzed. In total, 2040 questionnaires (48.4% male, mean age 47.3 years) were included. The dimension with the lowest number of reported problems was self-care (93.0% without problems), and the dimension with the highest proportion of impairment was pain/discomfort (71.2% without problems). Some 64.3% of the study population were identified as problem-free. The visual analog scale (VAS) mean for all participants was 85.1. Low education was connected with significantly lower VAS scores, but the effect was small. Depression, heart disease, and diabetes had a strong significant negative effect on reported VAS means. Results were slightly better than those in a similar 2012 survey; the most important driver was the increase in the share of the study population that reported being problem-free. In international comparisons, health perception of the general German population is relatively high and, compared with previous German studies, fairly stable over recent years. Elderly and sick people continue to report significant reductions in perceived health states.
Introduction to Generalized Functions with Applications in Aerodynamics and Aeroacoustics
NASA Technical Reports Server (NTRS)
Farassat, F.
1994-01-01
Generalized functions have many applications in science and engineering. One useful aspect is that discontinuous functions can be handled as easily as continuous or differentiable functions and provide a powerful tool in formulating and solving many problems of aerodynamics and acoustics. Furthermore, generalized function theory elucidates and unifies many ad hoc mathematical approaches used by engineers and scientists. We define generalized functions as continuous linear functionals on the space of infinitely differentiable functions with compact support, then introduce the concept of generalized differentiation. Generalized differentiation is the most important concept in generalized function theory, and the applications we present utilize mainly this concept. First, some results of classical analysis are derived using generalized function theory. Other applications of the generalized function theory in aerodynamics discussed here are the derivations of general transport theorems for deriving governing equations of fluid mechanics, the interpretation of the finite part of divergent integrals, the derivation of the Oswatitsch integral equation of transonic flow, and the analysis of velocity field discontinuities as sources of vorticity. Applications in aeroacoustics include the derivation of the Kirchhoff formula for moving surfaces, the noise from moving surfaces, and shock noise source strength based on the Ffowcs Williams-Hawkings equation.
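The two definitions named above (a generalized function as a functional on test functions, and generalized differentiation) can be stated compactly; the rendering below uses standard notation of my own choosing, not the report's.

```latex
% A generalized function f acts on infinitely differentiable test functions phi
% with compact support; its generalized derivative moves the derivative onto phi.
\[
  f:\ \phi \mapsto \langle f, \phi \rangle , \qquad \phi \in C_0^{\infty}(\mathbb{R}^n),
\]
\[
  \langle f', \phi \rangle = -\langle f, \phi' \rangle .
\]
% Example: for the Heaviside step h(x),
% \( \langle h', \phi \rangle = -\int_0^{\infty} \phi'(x)\,dx = \phi(0) = \langle \delta, \phi \rangle \),
% so h' is the Dirac delta, which is how discontinuous functions acquire derivatives here.
```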
NASA Technical Reports Server (NTRS)
Labudde, R. A.
1972-01-01
An attempt has been made to keep the programs as subroutine oriented as possible. Usually only the main programs are directly concerned with the problem of total cross sections. In particular the subroutines POLFIT, BILINR, GASS59/MAXLIK, SYMOR, MATIN, STUDNT, DNTERP, DIFTAB, FORDIF, EPSALG, REGFAL and ADSIMP are completely general, and are concerned only with the problems of numerical analysis and statistics. Each subroutine is independently documented.
Newman, Michelle G.; Jacobson, Nicholas C.; Erickson, Thane M.; Fisher, Aaron J.
2016-01-01
Objective: We examined dimensional interpersonal problems as moderators of cognitive behavioral therapy (CBT) versus its components (cognitive therapy [CT] and behavioral therapy [BT]). We predicted that people with generalized anxiety disorder (GAD) whose interpersonal problems reflected more dominance and intrusiveness would respond best to a relaxation-based BT compared to CT or CBT, based on studies showing that people with personality features associated with a need for autonomy respond best to treatments that are more experiential, concrete, and self-directed compared to therapies involving abstract analysis of one’s problems (e.g., containing CT). Method: This was a secondary analysis of Borkovec, Newman, Pincus, and Lytle (2002). Forty-seven participants with principal diagnoses of GAD were assigned randomly to combined CBT (n = 16), CT (n = 15), or BT (n = 16). Results: As predicted, compared to participants with less intrusiveness, those with dimensionally more intrusiveness responded with greater GAD symptom reduction to BT than to CBT at posttreatment and greater change to BT than to CT or CBT across all follow-up points. Similarly, those with more dominance responded better to BT compared to CT and CBT at all follow-up points. Additionally, being overly nurturant at baseline was associated with GAD symptoms at baseline, post, and all follow-up time-points regardless of therapy condition. Conclusions: Generally anxious individuals with domineering and intrusive problems associated with higher need for control may respond better to experiential behavioral interventions than to cognitive interventions, which may be perceived as a direct challenge of their perceptions. PMID:28077221
Newman, Michelle G; Jacobson, Nicholas C; Erickson, Thane M; Fisher, Aaron J
2017-01-01
We examined dimensional interpersonal problems as moderators of cognitive behavioral therapy (CBT) versus its components (cognitive therapy [CT] and behavioral therapy [BT]). We predicted that people with generalized anxiety disorder (GAD) whose interpersonal problems reflected more dominance and intrusiveness would respond best to a relaxation-based BT compared to CT or CBT, based on studies showing that people with personality features associated with a need for autonomy respond best to treatments that are more experiential, concrete, and self-directed compared to therapies involving abstract analysis of one's problems (e.g., containing CT). This was a secondary analysis of Borkovec, Newman, Pincus, and Lytle (2002). Forty-seven participants with principal diagnoses of GAD were assigned randomly to combined CBT (n = 16), CT (n = 15), or BT (n = 16). As predicted, compared to participants with less intrusiveness, those with dimensionally more intrusiveness responded with greater GAD symptom reduction to BT than to CBT at posttreatment and greater change to BT than to CT or CBT across all follow-up points. Similarly, those with more dominance responded better to BT compared to CT and CBT at all follow-up points. Additionally, being overly nurturant at baseline was associated with GAD symptoms at baseline, post, and all follow-up time-points regardless of therapy condition. Generally anxious individuals with domineering and intrusive problems associated with higher need for control may respond better to experiential behavioral interventions than to cognitive interventions, which may be perceived as a direct challenge of their perceptions. Copyright © 2016. Published by Elsevier Ltd.
Language-specific memory for everyday arithmetic facts in Chinese-English bilinguals.
Chen, Yalin; Yanke, Jill; Campbell, Jamie I D
2016-04-01
The role of language in memory for arithmetic facts remains controversial. Here, we examined transfer of memory training for evidence that bilinguals may acquire language-specific memory stores for everyday arithmetic facts. Chinese-English bilingual adults (n = 32) were trained on different subsets of simple addition and multiplication problems. Each operation was trained in one language or the other. The subsequent test phase included all problems with addition and multiplication alternating across trials in two blocks, one in each language. Averaging over training language, the response time (RT) gains for trained problems relative to untrained problems were greater in the trained language than in the untrained language. Subsequent analysis showed that English training produced larger RT gains for trained problems relative to untrained problems in English at test relative to the untrained Chinese language. In contrast, there was no evidence with Chinese training that problem-specific RT gains differed between Chinese and the untrained English language. We propose that training in Chinese promoted a translation strategy for English arithmetic (particularly multiplication) that produced strong cross-language generalization of practice, whereas training in English strengthened relatively weak, English-language arithmetic memories and produced little generalization to Chinese (i.e., English training did not induce an English translation strategy for Chinese language trials). The results support the existence of language-specific strengthening of memory for everyday arithmetic facts.
Ecological Effects of Weather Modification: A Problem Analysis.
ERIC Educational Resources Information Center
Cooper, Charles F.; Jolly, William C.
This publication reviews the potential hazards to the environment of weather modification techniques as they eventually become capable of producing large scale weather pattern modifications. Such weather modifications could result in ecological changes which would generally require several years to be fully evident, including the alteration of…
Analysis of light vehicle crashes and pre-crash scenarios based on the 2000 General Estimates System
DOT National Transportation Integrated Search
2003-02-01
This report analyzes the problem of light vehicle crashes in the United States to support the development and assessment of effective crash avoidance systems as part of the U.S. Department of Transportation's Intelligent Vehicle Initiative. The analy...
MULTIVARIATE RECEPTOR MODELS-CURRENT PRACTICE AND FUTURE TRENDS. (R826238)
Multivariate receptor models have been applied to the analysis of air quality data for some time. However, solving the general mixture problem is important in several other fields. This paper looks at the panoply of these models with a view to identifying common challenges and ...
Methods for Estimating Payload/Vehicle Design Loads
NASA Technical Reports Server (NTRS)
Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.
1983-01-01
Several methods are compared with respect to accuracy, design conservatism, and cost. Objective of survey: reduce time and expense of load calculation by selecting an approximate method having sufficient accuracy for the problem at hand. The methods are generally applicable to dynamic load analysis in aerospace and other vehicle/payload systems.
Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail; Volpi, Michele; Copa, Loris
2010-05-01
The problem of environmental monitoring networks optimization (MNO) belongs to one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as a design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has some advantages and drawbacks. In the present research we study an application of advanced techniques following from statistical learning theory (SLT), namely support vector machines (SVM), and consider the optimization of monitoring networks when dealing with a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution maximizes the margin of the decision boundary between classes and has a good generalization property for noisy data. The sparse solution of SVM is based on support vectors, i.e., data which contribute to the solution with nonzero weights. Fundamentally, the MNO for classification problems can be considered as a task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (error on new independent measurements). In SLT this is a typical problem of active learning: the selection of new unlabelled points which efficiently reduce the testing error. A classical approach (margin sampling) to active learning is to sample the points closest to the classification boundary. This solution is suboptimal when points (or generally the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of the MNO problem: 1) hierarchical top-down clustering in an input space in order to remove redundancy when data are clustered, and 2) a general method (independent of the classifier) which gives posterior probabilities that can be used to define the classifier confidence and corresponding proposals for new measurement points. The basic ideas and procedures are explained by applying simulated data sets. The real case study deals with the analysis and mapping of soil types, which is a multi-class classification problem. Maps of soil types are important for the analysis and 3D modeling of heavy metals migration in soil and prediction risk mapping. The results obtained demonstrate the high quality of SVM mapping and efficiency of monitoring network optimization by using active learning approaches. The research was partly supported by SNSF projects No. 200021-126505 and 200020-121835.
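The baseline margin-sampling step described above can be sketched in a few lines. The following is a generic illustration with synthetic coordinates and a binary class label, assuming scikit-learn is available; it is not the authors' implementation of the two new methods.

```python
# Margin-sampling active learning sketch: pick candidate measurement points
# closest to the current SVM decision boundary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.uniform(size=(50, 2))            # existing monitoring points (x, y)
y_labeled = (X_labeled[:, 0] > 0.5).astype(int)  # observed classes (e.g., soil types)
X_candidates = rng.uniform(size=(500, 2))        # candidate locations for new measurements

clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)

# Smallest absolute decision-function value = closest to the boundary.
scores = np.abs(clf.decision_function(X_candidates))
new_points = X_candidates[np.argsort(scores)[:10]]
print(new_points)
```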
Banchetti, Rosalba
2005-01-01
A comparative reappraisal of the general problem of evolutionary trends and constraints of the locomotion phenomenon from prokaryotes to protozoa to metazoa was carried out. These organisms elaborated different propulsive systems, different control systems of motion, and different systems for analyzing stimuli. A general understanding of the locomotion phenomenon was reached and ciliate behaviour was positioned within the wider context of the evolution of biological displacement.
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Hatton, Chris; Emerson, Eric; Robertson, Janet; Baines, Susannah
2017-11-24
Children with mild/moderate intellectual disabilities are at greater risk for mental health problems, with socio-economic factors and adversity partly accounting for this. Fewer data are available for adolescents. Secondary analysis was undertaken of the Next Steps annual panel study following a cohort through adolescence into adulthood containing self-report mental health data up to age 16/17. Participants with mild/moderate intellectual disabilities were identified through data linkage with educational records. Adolescents with mild/moderate intellectual disabilities were more likely than non-disabled peers to experience socio-economic disadvantage and bullying. Incidence rates of mental health problems were generally not significantly different between adolescents with and without intellectual disabilities. These findings are consistent with higher rates of persistent mental health problems beginning earlier among children with intellectual disabilities. Greater attention needs to be paid to the timecourse of mental health problems, and the impact of socio-economic factors, family and peers on mental health. © 2017 John Wiley & Sons Ltd.
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.; Shi, Y.
1990-01-01
A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure used as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
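The linear Taylor-series step described above amounts to perturbing the generalized matrices about the initial design. The sketch below shows that idea for a toy 2-DOF stiffness matrix with assumed derivative matrices and parameter values; it is my own illustration, not the report's code.

```python
# First-order Taylor-series reanalysis of a stiffness matrix about an initial design.
import numpy as np

def reanalyze_stiffness(K0, dK_dp, p0, p_new):
    """Approximate K(p_new) ~ K0 + sum_i dK/dp_i * (p_new_i - p0_i)."""
    K = K0.copy()
    for dK, p_old_i, p_new_i in zip(dK_dp, p0, p_new):
        K += dK * (p_new_i - p_old_i)
    return K

# Toy 2-DOF example with two design parameters (assumed values).
K0 = np.array([[4.0, -2.0], [-2.0, 4.0]])
dK_dp = [np.array([[1.0, 0.0], [0.0, 0.0]]), np.array([[0.0, 0.0], [0.0, 1.0]])]
K_approx = reanalyze_stiffness(K0, dK_dp, p0=[1.0, 1.0], p_new=[1.2, 0.9])
print(K_approx)
```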
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis.
Sakhanenko, Nikita A; Kunert-Graf, James; Galas, David J
2017-12-01
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present here a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis: that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. We illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
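As a very small illustration of "information contained in a discrete function" (my own example, not the authors' classification method), one can enumerate a two-input discrete function and compute the Shannon entropy of its output; the alphabet and function below are assumptions.

```python
# Entropy of the output of a small discrete function of two ternary variables.
import numpy as np
from collections import Counter

def entropy(values):
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

alphabet = [0, 1, 2]
inputs = [(a, b) for a in alphabet for b in alphabet]
outputs = [(a + b) % 3 for a, b in inputs]   # a modular-sum ("XOR-like") toy function

# For a deterministic function over uniform inputs this equals the output entropy.
print(entropy(outputs))
```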
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Analysis of edge impact stresses in composite plates
NASA Technical Reports Server (NTRS)
Moon, F. C.; Kang, C. K.
1974-01-01
The in-plane edge impact of composite plates, with or without a protection strip, is investigated. A computational analysis based on the Fast Fourier Transform technique is presented. The particular application of the present method is in the understanding of the foreign object damage problem of composite fan blades. The method is completely general and may be applied to the study of other stress wave propagation problems in a half space. Results indicate that for the protective strip to be effective in reducing impact stresses in the composite, the thickness must be equal to or greater than the impact contact dimension. Large interface shear stresses at the strip-composite boundary can be induced under impact.
High frequency vibration analysis by the complex envelope vectorization.
Giannini, O; Carcaterra, A; Sestieri, A
2007-06-01
The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so that a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, highlighting the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.
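The core transformation named above, mapping a rapidly oscillating response to a slowly varying envelope, has a familiar signal-processing analogue: demodulating the analytic signal at a carrier frequency. The sketch below is that analogue only (sample rate, carrier, and signal are assumed), not the CEDA/CEV formulation itself.

```python
# Complex-envelope analogue: demodulate the analytic signal at the carrier frequency.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000.0, 400.0                   # sample rate and carrier (assumed)
t = np.arange(0, 0.5, 1 / fs)
x = (1 + 0.3 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * f0 * t)

envelope = hilbert(x) * np.exp(-2j * np.pi * f0 * t)   # slowly varying complex envelope
print(np.abs(envelope)[:5])
```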
Approximation of the Newton Step by a Defect Correction Process
NASA Technical Reports Server (NTRS)
Arian, E.; Batterman, A.; Sachs, E. W.
1999-01-01
In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can also be used for preconditioning in the Generalized Minimum Residual (GMRES) method. We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.
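Using an operator as a GMRES preconditioner, as mentioned above, can be sketched with SciPy. The system below is an assumed 1-D Laplacian and the preconditioner is a simple Jacobi-style stand-in, not the paper's modifying operator.

```python
# Preconditioned GMRES sketch with a LinearOperator standing in for the preconditioner.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

# Jacobi (diagonal) preconditioner as an operator M ~ A^{-1}.
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```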
Wood, Phillip Karl; Jackson, Kristina M
2013-08-01
Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating "protective" or "launch" factors or as "developmental snares." These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of "general deviance" over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the "general deviance" model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of "general deviance" can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the "snares" alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control.
WOOD, PHILLIP KARL; JACKSON, KRISTINA M.
2014-01-01
Researchers studying longitudinal relationships among multiple problem behaviors sometimes characterize autoregressive relationships across constructs as indicating “protective” or “launch” factors or as “developmental snares.” These terms are used to indicate that initial or intermediary states of one problem behavior subsequently inhibit or promote some other problem behavior. Such models are contrasted with models of “general deviance” over time in which all problem behaviors are viewed as indicators of a common linear trajectory. When fit of the “general deviance” model is poor and fit of one or more autoregressive models is good, this is taken as support for the inhibitory or enhancing effect of one construct on another. In this paper, we argue that researchers consider competing models of growth before comparing deviance and time-bound models. Specifically, we propose use of the free curve slope intercept (FCSI) growth model (Meredith & Tisak, 1990) as a general model to typify change in a construct over time. The FCSI model includes, as nested special cases, several statistical models often used for prospective data, such as linear slope intercept models, repeated measures multivariate analysis of variance, various one-factor models, and hierarchical linear models. When considering models involving multiple constructs, we argue the construct of “general deviance” can be expressed as a single-trait multimethod model, permitting a characterization of the deviance construct over time without requiring restrictive assumptions about the form of growth over time. As an example, prospective assessments of problem behaviors from the Dunedin Multidisciplinary Health and Development Study (Silva & Stanton, 1996) are considered and contrasted with earlier analyses of Hussong, Curran, Moffitt, and Caspi (2008), which supported launch and snare hypotheses. For antisocial behavior, the FCSI model fit better than other models, including the linear chronometric growth curve model used by Hussong et al. For models including multiple constructs, a general deviance model involving a single trait and multimethod factors (or a corresponding hierarchical factor model) fit the data better than either the “snares” alternatives or the general deviance model previously considered by Hussong et al. Taken together, the analyses support the view that linkages and turning points cannot be contrasted with general deviance models absent additional experimental intervention or control. PMID:23880389
Stability analysis of multiple-robot control systems
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz, Kenneth
1989-01-01
In a space telerobotic service scenario, cooperative motion and force control of multiple robot arms are of fundamental importance. Three paradigms to study this problem are proposed. They are distinguished by the set of variables used for control design: joint torques, arm tip force vectors, and an accelerated generalized coordinate set. Control issues related to each case are discussed. The latter two choices require complete model information, which presents practical modeling, computational, and robustness problems. Therefore, focus is on the joint torque control case to develop relatively model-independent motion and internal force control laws. The rigid body assumption allows the motion and force control problems to be independently addressed. By using an energy-motivated Lyapunov function, a simple proportional-derivative plus gravity compensation type of motion control law is shown to always be stabilizing. The asymptotic convergence of the tracking error to zero requires the use of a generalized coordinate with the contact constraints taken into account. If a non-generalized coordinate is used, only convergence to a steady state manifold can be concluded. For the force control, both feedforward and feedback schemes are analyzed. The feedback control, if proper care has been taken, exhibits better robustness and transient performance.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
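The scatter-difference idea above, maximizing w^T(S_b - c S_w)w instead of a Rayleigh quotient so that no matrix inversion is needed, can be sketched directly with an eigen-decomposition. The balance parameter c, the synthetic data, and the choice of two discriminant vectors below are assumptions for illustration, not the paper's settings.

```python
# Scatter-difference discriminant sketch: top eigenvectors of S_b - c * S_w.
import numpy as np

def scatter_matrices(X, y):
    mean_all = X.mean(axis=0)
    S_b = np.zeros((X.shape[1], X.shape[1]))
    S_w = np.zeros_like(S_b)
    for label in np.unique(y):
        Xc = X[y == label]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        S_w += (Xc - mc).T @ (Xc - mc)
    return S_b, S_w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)

S_b, S_w = scatter_matrices(X, y)
c = 1.0                                  # balance parameter (assumed)
eigvals, eigvecs = np.linalg.eigh(S_b - c * S_w)
W = eigvecs[:, ::-1][:, :2]              # discriminant vectors: leading eigenvectors
print(W.shape)
```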
[A preliminary analysis of the high birth rate in India].
Shao, N
1981-01-01
The author first provides some basic demographic data for India and points out that the current annual rate of population growth of 2.45 percent is slightly higher than the annual increase in food production. Problems in the areas of employment, education, housing, and transportation, as well as the general problem of poverty, are seen as a consequence of this imbalance. The lack of success of the national family planning program is attributed primarily to the failure to achieve a satisfactory rate of economic growth. Contributory factors include early marriage, the low status of women, the desire for large families, and administrative problems associated with the family planning program.
Incremental analysis of large elastic deformation of a rotating cylinder
NASA Technical Reports Server (NTRS)
Buchanan, G. R.
1976-01-01
The effect of finite deformation upon a rotating, orthotropic cylinder was investigated using a general incremental theory. The incremental equations of motion are developed using the variational principle. The governing equations are derived using the principle of virtual work for a body with initial stress. The governing equations are reduced to those for the title problem and a numerical solution is obtained using finite difference approximations. Since the problem is defined in terms of one independent space coordinate, the finite difference grid can be modified as the incremental deformation occurs without serious numerical difficulties. The nonlinear problem is solved incrementally by totaling a series of linear solutions.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size (combinatorial explosion in experimentation and model building with the number of variables), and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…
Stability of flow of a thermoviscoelastic fluid between rotating coaxial circular cylinders
NASA Technical Reports Server (NTRS)
Ghandour, N. N.; Narasimhan, M. N. L.
1976-01-01
The stability problem of thermoviscoelastic fluid flow between rotating coaxial cylinders is investigated using nonlinear thermoviscoelastic constitutive equations due to Eringen and Koh. The velocity field is found to be identical with that of the classical viscous case and the case of the viscoelastic fluid, but the temperature and pressure fields are found to be different. By imposing some physically reasonable mechanical and geometrical restrictions on the flow, and by a suitable mathematical analysis, the problem is reduced to a characteristic value problem. The resulting problem is solved and stability criteria are obtained in terms of critical Taylor numbers. In general, it is found that thermoviscoelastic fluids are more stable than classical viscous fluids and viscoinelastic fluids under similar conditions.
The control data "GIRAFFE" system for interactive graphic finite element analysis
NASA Technical Reports Server (NTRS)
Park, S.; Brandon, D. M., Jr.
1975-01-01
The Graphical Interface for Finite Elements (GIRAFFE) general purpose interactive graphics application package was described. This system may be used as a pre/post processor for structural analysis computer programs. It facilitates the operations of creating, editing, or reviewing all the structural input/output data on a graphics terminal in a time-sharing mode of operation. An application program for a simple three-dimensional plate problem was illustrated.
Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces
2011-02-28
Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical
Important factors in the maximum likelihood analysis of flight test maneuvers
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.; Montgomery, T. D.
1979-01-01
The information presented is based on the experience in the past 12 years at the NASA Dryden Flight Research Center of estimating stability and control derivatives from over 3500 maneuvers from 32 aircraft. The overall approach to the analysis of dynamic flight test data is outlined. General requirements for data and instrumentation are discussed and several examples of the types of problems that may be encountered are presented.
Importance-performance analysis as a guide for hospitals in improving their provision of services.
Whynes, D K; Reed, G
1995-11-01
As a result of the 1990 National Health Services Act, hospitals now compete with one another to win service contracts. A high level of service quality represents an important ingredient of a successful competitive strategy, yet, in general, hospitals have little external information on which to base quality decisions. Specifically, in their efforts to win contracts from fundholding general practitioners, hospitals require information on that which these purchasers deem important with respect to quality, and on how these purchasers assess the quality of their current service performance. The problem is complicated by the fact that hospital service quality, in itself, is multi-dimensional. In other areas of economic activity, the information problem has been resolved by importance-performance analysis and this paper reports the findings of such an analysis conducted for hospitals in the Trent region. The importance and performance service quality ratings of fundholders were obtained from a questionnaire survey and used in a particular variant of importance-performance analysis, which possesses certain advantages over more conventional approaches. In addition to providing empirical data on the determinants of service quality, as perceived by the purchasers of hospital services, this paper demonstrates how such information can be successfully employed in a quality enhancement strategy.
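The conventional importance-performance reading that the paper's variant improves upon places each attribute in one of four quadrants relative to the mean importance and performance scores. The sketch below shows that baseline classification only; the attribute names and scores are invented, and the paper's modified variant is not reproduced.

```python
# Baseline importance-performance quadrant classification (illustrative data).
import numpy as np

attributes = ["waiting time", "communication", "clinical outcome", "parking"]
importance = np.array([4.5, 4.2, 4.8, 2.5])    # mean purchaser ratings (1-5)
performance = np.array([3.0, 4.4, 4.6, 2.0])

imp_cut, perf_cut = importance.mean(), performance.mean()
for name, imp, perf in zip(attributes, importance, performance):
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "concentrate here"
    elif imp >= imp_cut:
        quadrant = "keep up the good work"
    elif perf >= perf_cut:
        quadrant = "possible overkill"
    else:
        quadrant = "low priority"
    print(f"{name}: {quadrant}")
```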
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahlgren, Thomas; Nilsson, Sten; Lennernaes, Bo
2007-11-01
Purpose: To explore the long-term general and disease-specific health-related quality of life (HRQOL) >5 years after combined radiotherapy for localized prostate cancer, including a high-dose-rate brachytherapy boost and hormonal deprivation therapy. Methods and Materials: Of 196 eligible patients with localized prostate cancer (Stage T1-T3a) consecutively treated with curative radiotherapy at our institution between June 1998 and August 2000, 182 (93%) completed the European Organization for Research and Treatment of Cancer Quality of Life questionnaires QLQ-C30 and QLQ-PR25, including specific questions on fecal incontinence >5 years after treatment in September 2005. A comparison with age-matched normative data was done, as well as a longitudinal analysis using HRQOL data from a previous study. Results: The analysis included 158 nonrecurrent patients. Comparisons made with normative data showed that physical and role functioning were significantly better statistically and social functioning was significantly worse. Diarrhea and sleep disturbances were more pronounced and pain less pronounced than in a normal male population. The longitudinal analysis of disease-specific HRQOL showed that urinary urgency and erectile problems persisted 5 years after treatment, and nocturia and hormonally dependent symptoms had declined significantly, with a statistically significant difference. Fecal incontinence was recognized by 25% of patients, of whom 80% considered it a minor problem. Conclusion: More than 5 years after combined radiotherapy, irritative urinary problems and erectile dysfunction remain concerns, although severe bowel disturbance and fecal incontinence seem to be minor problems. Longitudinally, a decline mainly in hormonally dependent symptoms was seen. Minor differences in general HRQOL compared with normative data were observed, possibly including 'response shift' effects.
NASA Astrophysics Data System (ADS)
Ghiaei, Farhad; Kankal, Murat; Anilan, Tugce; Yuksek, Omer
2018-01-01
The analysis of rainfall frequency is an important step in hydrology and water resources engineering. However, a lack of measuring stations, short duration of statistical periods, and unreliable outliers are among the most important problems when designing hydrology projects. In this study, regional rainfall analysis based on L-moments was used to overcome these problems in the Eastern Black Sea Basin (EBSB) of Turkey. The L-moments technique was applied at all stages of the regional analysis, including determining homogeneous regions, in addition to fitting and estimating parameters from appropriate distribution functions in each homogeneous region. We studied annual maximum rainfall height values of various durations (5 min to 24 h) from seven rain gauge stations located in the EBSB in Turkey, which have gauging periods of 39 to 70 years. Homogeneity of the region was evaluated by using L-moments. The goodness-of-fit criterion for each distribution was defined as the ZDIST statistic, computed for several candidate distributions, including generalized logistic (GLO), generalized extreme value (GEV), generalized normal (GNO), Pearson type 3 (PE3), and generalized Pareto (GPA). GLO and GEV were found to be the best distributions for short (5 to 30 min) and long (1 to 24 h) period data, respectively. Based on these distribution functions, governing equations were extracted for calculating intensities for return periods (T) of 2, 5, 25, 50, 100, 250, and 500 years. Subsequently, the T values for different rainfall intensities were estimated using data quantifying the maximum amount of rainfall at different times. These T values, together with duration, altitude, latitude, and longitude, were used as independent variables in a regression model of the data. The determination coefficient (R²) indicated that the model yields suitable results for the regional intensity-duration-frequency (IDF) relationship, which is necessary for the design of hydraulic structures in small and medium-sized catchments.
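The L-moment quantities used throughout the regional analysis above are computed from probability-weighted moments of the ordered sample. The sketch below is my own minimal illustration of that estimation step on synthetic annual maxima; it is not the paper's code and the Gumbel parameters are assumed.

```python
# Sample L-moments (mean, L-scale, L-skewness) from probability-weighted moments.
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2   # mean, L-scale, L-skewness (t3)

rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=30.0, scale=8.0, size=50)  # synthetic annual maxima (mm)
print(sample_l_moments(annual_max))
```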
Extreme value analysis in biometrics.
Hüsler, Jürg
2009-04-01
We review some approaches to extreme value analysis in the context of biometrical applications. The classical extreme value analysis is based on iid random variables. Two different general methods are applied, which will be discussed together with biometrical examples. Different estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered where the data are possibly dependent, where a non-stationary behavior is observed in the data, or where the observations are not univariate. A few open problems are also stated.
Understanding Adult Age Differences in the Frequency of Problems With Friends.
Schlosnagle, Leo; Strough, JoNell
2017-01-01
We investigated characteristics of younger and older adults' friendships. Younger (N = 39) and older (N = 39) adults completed measures pertaining to a specific friend they had (i.e., contact frequency, positive friendship quality, and negative friendship quality) and their frequency of problems with friends in general. Older adults reported fewer problems with friends in general, and fewer negative friendship qualities, less frequent contact, and more positive friendship qualities with a specific friend than younger adults. Contact frequency, positive friendship quality, and negative friendship quality with a specific friend were related to frequency of problems with friends in general, but only contact frequency was a significant mediator of the relation between age and frequency of problems with friends in general. Results show that characteristics of a specific friendship relate to problems with friends in general, and that contact frequency with a specific friend mediates the relation between age and problems with friends in general. Implications are discussed. © The Author(s) 2016.
The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis
A Rashid, Ahmad Safuan; Ali, Nazri
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for more contribution of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and PLAXI-3D finite element software and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652
The contribution of particle swarm optimization to three-dimensional slope stability analysis.
Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen
2014-01-01
Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for more contribution of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and PLAXI-3D finite element software and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.
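For readers unfamiliar with PSO itself, the generic update rule the two abstracts above refer to is sketched below. This is an illustrative, general-purpose PSO with a toy objective standing in for a factor-of-safety evaluation; the rotating-ellipsoid particle encoding used in the papers is not modeled.

```python
# Generic particle swarm optimization sketch (inertia w, cognitive c1, social c2 assumed).
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Toy stand-in for a factor-of-safety objective (assumed, not a slope model).
best, val = pso(lambda p: np.sum((p - 0.3) ** 2), (np.zeros(4), np.ones(4)))
print(best, val)
```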
Proceedings of the 6. international conference on stability and handling of liquid fuels. Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giles, H.N.
Volume 2 of these proceedings contains 42 papers arranged under the following topical sections: Fuel blending and compatibility; Middle distillates; Microbiology; Alternative fuels; General topics (analytical methods, tank remediation, fuel additives, storage stability); and Poster presentations (analysis methods, oxidation kinetics, health problems).
Guidance and Counseling Program K-12. Report of Evaluation.
ERIC Educational Resources Information Center
Roberts, Joan; Weslander, Darrell L.
This evaluation of the Des Moines school guidance program provides an introductory history of guidance and counseling services in the district. General results of the evaluation focus on changing counselor roles, and problems caused by lack of specific program guidelines. Counselor profiles and interviews are presented, with an analysis of…
Nonisothermal Analysis of Solution Kinetics by Spreadsheet Simulation
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
A fast and generally applicable alternative solution to the problem of determining the useful shelf life of medicinal solutions is described. It illustrates the power and convenience of the combination of numerical simulation and nonlinear least squares with a practical pharmaceutical application of chemical kinetics and thermodynamics, validated…
Sixteenth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1988-01-01
These are the proceedings of the Sixteenth NASTRAN Users' Colloquium held in Arlington, Virginia from 25 to 29 April, 1988. Technical papers contributed by participants review general application of finite element methodology and the specific application of the NASA Structural Analysis System (NASTRAN) to a variety of static and dynamic structural problems.
A Simultaneous Analysis Problem for Advanced General Chemistry Laboratories.
ERIC Educational Resources Information Center
Leary, J. J.; Gallaher, T. N.
1983-01-01
Oxidation of magnesium metal in air has been used as an introductory experiment for determining the formula of a compound. The experiment described employs essentially the same laboratory procedure but is significantly more advanced in terms of information sought. Procedures and sample calculations/results are provided. (JN)
Codeswitching and Stance: Issues in Interpretation
ERIC Educational Resources Information Center
Jaffe, Alexandra
2007-01-01
This article explores the long-standing problem of ascribing meaning to individual acts of codeswitching. Drawing on ethnographic data from bilingual classrooms in Corsica, I situate the analysis of codeswitching within the more general question of the interpretation of speaker stance, which is defined as speakers' positioning with regard to both…
Administrator Preparation: Looking Backwards and Forwards
ERIC Educational Resources Information Center
Bridges, Edwin
2012-01-01
Purpose: The purpose of this paper was to conduct a critical analysis of the origins and implementation of problem-based learning in educational administration as a window into the limitations of this approach and more generally administrator preparation. Design/methodology/approach: The author reviewed the published work of the originator from…
Trends in Children's Video Game Play: Practical but Not Creative Thinking
ERIC Educational Resources Information Center
Hamlen, Karla R.
2013-01-01
Prior research has found common trends among children's video game play as related to gender, age, interests, creativity, and other descriptors. This study re-examined the previously reported trends by utilizing principal components analysis with variables such as creativity, general characteristics, and problem-solving methods to determine…
Training Evaluation: An Analysis of the Stakeholders' Evaluation Needs
ERIC Educational Resources Information Center
Guerci, Marco; Vinante, Marco
2011-01-01
Purpose: In recent years, the literature on program evaluation has examined multi-stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap. Design/methodology/approach: This study identifies intersections between methodologies and approaches…
The Pursuit of Understanding in Clinical Reasoning.
ERIC Educational Resources Information Center
Feltovich, Paul J.; Patel, Vimla L.
Trends in emphases in the study of clinical reasoning are examined, with attention to three major branches of research: problem-solving, knowledge engineering, and propositional analysis. There has been a general progression from a focus on the generic form of clinical reasoning to an emphasis on medical content that supports the reasoning…
Automated Loads Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
Gardner, Stephen; Frere, Scot; O’Reilly, Patrick
2013-01-01
ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.
Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions
NASA Astrophysics Data System (ADS)
Li, Xiaofei; Lee, Hyundae; Wang, Yuliang
2017-08-01
This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.
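For orientation, the classical leading-order result for the three-dimensional narrow escape problem (not this paper's refined expansion) states that for a domain of volume $|\Omega|$, diffusivity $D$, and a small circular absorbing window of radius $a$ on an otherwise reflecting boundary, the mean first passage time behaves as

$$\bar{\tau} \;\sim\; \frac{|\Omega|}{4\,D\,a}, \qquad a \to 0,$$

up to geometry-dependent higher-order corrections; the paper's contribution is a rigorous expansion of those correction terms for the head-and-neck (dendritic spine) geometry, which is not reproduced here.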
Development of a questionnaire to evaluate coping strategies for skin problems.
Hernández-Fernaud, Estefanía; Hernández, Bernardo; Ruiz, Cristina; Ruiz, Antonia
2009-05-01
The aim of this work was to develop an instrument (Coping Strategies for Skin Problems Questionnaire) suitable for identifying the coping strategies people use for general skin problems. We analyzed its psychometric properties when applied to a sample of 299 individuals. Factor analysis shows a 6-factor structure referring to the wish to change, problem-solving strategies, the search for information and asking for social support, the ability to distance oneself from the problem and to see the positive aspects of the situation. These factors explain 60.77% of the variance and show an internal consistency higher than 0.67. We analyse the validity of the questionnaire and identify different coping profiles depending on the degree of skin damage as assessed by the participants and their search for health services. According to the psychometric properties obtained, we conclude that our instrument is valid and reliable for use with people presenting skin problems.
Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Metcalfe, Arron W.S.; Swigart, Anna G.; Menon, Vinod
2014-01-01
The study of developmental disorders can provide a unique window into the role of domain-general cognitive abilities and neural systems in typical and atypical development. Mathematical disabilities (MD) are characterized by marked difficulty in mathematical cognition in the presence of preserved intelligence and verbal ability. Although studies of MD have most often focused on the role of core deficits in numerical processing, domain-general cognitive abilities, in particular working memory (WM), have also been implicated. Here we identify specific WM components that are impaired in children with MD and then examine their role in arithmetic problem solving. Compared to typically developing (TD) children, the MD group demonstrated lower arithmetic performance and lower visuo-spatial working memory (VSWM) scores with preserved abilities on the phonological and central executive components of WM. Whole brain analysis revealed that, during arithmetic problem solving, left posterior parietal cortex, bilateral dorsolateral and ventrolateral prefrontal cortex, cingulate gyrus and precuneus, and fusiform gyrus responses were positively correlated with VSWM ability in TD children, but not in the MD group. Additional analyses using a priori posterior parietal cortex regions previously implicated in WM tasks, demonstrated a convergent pattern of results during arithmetic problem solving. These results suggest that MD is characterized by a common locus of arithmetic and VSWM deficits at both the cognitive and functional neuroanatomical levels. Unlike TD children, children with MD do not use VSWM resources appropriately during arithmetic problem solving. This work advances our understanding of VSWM as an important domain-general cognitive process in both typical and atypical mathematical skill development. PMID:23896444
Coomer, R A
2013-07-01
The aim of this qualitative study was to describe the problems that parents or caregivers of children with mental health disabilities and disorders in Namibia experience when accessing healthcare resources for their children. Data was collected through focus group discussions with the participants and individual interviews with the key informants. Overall, a total of 41 people provided information for this study. Thematic data analysis was used to assess the data. The main barriers experienced by the parents were poor service provision, transport and money, whilst access to education services facilitated access to healthcare services. The challenges go beyond commonly-reported problems such as sub-optimal service provision and include the basic challenge of lack of transportation to reach healthcare services. Many of the barriers identified in this study have been related to general problems with the healthcare system in Namibia. Therefore there is a need to address general concerns about healthcare provision as well as improve specific services for children with mental health disabilities and disorders in Namibia.
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
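As a rough illustration of the idea, the sketch below builds a neighborhood graph, forms Laplacian-style scatter matrices, and replaces them with their matrix exponentials before solving the embedding eigenproblem, in the spirit of an exponential variant of locality preserving projections. The eigenproblem form, the scale parameter `t`, and the use of scikit-learn's `kneighbors_graph` are assumptions made for this sketch and are not claimed to match the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import expm, eigh
from sklearn.neighbors import kneighbors_graph

def exponential_lpp(X, n_components=2, n_neighbors=5, t=0.01):
    """Sketch of an exponential-embedding variant of locality preserving projections.
    X: (n_samples, n_features); returns a projection matrix (n_features, n_components)."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()            # symmetrized neighborhood graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                # graph Laplacian
    SL = X.T @ L @ X                         # locality scatter matrix
    SD = X.T @ D @ X                         # weighting scatter matrix
    # Matrix exponentials are symmetric positive definite, which avoids the
    # small-sample-size singularity that the ordinary LPP eigenproblem can hit.
    A, B = expm(t * SL), expm(t * SD)
    A, B = 0.5 * (A + A.T), 0.5 * (B + B.T)  # guard against round-off asymmetry
    evals, evecs = eigh(A, B)
    return evecs[:, :n_components]           # directions with the smallest eigenvalues

X = np.random.default_rng(0).normal(size=(40, 10))
Y = X @ exponential_lpp(X)                   # low-dimensional embedding of the samples
```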
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.
1991-01-01
A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.
2015-01-01
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also become increasingly popular in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
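One concrete way to carry out MI-based ROC analysis is sketched below under stated assumptions: scikit-learn's `IterativeImputer` as the imputation engine, the Hanley-McNeil approximation for the within-imputation variance of the AUC, and Rubin's rules for pooling. It illustrates the general workflow only, not the specific parametric and non-parametric procedures studied in the paper.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.metrics import roc_auc_score

def hanley_mcneil_var(auc, n_pos, n_neg):
    """Approximate sampling variance of an AUC estimate (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    return (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
            + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)

def mi_auc(X_missing, y, m=10, seed=0):
    """Impute the biomarker matrix m times, compute the AUC of the first column in
    each completed dataset, and pool the estimates with Rubin's rules."""
    aucs, variances = [], []
    n_pos, n_neg = int(y.sum()), int((1 - y).sum())
    for k in range(m):
        imp = IterativeImputer(sample_posterior=True, random_state=seed + k)
        X_comp = imp.fit_transform(X_missing)
        auc = roc_auc_score(y, X_comp[:, 0])
        aucs.append(auc)
        variances.append(hanley_mcneil_var(auc, n_pos, n_neg))
    auc_bar = np.mean(aucs)
    total_var = np.mean(variances) + (1 + 1 / m) * np.var(aucs, ddof=1)  # Rubin's rules
    return auc_bar, np.sqrt(total_var)

rng = np.random.default_rng(1)
y = (rng.random(200) < 0.4).astype(int)
X = rng.normal(size=(200, 3)) + y[:, None]   # column 0 plays the role of the biomarker
X[rng.random(X.shape) < 0.2] = np.nan        # ~20% of the values missing at random
print(mi_auc(X, y))
```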
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok
2012-01-01
This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.
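As a toy analogue of the benchmark comparisons mentioned above, the sketch below solves fully developed plane Poiseuille flow with a simple one-dimensional finite-volume discretization and checks it against the parabolic analytical profile. It does not reproduce GFSSP's node-and-branch network formulation; the grid size, viscosity and pressure gradient are arbitrary illustrative values.

```python
import numpy as np

def poiseuille_fv(n_cells=40, H=1.0, mu=1.0, dpdx=-1.0):
    """Finite-volume solution of mu * d2u/dy2 = dpdx with no-slip walls at y=0 and y=H."""
    dy = H / n_cells
    y = (np.arange(n_cells) + 0.5) * dy              # cell-centre coordinates
    A = np.zeros((n_cells, n_cells))
    b = np.full(n_cells, dpdx * dy / mu)             # source integrated over each cell
    for i in range(n_cells):
        aw = 1.0 / dy if i > 0 else 2.0 / dy         # half-cell spacing to the wall
        ae = 1.0 / dy if i < n_cells - 1 else 2.0 / dy
        A[i, i] = -(aw + ae)
        if i > 0:
            A[i, i - 1] = aw
        if i < n_cells - 1:
            A[i, i + 1] = ae
    u = np.linalg.solve(A, b)
    u_exact = (-dpdx) / (2 * mu) * y * (H - y)       # parabolic analytical profile
    return y, u, u_exact

y, u, u_exact = poiseuille_fv()
print(float(np.max(np.abs(u - u_exact))))            # small discretization error
```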
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost applying different kinds of model regularization or re-selecting/weighting the inverted dataset without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows users to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).
Multicriteria analysis of ontologically represented information
NASA Astrophysics Data System (ADS)
Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Bǎdicǎ, C.; Ivanovic, M.; Lirkov, I.
2014-11-01
Our current work concerns the development of a decision support system for the software selection problem. The main idea is to utilize expert knowledge to help the user in selecting the best software / method / computational resource to solve a computational problem. Obviously, this involves multicriterial decision making, and the key open question is which method to choose. The context of the work is provided by the Agents in Grid (AiG) project, where the software selection (and thus the multicriterial analysis) is to be realized when all information concerning the problem, the hardware and the software is ontologically represented. Initially, we considered the Analytical Hierarchy Process (AHP), which is well suited for hierarchical data structures (e.g., those formulated in terms of ontologies). However, due to its well-known shortcomings, we decided to extend our search for the multicriterial analysis method best suited for the problem in question. In this paper we report the results of our search, which involved: (i) TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), (ii) PROMETHEE, and (iii) GRIP (Generalized Regression with Intensities of Preference). We also briefly argue why other methods have not been considered as valuable candidates.
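Of the candidate methods listed, TOPSIS is the most compact to state; the sketch below implements its standard steps (vector normalization, weighting, distances to the ideal and anti-ideal alternatives, closeness coefficient). The decision matrix, weights and benefit/cost flags in the example are hypothetical and do not come from the AiG project.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (n_alternatives, n_criteria) scores; weights sum to 1;
    benefit[j] is True for criteria to maximize, False for those to minimize."""
    M = np.asarray(matrix, dtype=float)
    norm = M / np.linalg.norm(M, axis=0)              # vector-normalize each criterion
    V = norm * np.asarray(weights)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness: 1 = ideal alternative

# Hypothetical example: three solvers rated on cost (minimize), speed and accuracy (maximize).
scores = [[10.0, 8.0, 0.90],
          [ 6.0, 5.0, 0.80],
          [ 8.0, 9.0, 0.85]]
print(topsis(scores, weights=[0.3, 0.4, 0.3], benefit=[False, True, True]))
```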
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
1993-04-01
An overview of the probabilistic finite element method (PFEM) developed by the authors and their colleagues in recent years is presented. The primary focus is placed on the development of PFEM for both structural mechanics problems and fracture mechanics problems. The perturbation techniques are used as major tools for the analytical derivation. The following topics are covered: (1) representation and discretization of random fields; (2) development of PFEM for the general linear transient problem and nonlinear elasticity using Hu-Washizu variational principle; (3) computational aspects; (4) discussions of the application of PFEM to the reliability analysis of both brittle fracture and fatigue; and (5) a stochastic computational tool based on stochastic boundary element (SBEM). Results are obtained for the reliability index and corresponding probability of failure for: (1) fatigue crack growth; (2) defect geometry; (3) fatigue parameters; and (4) applied loads. These results show that initial defect is a critical parameter.
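As a minimal illustration of the reliability-index step that such an analysis feeds into, the sketch below computes a first-order second-moment reliability index for a linear limit state g = R - S with independent, normally distributed resistance and load effect. This is a deliberately simplified stand-in with hypothetical numbers, not the PFEM perturbation machinery described in the abstract.

```python
import numpy as np
from scipy.stats import norm

def fosm_reliability(mu_R, sigma_R, mu_S, sigma_S):
    """First-order second-moment reliability index for the limit state g = R - S
    with independent, normally distributed resistance R and load effect S."""
    beta = (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)
    return beta, norm.cdf(-beta)                      # index and probability of failure

# Hypothetical fatigue-style example: allowable vs. computed stress range (MPa).
beta, pf = fosm_reliability(mu_R=220.0, sigma_R=25.0, mu_S=150.0, sigma_S=30.0)
print(beta, pf)
```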
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
NASA Technical Reports Server (NTRS)
Farassat, F.; Baty, R. S.
2000-01-01
The study of the shock structure in a viscous heat conducting fluid is an old problem. We study this problem from a novel mathematical point of view. A new class of generalized functions is defined where multiplication of any two functions is allowed with the usual properties. A Heaviside function in this class has its unit jump occurring on an infinitesimal interval of nonstandard analysis (NSA) in the halo of the jump point. This jump has a smooth microstructure over that infinitesimal interval. From this point of view, we have a new class of Heaviside functions, and their derivatives, the Dirac delta functions, which are equivalent when viewed as continuous linear functionals over the test function space of Schwartz. However, they differ in their microstructures, which in applications are determined from the physics of the problem, as shown in our presentation.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
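For reference, the linear programming relaxation that the reviewed upper bound is dual to is usually written over the local (pairwise) polytope; in generic notation (not Schlesinger et al.'s original symbols), with unary scores $\theta_i$ and pairwise scores $\theta_{ij}$:

$$\max_{\mu \ge 0}\; \sum_{i \in V}\sum_{x_i} \theta_i(x_i)\,\mu_i(x_i) \;+\; \sum_{(i,j)\in E}\sum_{x_i,x_j} \theta_{ij}(x_i,x_j)\,\mu_{ij}(x_i,x_j)$$

subject to $\sum_{x_j}\mu_{ij}(x_i,x_j)=\mu_i(x_i)$ for all $(i,j)\in E$ and all $x_i$ (and symmetrically in $x_j$), and $\sum_{x_i}\mu_i(x_i)=1$ for all $i\in V$. Integral vertices of this polytope correspond to labelings; fractional optima are where the relaxation can be loose.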
Ung, Elise Meyn; Erichsen, Cecilie Birkmose; Poulsen, Stig; Lau, Marianne Engelbrecht; Simonsen, Sebastian; Davidsen, Annika Helgadóttir
2017-01-01
Interpersonal problems are thought to play an essential role in the development and maintenance of eating disorders. The aim of the current study was to investigate whether a specific interpersonal profile could be identified in a group of patients diagnosed with Bulimia Nervosa, Binge Eating Disorder, or Eating Disorders Not Otherwise Specified, and to explore if specific types of interpersonal problems were systematically related to treatment outcome in this group of patients. The participants were 159 patients who received systemic/narrative outpatient group psychotherapy. Interpersonal problems were measured at baseline, and eating disorder symptoms were measured pre- and post treatment. Data were analysed with the Structural Summary Method, a particular method for the analysis of the Inventory of Interpersonal Problems, and hierarchical regression analysis was conducted. The patients demonstrated a generally Non-assertive and Friendly-submissive interpersonal style. No significant association between the overall level of interpersonal problems and treatment outcome was identified. However, the results showed a correlation between being cold and hostile and poor treatment outcome, while being domineering showed a trend approaching significance in predicting better treatment outcome. The results indicate that patients with eating disorders show a specific interpersonal profile, and suggest that particular types of interpersonal problems are associated with treatment outcome.
Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021
Transverse instability of periodic and generalized solitary waves for a fifth-order KP model
NASA Astrophysics Data System (ADS)
Haragus, Mariana; Wahlén, Erik
2017-02-01
We consider a fifth-order Kadomtsev-Petviashvili equation which arises as a two-dimensional model in the classical water-wave problem. This equation possesses a family of generalized line solitary waves which decay exponentially to periodic waves at infinity. We prove that these solitary waves are transversely spectrally unstable and that this instability is induced by the transverse instability of the periodic tails. We rely upon a detailed spectral analysis of some suitably chosen linear operators.
Uncertainties in predicting solar panel power output
NASA Technical Reports Server (NTRS)
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
The General Factor of Personality: A General Critique.
Revelle, William; Wilt, Joshua
2013-10-01
Recently, it has been proposed that all non-cognitive measures of personality share a general factor of personality. A problem with many of these studies is a lack of clarity in defining a general factor. In this paper we address the multiple ways in which a general factor has been identified and argue that many of these approaches find factors that are not in fact general. Through the use of artificial examples, we show that a general factor is not: (a) the first factor or component of a correlation or covariance matrix; (b) the first factor resulting from a bifactor rotation or biquartimin transformation; nor (c) necessarily the result of a confirmatory factor analysis forcing a bifactor solution. We consider how the definition of what constitutes a general factor can lead to confusion, and we will demonstrate alternative ways of estimating the general factor saturation that are more appropriate.
Huang, Jin; Vaughn, Michael G.
2016-01-01
This study examined the association between household food insecurity (insufficient access to adequate and nutritious food) and trajectories of externalising and internalising behaviour problems in children from kindergarten to fifth grade using longitudinal data from the Early Childhood Longitudinal Study—Kindergarten Cohort (ECLS-K), a nationally representative study in the USA. Household food insecurity was assessed using the eighteen-item standard food security scale, and children's behaviour problems were reported by teachers. Latent growth curve analysis was conducted on 7,348 children in the ECLS-K, separately for boys and girls. Following adjustment for an extensive array of confounding variables, results suggest that food insecurity generally was not associated with developmental change in children's behaviour problems. The impact of food insecurity on behaviour problems may be episodic or interact with certain developmental stages. PMID:27559210
Hellman, Matilda; Majamäki, Maija; Rolando, Sara; Bujalski, Michał; Lemmens, Paul
2015-03-01
Press items (N = 1327) about addiction-related problems were collected from politically independent daily newspapers in Finland, Italy, the Netherlands, and Poland from 1991, 1998, and 2011. A synchronized qualitative coding was performed to discern descriptions of the genesis of the problems in terms of their described causes and the reasons why they occur. Environmental explanations were by far the most common and they varied most between the materials. The analysis documents how the portrayals include traces of their contextual origin, relating to different media tasks and welfare cultural traditions. Meaning-based differences were also assigned to the kind of problems that held the most salience in the press reporting. A general worry over societal change is tied into the explanations of accumulating addiction problems and underpins the press reporting in all countries.
Teunissen, Erik; Sherally, Jamilah; van den Muijsenbergh, Maria; Dowrick, Chris; van Weel-Baumgarten, Evelyn; van Weel, Chris
2014-01-01
Objective: To explore health-seeking behaviour and experiences of undocumented migrants (UMs) in general practice in relation to mental health problems. Design: Qualitative study using semistructured interviews and thematic analysis. Participants: 15 UMs in the Netherlands, varying in age, gender, country of origin and education; inclusion until theoretical saturation was reached. Setting: 4 cities in the Netherlands. Results: UMs consider mental health problems to be directly related to their precarious living conditions. For support, they refer to friends and religion first; the general practitioner (GP) is their last resort. Barriers to seeking help include the taboo on mental health problems, lack of knowledge of and trust in GPs' competencies regarding mental health, and general barriers in accessing healthcare as an UM (lack of knowledge of the right to access healthcare, fear of prosecution, financial constraints and practical difficulties). Once access has been gained, satisfaction with care is high. This is primarily due to the attitude of the GPs and the effectiveness of the treatment. Reasons for dissatisfaction with GP care are an experienced lack of time, lack of personal attention and absence of physical examination. Expectations of the GP vary; medication for mental health problems is not necessarily seen as good practice. Conclusions: UMs often see their precarious living conditions as an important determinant of their mental health; they do not easily seek help for mental health problems and various barriers hamper access to healthcare for them. Rather than for medication, UMs are looking for encouragement and support from their GP. We recommend that barriers experienced in seeking professional care are tackled at an institutional level as well as at the level of the GP. PMID:25416057
Burger, Joanna; Myers, O; Boring, C S; Dixon, C; Lord, C; Ramos, R; Shukla, S; Gochfeld, Michael
2004-06-01
Perceptions about general environmental problems, governmental spending for these problems, and major concerns about the US Department of Energy's Los Alamos National Laboratory (LANL) were examined by interviewing 356 people attending a gun show in Albuquerque, New Mexico. The hypothesis that there are differences in these three areas as a function of ethnicity was examined. We predicted that if differences existed, they would exist for all three evaluations (general environmental problems, government spending, and environmental concerns about LANL). However, this was not the case; there were fewer ethnic differences concerning LANL. Hispanics rated most general environmental problems higher than Whites and rated their willingness to expend federal funds higher than Whites, although all groups gave a lower score on willingness than on concern. Further, the congruence between these two types of ratings was higher for Hispanics than for others. In general, the concerns expressed by subjects about LANL showed few ethnic differences, and everyone was most concerned about contamination. These data indicate that Hispanics attending a gun show are equally or more concerned than others about environmental problems generally but are not more concerned about LANL. The data can be useful for developing future research and stewardship plans and for understanding general environmental problems and their relationship to concerns about LANL. More generally, they indicate that the attitudes and perceptions of Hispanics deserve increased study in a general population.
Improving Fraud and Abuse Detection in General Physician Claims: A Data Mining Study
Joudaki, Hossein; Rashidian, Arash; Minaei-Bidgoli, Behrouz; Mahmoodi, Mahmood; Geraili, Bijan; Nasiri, Mahdi; Arab, Mohammad
2016-01-01
Background: We aimed to identify the indicators of healthcare fraud and abuse in general physicians’ drug prescription claims, and to identify a subset of general physicians that were more likely to have committed fraud and abuse. Methods: We applied data mining approach to a major health insurance organization dataset of private sector general physicians’ prescription claims. It involved 5 steps: clarifying the nature of the problem and objectives, data preparation, indicator identification and selection, cluster analysis to identify suspect physicians, and discriminant analysis to assess the validity of the clustering approach. Results: Thirteen indicators were developed in total. Over half of the general physicians (54%) were ‘suspects’ of conducting abusive behavior. The results also identified 2% of physicians as suspects of fraud. Discriminant analysis suggested that the indicators demonstrated adequate performance in the detection of physicians who were suspect of perpetrating fraud (98%) and abuse (85%) in a new sample of data. Conclusion: Our data mining approach will help health insurance organizations in low-and middle-income countries (LMICs) in streamlining auditing approaches towards the suspect groups rather than routine auditing of all physicians. PMID:26927587
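A minimal sketch of the cluster-then-validate pattern the abstract describes, using k-means to form suspect groups and linear discriminant analysis to check that the groups are separable on held-out folds. The thirteen indicator columns here are randomly generated placeholders; the real indicators and insurer data are not reproduced, and the choice of three clusters is illustrative.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
indicators = rng.normal(size=(300, 13))              # hypothetical physician-level indicators

X = StandardScaler().fit_transform(indicators)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # suspect groups

# Discriminant analysis as a check that the cluster assignments are recoverable.
lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, labels, cv=5).mean())
```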
Arheiam, Arheiam; Brown, Stephen L; Higham, Susan M; Albadri, Sondos; Harris, Rebecca V
2016-12-01
Diet diaries are recommended for dentists to monitor children's sugar consumption. Diaries provide multifaceted dietary information, but patients respond better to simpler advice. We explore how dentists integrate information from diet diaries to deliver usable advice to patients. As part of a questionnaire study of general dental practitioners (GDPs) in Northwest England, we asked dentists to specify the advice they would give a hypothetical patient based upon a diet diary case vignette. A sequential mixed method approach was used for data analysis: an initial inductive content analysis (ICA) to develop a coding system to capture the complexity of dietary assessment and delivered advice. Using these codes, a quantitative analysis was conducted to examine correspondences between identified dietary problems and advice given. From these correspondences, we inferred how dentists reduced problems to give simple advice. A total of 229 dentists' responses were analysed. ICA on 40 questionnaires identified two distinct approaches to developing diet advice: a summative approach (summary of issues into an all-encompassing message) and a selective approach (selection of a main message). In the quantitative analysis of all responses, raw frequencies indicated that dentists saw more problems than they advised on and provided highly specific advice on a restricted number of problems (e.g. not eating sugars before bedtime, 50.7%, or avoiding harmful items, 42.4%, rather than simply reducing the amount of sugar, 9.2%). Binary logistic regression models indicate that dentists provided specific advice that was tailored to the key problems that they identified. Dentists provided specific recommendations to address what they felt were key problems, whilst not intervening to address other problems that they may have felt less pressing. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
1994-01-01
General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.
Yang, Bei; Qin, Qi-Zhong; Han, Ling-Li; Lin, Jing; Chen, Yu
2018-02-01
To investigate the relieving effects of hot spring balneotherapy on mental stress, sleep disorders, general health problems, and women's health problems in sub-healthy people, we recruited 500 volunteers in sub-health in Chongqing, and 362 volunteers completed the project, including 223 in the intervention group and 139 in the control group. The intervention group underwent hot spring balneotherapy for 5 months, while the control group did not. Both groups completed a questionnaire (general data, mental stress, emotional status, sleep quality, general health problems, as well as some women's health problems) and a physical examination (height, weight, waist circumference, blood pressure, blood lipid, blood sugar) before and after the 5-month intervention. After the intervention, sleep disorders (difficulty in falling asleep (P = 0.017); dreaminess, nightmare suffering, and restless sleep (P = 0.013); easy awakening (P = 0.003); and difficulty in falling asleep again after awakening (P = 0.016)), mental stress (P = 0.031), and general health problems (head pain (P = 0.026), joint pain (P = 0.009), leg or foot cramps (P = 0.001), blurred vision (P = 0.009)) were relieved significantly in the intervention group, as compared with the control group. Other indicators (fatigue, eye tiredness, limb numbness, constipation, skin allergy) and women's health problems (breast distending pain, dysmenorrhea, irregular menstruation) were relieved significantly in the within-group comparison of the intervention group before and after the intervention (P < 0.05), but showed no statistically significant difference between the two groups (P > 0.05). All indicators (except bad mood, low mood, and worry or irritability) in the intervention group significantly improved, with effect sizes from 0.096 to 1.302. Multiple logistic regression analysis showed that the frequency, length, and location of balneotherapy in the intervention group were the factors influencing emotion, sleep, and health condition (P < 0.05). Relief of insomnia, fatigue, and leg or foot cramps was greater in the older age group than in the younger age group (P < 0.05). Physical examination found that waist circumferences in women of various ages under 55 years were significantly reduced in the intervention group (P < 0.05), while those in men did not significantly change (P > 0.05). Spa therapy (balneotherapy) relieves mental stress, sleep disorders, and general health problems, and reduces waist circumference in sub-healthy women.
Sensitivity of goodness-of-fit statistics to rainfall data rounding off
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Puliga, Michelangelo
An analysis based on the L-moments theory suggests of adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a big problem, not yet completely resolved, arises in the estimation of a left-censoring threshold able to assure a good fitting of rainfall data with the generalized Pareto distribution. In order to detect an optimal threshold, keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as it was proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounding off values among sample data, affecting the distribution of goodness-of-fit statistics and leading to significant departures from percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
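The mechanics of the fit-over-candidate-thresholds search can be sketched as follows: exceedances over each threshold are fitted with a generalized Pareto distribution and an Anderson-Darling statistic is computed from the fitted CDF. The synthetic data, the candidate thresholds, and the omission of the Choulakian-Stephens percentage points (and of any treatment of rounded-off values, which is the paper's actual concern) are all simplifications of this sketch.

```python
import numpy as np
from scipy.stats import genpareto

def anderson_darling(z):
    """Anderson-Darling statistic for probabilities z (data transformed by the fitted CDF)."""
    z = np.sort(np.clip(z, 1e-12, 1 - 1e-12))
    n = len(z)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1 - z[::-1])))

def gpd_over_thresholds(rain, thresholds):
    """Fit a generalized Pareto distribution to exceedances over each candidate
    threshold and report the Anderson-Darling goodness-of-fit statistic."""
    out = []
    for u in thresholds:
        exceed = rain[rain > u] - u
        c, loc, scale = genpareto.fit(exceed, floc=0.0)
        a2 = anderson_darling(genpareto.cdf(exceed, c, loc=0.0, scale=scale))
        out.append((u, len(exceed), c, scale, a2))
    return out

rain = genpareto.rvs(0.1, scale=8.0, size=2000, random_state=0)   # synthetic daily depths
for row in gpd_over_thresholds(rain, thresholds=[0.0, 5.0, 10.0, 20.0]):
    print(row)
```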
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted as the generalized quadratic knapsack problem (GQKP), includes as particular cases well-known problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
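A minimal sketch of continuous Hopfield dynamics driving a quadratic energy of the form E(v) = -1/2 vᵀTv - iᵀv toward a local minimum over v in [0, 1]^n. The weight matrix, bias and gain parameters in the example are hypothetical; the paper's specific GQKP energy function, constraint penalties and parameter-setting procedure are not reproduced.

```python
import numpy as np

def continuous_hopfield(T, i_bias, n_steps=2000, dt=0.01, tau=1.0, u0=0.02, seed=0):
    """Euler integration of continuous Hopfield dynamics that locally minimize
    E(v) = -0.5 * v.T @ T @ v - i_bias @ v  over v in [0, 1]^n."""
    rng = np.random.default_rng(seed)
    u = rng.normal(scale=0.01, size=len(i_bias))      # internal neuron states
    for _ in range(n_steps):
        v = 0.5 * (1.0 + np.tanh(u / u0))             # neuron outputs in (0, 1)
        u += dt * (-u / tau + T @ v + i_bias)
    return 0.5 * (1.0 + np.tanh(u / u0))

# Hypothetical 0-1 quadratic objective mapped onto T and i_bias; in a GQKP-style
# energy function the constraint penalties would be folded into these terms.
T = -np.array([[0.0, 2.0], [2.0, 0.0]])
i_bias = np.array([1.0, 1.0])
print(continuous_hopfield(T, i_bias))                 # converges near (1, 0) or (0, 1)
```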
Working memory, worry, and algebraic ability.
Trezise, Kelly; Reeve, Robert A
2014-05-01
Math anxiety (MA)-working memory (WM) relationships have typically been examined in the context of arithmetic problem solving, and little research has examined the relationship in other math domains (e.g., algebra). Moreover, researchers have tended to examine MA/worry separate from math problem solving activities and have used general WM tasks rather than domain-relevant WM measures. Furthermore, it seems to have been assumed that MA affects all areas of math. It is possible, however, that MA is restricted to particular math domains. To examine these issues, the current research assessed claims about the impact on algebraic problem solving of differences in WM and algebraic worry. A sample of 80 14-year-old female students completed algebraic worry, algebraic WM, algebraic problem solving, nonverbal IQ, and general math ability tasks. Latent profile analysis of worry and WM measures identified four performance profiles (subgroups) that differed in worry level and WM capacity. Consistent with expectations, subgroup membership was associated with algebraic problem solving performance: high WM/low worry>moderate WM/low worry=moderate WM/high worry>low WM/high worry. Findings are discussed in terms of the conceptual relationship between emotion and cognition in mathematics and implications for the MA-WM-performance relationship. Copyright © 2013 Elsevier Inc. All rights reserved.
Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method
NASA Astrophysics Data System (ADS)
Yang, Zailin; Wang, Yao; Hei, Baoping
2013-12-01
The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape functions. In order to investigate the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for the discretization tests of material parameters with two fictitious distributions. A numerical example is then solved to observe the differences in the results between these two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
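The consistent mass matrix referred to above can be illustrated for the simplest case: a two-node linear bar element with spatially varying density, integrated by Gauss quadrature. The density distributions below are hypothetical examples; the paper's particular fictitious distributions and its dynamic formulation are not reproduced.

```python
import numpy as np

def bar_consistent_mass(rho, L=1.0, area=1.0, n_gauss=4):
    """Consistent mass matrix of a 2-node linear bar with varying density rho(x):
    M_ij = integral over [0, L] of rho(x) * area * N_i(x) * N_j(x) dx,
    evaluated by Gauss-Legendre quadrature."""
    xi, w = np.polynomial.legendre.leggauss(n_gauss)   # points/weights on [-1, 1]
    x = 0.5 * L * (xi + 1.0)                           # map to [0, L]
    jac = 0.5 * L
    M = np.zeros((2, 2))
    for xk, wk in zip(x, w):
        N = np.array([1.0 - xk / L, xk / L])           # linear shape functions
        M += wk * jac * rho(xk) * area * np.outer(N, N)
    return M

# Uniform density recovers the textbook result (rho*A*L/6) * [[2, 1], [1, 2]].
print(bar_consistent_mass(lambda x: 1.0))
# A fictitious linearly varying density, standing in for one of the paper's test distributions.
print(bar_consistent_mass(lambda x: 1.0 + 2.0 * x))
```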
NASA Technical Reports Server (NTRS)
1998-01-01
In 1966, MacNeal-Schwendler Corporation (MSC) was awarded a contract by NASA to develop a general purpose structural analysis program dubbed NASTRAN (NASA structural analysis). The first operational version was delivered in 1969. In 1982, MSC procured the rights to market their subsequent version of NASTRAN to industry as a problem solver for applications ranging from acoustics to heat transfer. Known today as MSC/NASTRAN, the program has thousands of users worldwide. NASTRAN is also distributed through COSMIC.
A note about Norbert Wiener and his contribution to Harmonic Analysis and Tauberian Theorems
NASA Astrophysics Data System (ADS)
Almira, J. M.; Romero, A. E.
2009-05-01
In this note we explain the main motivations Norbert Wiener had for the creation of his Generalized Harmonic Analysis [13] and his Tauberian Theorems [14]. Although these papers belong to the most pure mathematical tradition, they were deeply based on some Engineering and Physics Problems and Wiener was able to use them for such diverse areas as Optics, Brownian motion, Filter Theory, Prediction Theory and Cybernetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.
1979-09-01
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
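The combine-measurements-with-prior-information idea can be illustrated by a generic generalized least-squares update with full covariance matrices; this is a sketch of the underlying algebra only, with hypothetical matrices, not FERRET's input formats or its specific treatment of nuclear-data correlations.

```python
import numpy as np

def gls_update(x0, P, G, y, R):
    """Combine prior parameters x0 (covariance P) with measurements y = G x + noise
    (covariance R) by generalized least squares; returns updated estimate and covariance."""
    S = G @ P @ G.T + R
    K = P @ G.T @ np.linalg.solve(S, np.eye(len(y)))   # gain matrix
    x_post = x0 + K @ (y - G @ x0)
    P_post = P - K @ G @ P
    return x_post, P_post

# Hypothetical two-parameter evaluation constrained by three correlated measurements.
x0 = np.array([1.0, 0.5])
P = np.diag([0.04, 0.09])
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = 0.02 * (np.eye(3) + 0.3 * (np.ones((3, 3)) - np.eye(3)))
print(gls_update(x0, P, G, y=np.array([1.1, 0.45, 1.6]), R=R))
```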
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis
Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.
2017-10-13
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis—that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
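A small sketch of the entropy bookkeeping that such an analysis rests on: Shannon entropies and mutual information computed directly from discrete samples, with an XOR-like function of two variables as a stand-in for a purely synergistic interaction. The paper's classification method for functions of three variables is not reproduced; variable names and sample sizes are illustrative.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Hypothetical example: Z = XOR(X, Y); each variable alone carries no information
# about Z, but the pair (X, Y) jointly determines Z.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y
print(mutual_information(x, z))                    # approximately 0 bits
print(mutual_information(list(zip(x, y)), z))      # approximately 1 bit
```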
Harrison, Christopher M; Britt, Helena C; Charles, Janice
2011-08-15
Previous research with the Australian Morbidity and Treatment Survey (1990-1991) showed significant differences in the characteristics and patient mix of male and female GPs. Even after adjusting for these, male and female GPs managed different types of medical conditions. The proportion of female GPs increased from 19.6% in 1990-1991 to 37.1% in 2009-2010. This study investigates whether differences remain two decades later. Analysis of 2009-2010 Bettering the Evaluation and Care of Health (BEACH) data examined GP characteristics, patient encounter characteristics, patient reasons for encounter (RFE), problem types managed and management methods used, by GP sex. Whether GP sex was an independent predictor of problem types managed, or management methods used, was tested using multiple logistic regressions and Poisson regression. In total, 988 GPs recorded 98 800 GP-patient encounters. The outcome measures were adjusted differences in the clinical activity of male and female GPs. After adjustment, compared with male GPs, female GPs recorded more RFEs about general and unspecified issues and endocrine, female genital, pregnancy and family planning problems, and fewer concerning the musculoskeletal, respiratory, skin and male genital systems. Female GPs managed more general and unspecified, digestive, circulatory, psychological, endocrine, female genital and social problems; recorded nearly 20% more clinical treatments and referrals; recorded nearly 10% more imaging and pathology tests; and recorded 4.3% fewer medications. After two decades, even with increased numbers of female GPs, the differences in problems managed by male and female GPs remain and will probably continue. Female GPs use more resources per encounter, but may not use more resources in terms of annual patient care.
Asymptotic boundary conditions for dissipative waves: General theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.
Perceived risk associated with ecstasy use: a latent class analysis approach
Martins, SS; Carlson, RG; Alexandre, PK; Falck, RS
2011-01-01
This study aims to define categories of perceived health problems among ecstasy users based on observed clustering of their perceptions of ecstasy-related health problems. Data from a community sample of ecstasy users (n=402) aged 18 to 30, in Ohio, were used in this study. Data were analyzed via latent class analysis (LCA) and regression. The study identified five subgroups of ecstasy users based on their perceptions of the health problems they associated with their ecstasy use. Almost one third of the sample (28.9%) belonged to a class with a low level of perceived problems (Class 4). About one fourth of the sample (25.6%, Class 2) had high probabilities of perceiving problems on sexual-related items, but generally low or moderate probabilities of perceiving problems in other areas. Roughly one fifth of the sample (21.1%, Class 1) had moderate probabilities of perceiving ecstasy-related health problems in all areas. Smaller proportions of respondents had high probabilities of reporting perceived memory and cognitive problems (11.9%, Class 5) or of perceiving ecstasy-related problems in all areas (12.4%, Class 3). A large proportion of ecstasy users perceive either low or moderate risk associated with their ecstasy use. It is important to further investigate whether lower levels of risk perception are associated with persistence of ecstasy use. PMID:21296504
O’ Sullivan, Aifric
2017-01-01
A poor quality diet may be a common risk factor for both obesity and dental problems such as caries. The aim of this paper is to use classification tree analysis (CTA) to identify predictors of dental problems in a nationally representative cohort of Irish pre-school children. CTA was used to classify variables and describe interactions between multiple variables including socio-demographics, dietary intake, health-related behaviour, body mass index (BMI) and a dental problem. Data were derived from the second (2010/2011) wave of the ‘Growing Up in Ireland’ study (GUI) infant cohort at 3 years, n = 9793. The prevalence of dental problems was 5.0% (n = 493). The CTA model showed a sensitivity of 67% and specificity of 58.5% and overall correctly classified 59% of children. Ethnicity was the most significant predictor of dental problems followed by longstanding illness or disability, mother’s BMI and household income. The highest prevalence of dental problems was among children who were obese or underweight with a longstanding illness and an overweight mother. Frequency of intake of some foods showed interactions with the target variable. Results from this research highlight the interconnectedness of weight status, dental problems and general health and reinforce the importance of adopting a common risk factor approach when dealing with prevention of these diseases. PMID:29563431
[Patient expectations about decision-making for various health problems].
Delgado, Ana; López-Fernández, Luis Andrés; de Dios Luna, Juan; Saletti Cuesta, Lorena; Gil Garrido, Natalia; Puga González, Almudena
2010-01-01
To identify patient expectations of clinical decision-making at consultations with their general practitioners for distinct health problems and to determine the patient and general practitioner characteristics related to these expectations, with special focus on gender. We performed a multicenter cross-sectional study in 360 patients who were interviewed at home. Data on patients' sociodemographic, clinical characteristics and satisfaction were gathered. General practitioners supplied information on their gender and postgraduate training in family medicine. A questionnaire was used to collect data on patients' expectations that their general practitioner
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our perturbation-based forward modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies in ongoing direct imaging surveys.
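For context, the following is a minimal sketch of the PCA-style speckle subtraction that KLIP-type algorithms perform. It is an assumption-laden toy (random reference images, hypothetical function name) and does not implement the covariance-matrix perturbation or forward modeling (KLIP-FM) described in the abstract.

```python
# Minimal PCA/KL speckle-subtraction sketch in the spirit of KLIP;
# not the KLIP-FM forward model of the poster.
import numpy as np

def klip_subtract(science, references, n_modes=5):
    """science: flattened image (npix,); references: (nref, npix) array."""
    ref = references - references.mean(axis=1, keepdims=True)
    sci = science - science.mean()
    # Eigenvectors of the reference covariance give the Karhunen-Loeve basis.
    cov = ref @ ref.T
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]
    kl_basis = evecs[:, order].T @ ref
    kl_basis /= np.linalg.norm(kl_basis, axis=1, keepdims=True)
    model = (sci @ kl_basis.T) @ kl_basis      # projection onto the KL modes
    return sci - model                         # speckle-subtracted residual

rng = np.random.default_rng(0)
refs = rng.normal(size=(20, 1024))
target = refs[0] * 0.5 + rng.normal(size=1024)
residual = klip_subtract(target, refs[1:], n_modes=5)
```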
Collision analysis of one kind of chaos-based hash function
NASA Astrophysics Data System (ADS)
Xiao, Di; Peng, Wenbing; Liao, Xiaofeng; Xiang, Tao
2010-02-01
In the last decade, various chaos-based hash functions have been proposed. Nevertheless, the corresponding analyses of them lag far behind. In this Letter, we first take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) [11] as a sample to analyze its computational collision problem, and then generalize the construction method of one kind of chaos-based hash function and summarize some points requiring attention to avoid the collision problem. This is beneficial to the future design of hash functions based on chaos.
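To make the general style of construction concrete, here is a deliberately naive toy hash that iterates a logistic map over message bytes. It is an illustration only, not the hash of reference [11] nor the improved construction, and it is certainly not collision resistant.

```python
# Toy chaos-based hash (illustrative only): iterate a logistic map over message
# bytes and quantize the final state into a 32-bit digest.
def chaotic_hash(message: bytes, rounds_per_byte: int = 8) -> int:
    x = 0.7                      # initial condition of the logistic map
    r = 3.99                     # control parameter in the chaotic regime
    for byte in message:
        # Perturb the state with the message byte, then iterate the map.
        x = (x + byte / 255.0) % 1.0 or 1e-9
        for _ in range(rounds_per_byte):
            x = r * x * (1.0 - x)
    return int(x * (1 << 32)) & 0xFFFFFFFF

print(hex(chaotic_hash(b"hello")))
print(hex(chaotic_hash(b"hellp")))   # small input change, different digest
```

Naive constructions of this kind are exactly the ones whose collision behaviour the Letter scrutinizes: quantizing a low-dimensional chaotic state leaves many distinct messages mapping to the same digest unless the design takes explicit precautions.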
Drag Minimization for Wings and Bodies in Supersonic Flow
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Fuller, Franklyn B
1958-01-01
The minimization of inviscid fluid drag is studied for aerodynamic shapes satisfying the conditions of linearized theory, and subject to imposed constraints on lift, pitching moment, base area, or volume. The problem is transformed to one of determining two-dimensional potential flows satisfying either Laplace's or Poisson's equations with boundary values fixed by the imposed conditions. A general method for determining integral relations between perturbation velocity components is developed. This analysis is not restricted in application to optimum cases; it may be used for any supersonic wing problem.
Literal algebra for satellite dynamics. [perturbation analysis
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
1975-01-01
A description of the rather general class of operations available is given and the operations are related to problems in satellite dynamics. The implementation of an algebra processor is discussed. The four main categories of symbol processors are related to list processing, string manipulation, symbol manipulation, and formula manipulation. Fundamental required operations for an algebra processor are considered. It is pointed out that algebra programs have been used for a number of problems in celestial mechanics with great success. The advantage of computer algebra is its accuracy and speed.
NASA Technical Reports Server (NTRS)
Nakajima, Yukio; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element methodology is formulated to handle traveling load problems involving large deformation fields in structures composed of viscoelastic media. The main thrust of this paper is to develop an overall finite element methodology and associated solution algorithms to handle the transient aspects of moving-load problems involving contact-impact type loading fields. Based on the methodology and algorithms formulated, several numerical experiments are considered. These include the rolling/sliding impact of tires with road obstructions.
Some aspects of algorithm performance and modeling in transient analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1981-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures).
The influence of air traffic control message length and timing on pilot communication
NASA Technical Reports Server (NTRS)
Morrow, Daniel; Rodvold, Michelle
1993-01-01
The present paper outlines an approach to air traffic control (ATC) communication that is based on theories of dialogue organization and describes several steps or phases in routine controller-pilot communication. The introduction also describes several kinds of communication problems that often disrupt these steps, as well as how these problems may be caused by factors related to ATC messages, the communication medium (radio vs. data link), and task workload. Next, a part-task simulation study is described. This study focused on how problems in radio communication are related to message factors. More specifically, we examined whether pilots are more likely to misunderstand longer ATC messages. A more general goal of the study is to show that communication analysis can help trace where problems occur and why.
Fu, Zhongtao; Yang, Wenyu; Yang, Zhen
2013-08-01
In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with offset wrist. The inverse kinematics problem is difficult for these manipulators because, stated mathematically, the kinematics equations are complex, highly nonlinear, coupled, and admit multiple solutions. We therefore apply the theory of geometric algebra to the kinematic modeling of 6R robot manipulators, generate closed-form kinematics equations in a simple way, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain 16 solutions. Finally, a spray painting robot, which conforms to this type of manipulator, is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that the method has a large advantage over classical methods in geometric intuition, computation, and real-time performance, and that it can be directly extended to all serial robot manipulators and completely automated, providing a new tool for the analysis and application of general robot manipulators.
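As a small sketch of the final numerical step such formulations typically rely on, the snippet below solves a generalized eigenvalue problem A v = λ B v, whose admissible eigenvalues (up to 16 in this setting) encode the candidate joint values. The matrices here are random placeholders, not an actual 6R elimination result.

```python
# Hedged illustration of the generalized eigenvalue step; A and B are random
# stand-ins for the matrices produced by symbolic elimination.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 16))
B = rng.normal(size=(16, 16))

eigvals, eigvecs = eig(A, B)          # solves A v = lambda B v
real_roots = eigvals[np.abs(eigvals.imag) < 1e-9].real
# In the IK setting each admissible eigenvalue would encode one candidate
# solution, e.g. as tan(theta/2) of a joint angle; here we just list them.
print(real_roots)
```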
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Bayesian multivariate hierarchical transformation models for ROC analysis.
O'Malley, A James; Zou, Kelly H
2006-02-15
A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.
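As a simplified, non-Bayesian analogue of one ingredient in this framework, the sketch below Box-Cox transforms a positive diagnostic outcome toward normality and then computes a binormal ROC area from the transformed groups. The simulated data, single-cluster setup and variable names are assumptions for illustration only, not the article's hierarchical model.

```python
# Sketch (not the authors' Bayesian hierarchical model): Box-Cox transform a
# positive test outcome, then summarize accuracy with a binormal ROC area.
import numpy as np
from scipy.stats import boxcox, norm

rng = np.random.default_rng(0)
healthy = rng.lognormal(mean=0.0, sigma=0.5, size=200)
diseased = rng.lognormal(mean=0.7, sigma=0.5, size=150)

# One Box-Cox lambda fitted on the pooled (strictly positive) outcomes.
pooled, lam = boxcox(np.concatenate([healthy, diseased]))
t_healthy, t_diseased = pooled[:200], pooled[200:]

# Binormal ROC area: AUC = Phi((mu1 - mu0) / sqrt(s0^2 + s1^2)).
mu0, mu1 = t_healthy.mean(), t_diseased.mean()
s0, s1 = t_healthy.std(ddof=1), t_diseased.std(ddof=1)
auc = norm.cdf((mu1 - mu0) / np.hypot(s0, s1))
print(f"Box-Cox lambda = {lam:.2f}, binormal AUC = {auc:.3f}")
```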
Asymptotic modal analysis of a rectangular acoustic cavity excited by wall vibration
NASA Technical Reports Server (NTRS)
Peretti, Linda F.; Dowell, Earl H.
1992-01-01
Asymptotic modal analysis, a method that has recently been developed for structural dynamical systems, has been applied to a rectangular acoustic cavity. The cavity had a flexible vibrating portion on one wall, and the other five walls were rigid. Banded white noise was transmitted through the flexible portion (plate) only. Both the location along the wall and the size of the plate were varied. The mean square pressure levels of the cavity interior were computed as a ratio of the result obtained from classical modal analysis to that obtained from asymptotic modal analysis for the various plate configurations. In general, this ratio converged to 1.0 as the number of responding modes increased. Intensification effects were found due to both the excitation location and the response location. The asymptotic modal analysis method was both efficient and accurate in solving the given problem. The method has advantages over the traditional methods that are used for solving dynamics problems with a large number of responding modes.
Legal Problems in Broadcasting: Identification and Analysis of Selected Issues.
ERIC Educational Resources Information Center
Toohey, Daniel W.; And Others
This book is designed as an introduction and reference to broadcast law for commercial and noncommercial station managers and staff, college students of radio and television, and general readers. It is divided into two sections, "Freedom of Expression and Related Issues" and "Business Aspects of Programming." The first section contains seven…
The Generality of Interview-Informed Functional Analyses: Systematic Replications in School and Home
ERIC Educational Resources Information Center
Santiago, Joana L.; Hanley, Gregory P.; Moore, Keira; Jin, C. Sandy
2016-01-01
Behavioral interventions preceded by a functional analysis have been proven efficacious in treating severe problem behavior associated with autism. There is, however, a lack of research showing socially validated outcomes when assessment and treatment procedures are conducted by ecologically relevant individuals in typical settings. In this study,…
Barriers to wildland fire use: A preliminary problem analysis
Dustin L. Doane; Jay O' Laughlin; Penelope Morgan; Carol Miller
2006-01-01
American society has a general cultural bias toward controlling nature (Glover 2000) and, in particular, a strong bias for suppressing wildfire, even in wilderness (Saveland et al. 1988). Nevertheless, the Federal Wildland Fire Management Policy directs managers to "allow lightning-caused fires to play, as nearly as possible, their natural ecological role in...
How To Avoid Ethnocentricity and Stereotypes in Analysing Another Culture.
ERIC Educational Resources Information Center
Schroder, Hartmut
Methodological problems caused by an ethnocentric view in analyzing another culture are discussed along with some aspects of culture analysis in general and stereotypes about other cultures and their functions in cross-cultural communication. It is suggested that miscommunication is subject to various norms and value systems that are not made…
Environmental Conservation. The Oil and Gas Industries, Volume One.
ERIC Educational Resources Information Center
National Petroleum Council, Washington, DC.
Prepared in response to a Department of the Interior request, this report is a comprehensive study of environmental conservation problems as they relate to or have impact on the petroleum industry. It contains the general comments and conclusions of The National Petroleum Council based on an analysis of detailed data. For presentation of key…
Aeroelastic analysis of a troposkien-type wind turbine blade
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1981-01-01
The linear aeroelastic equations for one curved blade of a vertical axis wind turbine in state vector form are presented. The method is based on a simple integrating matrix scheme together with the transfer matrix idea. The method is proposed as a convenient way of solving the associated eigenvalue problem for general support conditions.
Basic Research in Human Factors
1990-07-01
settings as military organizations, voluntary organizations, multinational corporations, diplomatic corps, government agencies, and couples managing a ... development, the analysis suggests both problems and possible solutions. It also derives some general conclusions regarding the design and management ... organize and manage information spontaneously in order to develop techniques which will help them do so more effectively. Attitudes towards and
Entry of Young People into Working Life. General Report.
ERIC Educational Resources Information Center
Organisation for Economic Cooperation and Development, Paris (France).
This booklet examines the problems encountered by youth while making the school-to-work transition and analyzes the measures undertaken in the Organization for Economic Co-operation and Development (OECD) member countries to deal with youth unemployment. Part I in an analysis of young people's difficulties in entering the working life describes…
The Theoretical Basis of Experience-Based Career Education.
ERIC Educational Resources Information Center
Jenks, C. Lynn
This study analyzes the extent to which the assumptions and procedures of the Experience-Based Career Education model (EBCE) as developed by the Far West Laboratory (FWL) are supported by empirical data and by recognized scholars in educational theory. The analysis is presented as relevant to the more general problem: the limited availability of…
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering provides an effective means for characterizing light scattering in the fruit and is thus promising for noninvasive assessment of apple firmness and soluble solids content (SSC). A critical problem encountered in application of hyperspectral scattering technology is analyzing...
Adaptation of MSC/NASTRAN to a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudeman, J.F.; Hodge, J.C.
1982-01-01
MSC/NASTRAN is a large-scale general purpose digital computer program which solves a wide variety of engineering analysis problems by the finite element method. The program capabilities include static and dynamic structural analysis (linear and nonlinear), heat transfer, acoustics, electromagnetism and other types of field problems. It is used worldwide by large and small companies in such diverse fields as automotive, aerospace, civil engineering, shipbuilding, offshore oil, industrial equipment, chemical engineering, biomedical research, optics and government research. The paper presents the significant aspects of the adaptation of MSC/NASTRAN to the Cray-1. First, the general architecture and predominant functional use of MSC/NASTRAN are discussed to help explain the imperatives and the challenges of this undertaking. The key characteristics of the Cray-1 which influenced the decision to undertake this effort are then reviewed to help identify performance targets. An overview of the MSC/NASTRAN adaptation effort is then given to help define the scope of the project. Finally, some measures of MSC/NASTRAN's operational performance on the Cray-1 are given, along with a few guidelines to help avoid improper interpretation. 17 references.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
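A brief sketch of the general recipe formalized in the report: take parameter standard errors from the least-squares covariance matrix, and propagate that covariance through the model gradient to get the standard error of the fitted function. The Gaussian model, noise level and finite-difference gradient below are illustrative choices, not the report's closed-form expressions.

```python
# Hedged sketch: parameter and fitted-function standard errors from a
# nonlinear least-squares fit (Gaussian model chosen only as an example).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 80)
y = gaussian(x, 2.0, 0.5, 1.2) + rng.normal(scale=0.1, size=x.size)

popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
perr = np.sqrt(np.diag(pcov))             # standard errors of the parameters

# Standard error of the fitted curve: grad(f)^T C grad(f) at each x,
# with the gradient approximated by finite differences.
eps = 1e-6
grads = np.array([(gaussian(x, *(popt + eps * np.eye(3)[i])) -
                   gaussian(x, *popt)) / eps for i in range(3)])
fit_se = np.sqrt(np.einsum('ix,ij,jx->x', grads, pcov, grads))
print(perr, fit_se.max())
```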
Karelin, A O; Lomtev, A Yu; Mozzhukhina, N A; Yeremin, G B; Nikonov, V A
Inhalation of fine particulate matter (PM and PM) poses a threat to the health of the population. The purpose of the study was to analyse the monitoring of fine particulate matter in the atmospheric air of Saint Petersburg and to identify the main problems of this monitoring. Research methods: hypothetical-deductive scientific reasoning, sanitary-statistical methods, and general logical methods and approaches (analysis, synthesis, abstraction, generalization, induction). Results. The article presents an analysis of the monitoring of fine particulate matter in the atmospheric air of Saint Petersburg. Only 11 of the 22 automatic monitoring stations control fine particulate matter: 7 measure PM and PM, and 4 measure PM. Average annual concentrations were below the MAC at all stations. Maximum concentrations reached 3 MAC, but exceedances of the MAC were very rare. Averaged over the city, PM concentrations decreased from 0.8 MAC in 2006 and 1.1 MAC in 2007 to 0.5 MAC in 2013-2014. The analysis revealed the main problems of fine particulate matter monitoring in the Russian Federation. These include the absence, until March 1, 2016, of officially approved methods for measuring PM and PM in atmospheric air, and a lack of modern equipment for measuring fine particulate matter. Conclusions. The state of fine particulate matter monitoring in the atmospheric air of the Russian Federation is therefore not satisfactory. It is necessary to improve the monitoring system and to create modern Russian instruments, methods and means for measuring fine particulate matter concentrations in atmospheric air.
Problem solving therapy - use and effectiveness in general practice.
Pierce, David
2012-09-01
Problem solving therapy (PST) is one of the focused psychological strategies supported by Medicare for use by appropriately trained general practitioners. This article reviews the evidence base for PST and its use in the general practice setting. Problem solving therapy involves patients learning or reactivating problem solving skills. These skills can then be applied to specific life problems associated with psychological and somatic symptoms. Problem solving therapy is suitable for use in general practice for patients experiencing common mental health conditions and has been shown to be as effective in the treatment of depression as antidepressants. Problem solving therapy involves a series of sequential stages. The clinician assists the patient to develop new empowering skills, and then supports them to work through the stages of therapy to determine and implement the solution selected by the patient. Many experienced GPs will identify their own existing problem solving skills. Learning about PST may involve refining and focusing these skills.
Mahoney, A; Jouriles, E N; Scavone, J
1997-12-01
Examined whether marital discord over childrearing contributes to child behavior problems after taking into account general marital adjustment, and if child age moderates associations between child behavior problems and either general marital adjustment or marital discord over childrearing. Participants were 146 two-parent families seeking services for their child's (4 to 9 years of age) conduct problems. Data on marital functioning and child behavior problems were collected from both parents. Mothers' and fathers' reports of marital discord over childrearing related positively to child externalizing problems after accounting for general marital adjustment. Child age moderated associations between fathers' reports of general marital adjustment and both internalizing and externalizing child problems, with associations being stronger in families with younger children. The discussion highlights the role that developmental factors may play in understanding the link between marital and child behavior problems in clinic-referred families.
Gorzalczany, Marian B; Rudzinski, Filip
2017-06-07
This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.
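For orientation, the snippet below trains a plain 1-D self-organizing map (neuron chain), i.e. the starting point that the paper generalizes; the subchain splitting/reconnection and the dynamic regulation of the number of neurons described in the abstract are deliberately not implemented, and all parameter values are illustrative.

```python
# Minimal 1-D SOM (neuron chain) sketch; not the generalized, self-splitting
# network of the paper.
import numpy as np

def train_som_chain(data, n_neurons=20, epochs=50, lr0=0.5, radius0=5.0):
    rng = np.random.default_rng(0)
    weights = data[rng.choice(len(data), n_neurons)]   # init from samples
    positions = np.arange(n_neurons)                   # chain coordinates
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(radius0 * (1 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)  # neighborhood update
    return weights

data = np.vstack([np.random.default_rng(1).normal(c, 0.1, size=(100, 2))
                  for c in ((0, 0), (2, 2), (4, 0))])
prototypes = train_som_chain(data)
```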
Singh, Kushpal; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Pareek, Sonia; Vishnani, Preeti
2016-01-01
Cell phones use electromagnetic, nonionizing radiation in the microwave range, which some believe may be harmful to human health. The present study aimed to determine the effect of electromagnetic radiation (EMR) on unstimulated/stimulated salivary flow rate and other health-related problems between populations residing in proximity to and far away from mobile phone base stations. A total of four mobile base stations were randomly selected from four zones of Jaipur, Rajasthan, India. Twenty individuals residing in proximity to the selected mobile phone towers were taken as the case group, and another 20 individuals (control group) living about 1 km away in the periphery were selected for salivary analysis. Questions related to sleep disturbances were measured using the Pittsburgh Sleep Quality Index (PSQI), and other health problems were included in the questionnaire. The chi-square test was used for statistical analysis. A majority of the subjects residing near the mobile base station complained of sleep disturbances, headache, dizziness, irritability, concentration difficulties, and hypertension. A majority of the study subjects had significantly lower stimulated salivary secretion (P < 0.01) compared to the control subjects. The effects of prolonged exposure to EMR from mobile phone base stations on the health and well-being of the general population cannot be ruled out. Further studies are warranted to evaluate the effect of electromagnetic fields (EMFs) on general health and more specifically on oral health.
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
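For reference, one commonly quoted form of the equation, with a hyper-viscous damping term of the kind the abstract alludes to, is the following; sign conventions and normalizations vary across the literature, and the exact dissipative term used by the authors is an assumption here:

$$\frac{\partial}{\partial t}\left(\nabla^{2}\phi-\phi\right)+J\!\left(\phi,\nabla^{2}\phi\right)+\beta\,\frac{\partial\phi}{\partial x}=-\nu\,\nabla^{4}\phi,\qquad J(a,b)=\frac{\partial a}{\partial x}\frac{\partial b}{\partial y}-\frac{\partial a}{\partial y}\frac{\partial b}{\partial x},$$

where $\phi$ is the electrostatic potential, $\beta$ captures the background density gradient, and $\nu \geq 0$ controls the damping; $\nu = 0$ recovers the standard Hasegawa-Mima equation.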
A Unified Development of Basis Reduction Methods for Rotor Blade Analysis
NASA Technical Reports Server (NTRS)
Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)
2001-01-01
The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.
MSC products for the simulation of tire behavior
NASA Technical Reports Server (NTRS)
Muskivitch, John C.
1995-01-01
The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.
Variational asymptotic modeling of composite dimensionally reducible structures
NASA Astrophysics Data System (ADS)
Yu, Wenbin
A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model being as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is adopted to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed. The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement proves that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.
Comprehensive rotorcraft analysis methods
NASA Technical Reports Server (NTRS)
Stephens, Wendell B.; Austin, Edward E.
1988-01-01
The development and application of comprehensive rotorcraft analysis methods in the field of rotorcraft technology are described. These large scale analyses and the resulting computer programs are intended to treat the complex aeromechanical phenomena that describe the behavior of rotorcraft. They may be used to predict rotor aerodynamics, acoustics, performance, stability and control, handling qualities, loads and vibrations, structures, dynamics, and aeroelastic stability characteristics for a variety of applications including research, preliminary and detail design, and evaluation and treatment of field problems. The principal comprehensive methods developed or under development in recent years and generally available to the rotorcraft community because of US Army Aviation Research and Technology Activity (ARTA) sponsorship of all or part of the software systems are the Rotorcraft Flight Simulation (C81), Dynamic System Coupler (DYSCO), Coupled Rotor/Airframe Vibration Analysis Program (SIMVIB), Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD), General Rotorcraft Aeromechanical Stability Program (GRASP), and Second Generation Comprehensive Helicopter Analysis System (2GCHAS).
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery.
Feng, Yunlong; Lv, Shao-Gao; Hang, Hanyuan; Suykens, Johan A K
2016-03-01
Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study on KENReg by conducting a refined learning theory analysis. This letter makes the following three main contributions. First, we present refined error analysis on the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct elaborated learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg that include sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can be simultaneously sparse, which makes it attractive theoretically and practically.
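For readers unfamiliar with the base model, the classical elastic net of Zou and Hastie (2005) solves

$$\hat{\beta}=\arg\min_{\beta}\;\|y-X\beta\|_{2}^{2}+\lambda_{2}\|\beta\|_{2}^{2}+\lambda_{1}\|\beta\|_{1},$$

combining ridge and lasso penalties. KENReg, as summarized above, replaces the linear predictor with an expansion over a kernelized dictionary and penalizes the expansion coefficients in the coefficient space; the precise kernelized objective is given in Feng et al. (2014) and is not reproduced here.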
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Gottwald, James A.; Bliss, Donald B.
1990-01-01
The focus is on a noise control method which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. An interior noise reduction called alternate resonance tuning (ART) is described both theoretically and experimentally. Problems dealing with tuning single paneled wall structures for optimum noise reduction using the ART methodology are presented, and three theoretical problems are analyzed. The first analysis is a three dimensional, full acoustic solution for tuning a panel wall composed of repeating sections with four different panel tunings within that section, where the panels are modeled as idealized spring-mass-damper systems. The second analysis is a two dimensional, full acoustic solution for a panel geometry influenced by the effect of a propagating external pressure field such as that which might be associated with propeller passage by a fuselage. To reduce the analysis complexity, idealized spring-mass-damper panels are again employed. The final theoretical analysis presents the general four panel problem with real panel sections, where the effect of higher structural modes is discussed. Results from an experimental program highlight real applications of the ART concept and show the effectiveness of the tuning on real structures.
Use of multicriteria analysis (MCA) for sustainable hydropower planning and management.
Vassoney, Erica; Mammoliti Mochet, Andrea; Comoglio, Claudio
2017-07-01
Multicriteria analysis (MCA) is a decision-making tool applied to a wide range of environmental management problems, including renewable energy planning and management. An interesting field of application of MCA is the evaluation and analysis of the conflicting aspects of hydropower (HP) exploitation, affecting the three pillars of sustainability and involving several different stakeholders. The present study was aimed at reviewing the state of the art of MCA applications to sustainable hydropower production and related decision-making problems, based on a detailed analysis of the scientific papers published over the last 15 years on this topic. The papers were analysed and compared, focusing on the specific features of the MCA methods applied in the described case studies, highlighting the general aspects of the MCA application (purpose, spatial scale, software used, stakeholders, etc.) and the specific operational/technical features of the selected MCA technique (methodology, criteria, evaluation, approach, sensitivity, etc.). Some specific limitations of the analysed case studies were identified and a set of "quality indexes" of an exhaustive MCA application was suggested as potential improvements to more effectively support decision-making processes in sustainable HP planning and management problems.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Hodges, Dewey H.
1990-01-01
A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.
Heat transfer evaluation in a plasma core reactor
NASA Technical Reports Server (NTRS)
Smith, D. E.; Smith, T. M.; Stoenescu, M. L.
1976-01-01
Numerical evaluations of heat transfer in a fissioning uranium plasma core reactor cavity, operating with seeded hydrogen propellant, were performed. A two-dimensional analysis is based on an assumed flow pattern and cavity wall heat exchange rate. Various iterative schemes were required by the nature of the radiative field and by the solid seed vaporization. Approximate formulations of the radiative heat flux are generally used, due to the complexity of the solution of a rigorously formulated problem. The present work analyzes the sensitivity of the results with respect to approximations of the radiative field, geometry, seed vaporization coefficients and flow pattern. The results present temperature, heat flux, density and optical depth distributions in the reactor cavity, acceptable simplifying assumptions, and iterative schemes. The present calculations, performed in cartesian and spherical coordinates, are applicable to quite general heat transfer problems.
Inference of relativistic electron spectra from measurements of inverse Compton radiation
NASA Astrophysics Data System (ADS)
Craig, I. J. D.; Brown, J. C.
1980-07-01
The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.
Quantum optimization for training support vector machines.
Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo
2003-01-01
Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.
Scaling cosmology with variable dark-energy equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castro, David R.; Velten, Hermano; Zimdahl, Winfried, E-mail: drodriguez-ufes@hotmail.com, E-mail: velten@physik.uni-bielefeld.de, E-mail: winfried.zimdahl@pq.cnpq.br
2012-06-01
Interactions between dark matter and dark energy which result in a power-law behavior (with respect to the cosmic scale factor) of the ratio between the energy densities of the dark components (thus generalizing the ΛCDM model) have been considered as an attempt to alleviate the cosmic coincidence problem phenomenologically. We generalize this approach by allowing for a variable equation of state for the dark energy within the CPL parametrization. Based on analytic solutions for the Hubble rate and using the Constitution and Union2 SNIa sets, we present a statistical analysis and classify different interacting and non-interacting models according to the Akaike (AIC) and the Bayesian (BIC) information criteria. We do not find noticeable evidence for an alleviation of the coincidence problem with the mentioned type of interaction.
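Two standard ingredients referred to in the abstract, written out for clarity (conventions for the scaling exponent vary between papers, so the sign of ξ below is an assumption): the CPL equation of state

$$w(a)=w_{0}+w_{a}\,(1-a),$$

and the scaling ansatz for the ratio of dark-matter to dark-energy densities,

$$r(a)\equiv\frac{\rho_{\mathrm{dm}}}{\rho_{\mathrm{de}}}=r_{0}\,a^{-\xi},$$

where ξ = 3 together with w = -1 reproduces ΛCDM, and ξ < 3 softens the coincidence problem.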
Moore, Sophie E; Norman, Rosana E; Suetani, Shuichi; Thomas, Hannah J; Sly, Peter D; Scott, James G
2017-01-01
AIM: To identify health and psychosocial problems associated with bullying victimization and conduct a meta-analysis summarizing the causal evidence. METHODS: A systematic review was conducted using PubMed, EMBASE, ERIC and PsycINFO electronic databases up to 28 February 2015. The study included published longitudinal and cross-sectional articles that examined health and psychosocial consequences of bullying victimization. All meta-analyses were based on quality-effects models. Evidence for causality was assessed using Bradford Hill criteria and the grading system developed by the World Cancer Research Fund. RESULTS: Out of 317 articles assessed for eligibility, 165 satisfied the predetermined inclusion criteria for meta-analysis. Statistically significant associations were observed between bullying victimization and a wide range of adverse health and psychosocial problems. The evidence was strongest for causal associations between bullying victimization and mental health problems such as depression, anxiety, poor general health and suicidal ideation and behaviours. Probable causal associations existed between bullying victimization and tobacco and illicit drug use. CONCLUSION: Strong evidence exists for a causal relationship between bullying victimization, mental health problems and substance use. Evidence also exists for associations between bullying victimization and other adverse health and psychosocial problems; however, there is insufficient evidence to conclude causality. The strong evidence that bullying victimization is causative of mental illness highlights the need for schools to implement effective interventions to address bullying behaviours. PMID:28401049
Frimpong, Joseph Asamoah; Amo-Addae, Maame Pokuah; Adewuyi, Peter Adebayo; Hall, Casey Daniel; Park, Meeyoung Mattie; Nagbe, Thomas Knue
2017-01-01
The laboratory plays a major role in surveillance, including confirming the start and end of an outbreak. Knowing the causative agent for an outbreak informs the development of response strategies and management plans for a public health event. However, issues and challenges may arise that limit the effectiveness or efficiency of laboratories in surveillance. This case study applies a systematic approach to analyse gaps in laboratory surveillance, thereby improving the ability to mitigate these gaps. Although this case study concentrates on factors resulting in poor feedback from the laboratory, practice of this general approach to problem analysis will confer the skills required to analyse most public health issues. This case study was developed based on a report submitted by the district surveillance officer in Grand Bassa County, Liberia, as a resident of the Liberian Frontline Field Epidemiology Training Program in 2016. This case study will serve as a training tool to reinforce lectures on surveillance problem analysis using the fishbone approach. It is designed for public health training in a classroom setting and can be completed within 2 hours 30 minutes.
A New Approach for Solving the Generalized Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Pop, P. C.; Matei, O.; Sabo, C.
The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be an NP-hard problem and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with existing results. The obtained results point out that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.
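The abstract does not spell out the algorithm; as a hedged sketch of the "local" step common to local-global GTSP approaches, for a fixed visiting order of the clusters the best node within each cluster can be chosen by a layered shortest-path computation. The instance data and cluster order below are hypothetical, and this is not the authors' hybrid genetic algorithm.

```python
# Sketch of the "local" node-selection step in a local-global GTSP heuristic.
import numpy as np

rng = np.random.default_rng(1)
# hypothetical instance: 5 clusters of 2-4 points in the unit square
clusters = [rng.random((int(rng.integers(2, 5)), 2)) for _ in range(5)]

def dist(a, b):
    return float(np.linalg.norm(a - b))

def best_cycle_for_order(order):
    """For a fixed cluster order, pick one node per cluster (layered shortest
    path) so that the closed tour through the clusters is as short as possible."""
    first = clusters[order[0]]
    best_len = float("inf")
    for s in range(len(first)):                        # fix the starting node
        prev_pts = first
        cost = {i: (0.0 if i == s else float("inf")) for i in range(len(first))}
        for c in order[1:]:
            pts = clusters[c]
            cost = {j: min(cost[i] + dist(prev_pts[i], pts[j])
                           for i in range(len(prev_pts)))
                    for j in range(len(pts))}
            prev_pts = pts
        total = min(cost[i] + dist(prev_pts[i], first[s])   # close the cycle
                    for i in range(len(prev_pts)))
        best_len = min(best_len, total)
    return best_len

order = list(range(len(clusters)))   # cluster sequence, e.g. from a "global" search
print("best cycle length for this cluster order:", round(best_cycle_for_order(order), 3))
```

In a local-global scheme, a metaheuristic (here, the authors use a genetic algorithm) searches over cluster orderings at the global level, while a routine like the one above evaluates each ordering exactly at the local level.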
Improving stability and strength characteristics of framed structures with nonlinear behavior
NASA Technical Reports Server (NTRS)
Pezeshk, Shahram
1990-01-01
In this paper an optimal design procedure is introduced to improve the overall performance of nonlinear framed structures. The design methodology presented here is a multiple-objective optimization procedure whose objective functions involve the buckling eigenvalues and eigenvectors of the structure. A constant volume with bounds on the design variables is used in conjunction with an optimality criterion approach. The method provides a general tool for solving complex design problems and generally leads to structures with better limit strength and stability. Many algorithms have been developed to improve the limit strength of structures. In most applications geometrically linear analysis is employed, with the consequence that the overall strength of the design is overestimated. Directly optimizing the limit load of the structure would require a full nonlinear analysis at each iteration, which would be prohibitively expensive. The objective of this paper is to develop an algorithm that can improve the limit load of geometrically nonlinear framed structures while avoiding the nonlinear analysis. One of the novelties of the new design methodology is its ability to efficiently model and design structures under multiple loading conditions. These loading conditions can be different factored loads or any kind of loads that can be applied to the structure simultaneously or independently. Attention is focused on optimal design of space framed structures. Three-dimensional design problems are more complicated to carry out, but they yield insight into the real behavior of the structure and can help avoid some of the problems that might appear in a planar design procedure, such as the need for an out-of-plane buckling constraint. Although researchers in the field of structural engineering generally agree that optimum design of three-dimensional building frames, especially in seismic regions, would be beneficial, methods have been slow to emerge. Most of the research in this area has dealt with the optimization of truss and plane frame structures.
Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis
NASA Astrophysics Data System (ADS)
Lozovanu, D.; Pickl, S. W.; Weber, G.-W.
2004-08-01
This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction to the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotonicity for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.
"Fast" Is Not "Real-Time": Designing Effective Real-Time AI Systems
NASA Astrophysics Data System (ADS)
O'Reilly, Cindy A.; Cromarty, Andrew S.
1985-04-01
Realistic practical problem domains (such as robotics, process control, and certain kinds of signal processing) stand to benefit greatly from the application of artificial intelligence techniques. These problem domains are of special interest because they are typified by complex dynamic environments in which the ability to select and initiate a proper response to environmental events in real time is a strict prerequisite to effective environmental interaction. Artificial intelligence systems developed to date have been sheltered from this real-time requirement, however, largely by virtue of their use of simplified problem domains or problem representations. The plethora of colloquial and (in general) mutually inconsistent interpretations of the term "real-time" employed by workers in each of these domains further exacerbates the difficulties in effectively applying state-of-the-art problem solving techniques to time-critical problems. Indeed, the intellectual waters are by now sufficiently muddied that the pursuit of a rigorous treatment of intelligent real-time performance mandates the redevelopment of proper problem perspective on what "real-time" means, starting from first principles. We present a simple but nonetheless formal definition of real-time performance. We then undertake an analysis of both conventional techniques and AI technology with respect to their ability to meet substantive real-time performance criteria. This analysis provides a basis for specification of problem-independent design requirements for systems that would claim real-time performance. Finally, we discuss the application of these design principles to a pragmatic problem in real-time signal understanding.
Coupled Structural, Thermal, Phase-change and Electromagnetic Analysis for Superconductors, Volume 2
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Park, K. C.; Militello, C.; Schuler, J. J.
1996-01-01
Described are the theoretical development and computer implementation of reliable and efficient methods for the analysis of coupled mechanical problems that involve the interaction of mechanical, thermal, phase-change and electromagnetic subproblems. The focus application has been the modeling of superconductivity and associated quantum-state phase change phenomena. In support of this objective the work has addressed the following issues: (1) development of variational principles for finite elements, (2) finite element modeling of the electromagnetic problem, (3) coupling of thermal and mechanical effects, and (4) computer implementation and solution of the superconductivity transition problem. The main accomplishments have been: (1) the development of the theory of parametrized and gauged variational principles, (2) the application of those principles to the construction of electromagnetic, thermal and mechanical finite elements, (3) the coupling of electromagnetic finite elements with thermal and superconducting effects, and (4) the first detailed finite element simulations of bulk superconductors, in particular the Meissner effect and the nature of the normal conducting boundary layer. The theoretical development is described in two volumes. Volume 1 describes mostly formulation-specific problems. Volume 2 describes generalization of those formulations.
Experimentation in machine discovery
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Simon, Herbert A.
1990-01-01
KEKADA, a system that is capable of carrying out a complex series of experiments on problems from the history of science, is described. The system incorporates a set of experimentation strategies that were extracted from the traces of the scientists' behavior. It focuses on surprises to constrain its search, and uses its strategies to generate hypotheses and to carry out experiments. Some strategies are domain independent, whereas others incorporate knowledge of a specific domain. The domain independent strategies include magnification, determining scope, divide and conquer, factor analysis, and relating different anomalous phenomena. KEKADA represents an experiment as a set of independent and dependent entities, with apparatus variables and a goal. It represents a theory either as a sequence of processes or as abstract hypotheses. KEKADA's response is described to a particular problem in biochemistry. On this and other problems, the system is capable of carrying out a complex series of experiments to refine domain theories. Analysis of the system and its behavior on a number of different problems has established its generality, but it has also revealed the reasons why the system would not be a good experimental scientist.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework for the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times, which are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
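A minimal simulation can make the B-GLIRT structure concrete: one generalized linear measurement model for accuracy, one for log response time, and a cross-relation, here simply a correlation between ability and speed. All parameter values are hypothetical, and this is not the authors' estimation code.

```python
# Minimal simulation of the B-GLIRT idea (illustrative parameters only):
# a logistic model for item responses and a log-normal model for response
# times, linked by a cross-relation between ability (theta) and speed (tau).
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 500, 20
rho = 0.4                                   # assumed cross-relation strength
theta = rng.normal(0, 1, n_persons)         # ability
tau = rho * theta + np.sqrt(1 - rho**2) * rng.normal(0, 1, n_persons)  # speed

a = rng.uniform(0.8, 2.0, n_items)          # discriminations
b = rng.normal(0, 1, n_items)               # difficulties
beta = rng.normal(3.0, 0.3, n_items)        # time intensities
sigma_t = 0.4

# responses: P(correct) = logistic(a_j * (theta_i - b_j))
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = rng.binomial(1, p)

# response times: log T_ij = beta_j - tau_i + noise
log_rt = beta - tau[:, None] + rng.normal(0, sigma_t, (n_persons, n_items))

print("mean accuracy:", responses.mean().round(3))
print("corr(total score, mean log RT):",
      np.corrcoef(responses.sum(1), log_rt.mean(1))[0, 1].round(3))
```

The negative correlation between total score and mean log response time reflects the cross-relation; in the actual framework this link is estimated jointly with the two measurement models rather than fixed in advance.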
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Asymptotic analysis of dissipative waves with applications to their numerical simulation
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
Various problems involving the interplay of asymptotics and numerics in the analysis of wave propagation in dissipative systems are studied. A general approach to the asymptotic analysis of linear, dissipative waves is developed and applied to the derivation of asymptotic boundary conditions for numerical solutions on unbounded domains. Applications include the Navier-Stokes equations. Multidimensional traveling wave solutions to reaction-diffusion equations are also considered. A preliminary numerical investigation of a thermo-diffusive model of flame propagation in a channel with heat loss at the walls is presented.
From Quantum Fields to Local Von Neumann Algebras
NASA Astrophysics Data System (ADS)
Borchers, H. J.; Yngvason, Jakob
The subject of the paper is an old problem of the general theory of quantized fields: When can the unbounded operators of a Wightman field theory be associated with local algebras of bounded operators in the sense of Haag? The paper reviews and extends previous work on this question, stressing its connections with a noncommutative generalization of the classical Hamburger moment problem. Necessary and sufficient conditions for the existence of a local net of von Neumann algebras corresponding to a given Wightman field are formulated in terms of strengthened versions of the usual positivity property of Wightman functionals. The possibility that the local net has to be defined in an enlarged Hilbert space cannot be ruled out in general. Under additional hypotheses, e.g., if the field operators obey certain energy bounds, such an extension of the Hilbert space is not necessary, however. In these cases a fairly simple condition for the existence of a local net can be given involving the concept of “central positivity” introduced by Powers. The analysis presented here applies to translationally covariant fields with an arbitrary number of components, whereas Lorentz covariance is not needed. The paper also contains a brief discussion of an approach to noncommutative moment problems due to Dubois-Violette, and concludes with some remarks on modular theory for algebras of unbounded operators.
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximate linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
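The approximate linear dependence (ALD) test mentioned above can be sketched as follows: a new sample enters the kernel dictionary only if its feature-space projection residual onto the current dictionary exceeds a threshold. The kernel width and threshold below are assumptions, and the sketch is independent of the KHDP/KDHP algorithms themselves.

```python
# Sketch of approximate linear dependence (ALD) sparsification for a kernel
# dictionary (hypothetical threshold and kernel width).
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def build_dictionary(samples, nu=0.1, gamma=1.0):
    """Keep a sample only if it is not approximately linearly dependent
    on the current dictionary in feature space (residual delta > nu)."""
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
        k = np.array([rbf(d, x, gamma) for d in dictionary])
        coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k)
        delta = rbf(x, x, gamma) - k @ coeffs       # projection residual
        if delta > nu:
            dictionary.append(x)
    return dictionary

states = np.random.default_rng(3).uniform(-1, 1, (200, 2))   # e.g. visited states
print("dictionary size:", len(build_dictionary(states)))
```

Raising the threshold nu yields a smaller dictionary (cheaper critic updates) at the cost of a coarser approximation of the value function.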
NASA Technical Reports Server (NTRS)
Vos, R. G.; Beste, D. L.; Gregg, J.
1984-01-01
The User Manual for the Integrated Analysis Capability (IAC) Level 1 system is presented. The IAC system currently supports the thermal, structures, controls and system dynamics technologies, and its development is influenced by the requirements for design/analysis of large space systems. The system has many features which make it applicable to general problems in engineering, and to management of data and software. Information includes basic IAC operation, executive commands, modules, solution paths, data organization and storage, IAC utilities, and module implementation.
Considerations for Micro- and Nano-scale Space Payloads
NASA Technical Reports Server (NTRS)
Altemir, David A.
1995-01-01
This paper collects and summarizes many of the issues associated with the design, analysis, and flight of space payloads. However, highly miniaturized experimental packages are highly susceptible to the deleterious effects of induced contamination and charged particles when they are directly exposed to the space environment. These two problem areas are addressed and a general discussion of space environments, applicable design and analysis practices (with extensive references to the open literature) and programmatic considerations are presented.
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
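A plain Monte Carlo analogue conveys the idea of propagating random loads, geometry and material strength to a response distribution; the fast-probability-integration and finite-element machinery of the abstract is not reproduced here, and the cantilever stress formula and input distributions are illustrative assumptions.

```python
# Simple Monte Carlo analogue of probabilistic structural analysis
# (illustrative closed-form cantilever stress, hypothetical distributions).
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
F = rng.normal(1200.0, 150.0, n)            # tip load [N]
L = rng.normal(1.0, 0.01, n)                # length [m]
b = rng.normal(0.05, 0.001, n)              # width [m]
h = rng.normal(0.08, 0.002, n)              # height [m]
S = rng.lognormal(np.log(180e6), 0.08, n)   # yield strength [Pa]

sigma = 6.0 * F * L / (b * h**2)            # max bending stress at the root
p_fail = np.mean(sigma > S)
print(f"P(stress exceeds strength) ~ {p_fail:.4f}")
```

Fast probability integration targets the same exceedance probability but avoids brute-force sampling, which matters when each response evaluation requires a finite-element solve rather than a closed-form expression.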
Problems in particle theory. Technical report - 1993--1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adler, S.L.; Wilczek, F.
This is a progress report on the work of two principal investigators in the broad area of particle physics theory, covering their personal work, that of their coworkers, and their proposed work for the future. One author has worked in the past on various topics in field theory and particle physics, among them current algebras, the physics of neutrino-induced reactions, quantum electrodynamics (including strong magnetic field processes), the theory of the axial-vector current anomaly, topics in quantum gravity, and nonlinear models for quark confinement. While much of his work has been analytical, all of the projects listed above (except for the work on gravity) had phases which required considerable computer work as well. Over the next several years, he proposes to continue or initiate research on the following problems: (1) Acceleration algorithms for the Monte Carlo analysis of lattice field and gauge theories, and more generally, new research in computational neuroscience and pattern recognition. (2) Construction of quaternionic generalizations of complex quantum mechanics and field theory, and their application to composite models of quarks and leptons, and to the problem of unifying quantum theories of matter with general relativity. The other author has worked on problems in exotic quantum statistics and its applications to condensed matter systems. His work has also continued on the quantum theory of black holes. This has evolved toward understanding properties of quantum field theory and string theory in incomplete regions of flat space.
Problem Solving in a Natural Language Environment.
1979-07-21
another mapping that can map the "values" of those slots onto each other. 11.2 Knowledge Representation Systems Several general knowledge...Hirach Frames The problem solving frames are general descriptions of problems (and solutions). Much more power could be milked from the concept of...general and powerful matching routines can be seen if the problem solving frames are going to work. The matcher must find matches between an element
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces, a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term: theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is given by assumptions in the form of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and on the nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The accomplishments achieved during this project are: (1) a survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs, with calculation of a wide range of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities in simulating wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology of simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology of efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis, including training of the Neural Networks for this purpose in several cases of design spaces, which can be applicable for actual design of complex wings (Appendix F).
Hengartner, M P; Ajdacic-Gross, V; Rodgers, S; Müller, M; Rössler, W
2013-10-01
Various studies have reported a positive relationship between child maltreatment and personality disorders (PDs). However, few studies included all DSM-IV PDs and even fewer adjusted for other forms of childhood adversity, e.g. bullying or family problems. We analyzed questionnaires completed by 512 participants of the ZInEP epidemiology survey, a comprehensive psychiatric survey of the general population in Zurich, Switzerland. Associations between childhood adversity and PDs were analyzed bivariately via simple regression analyses and multivariately via multiple path analysis. The bivariate analyses revealed that all PD dimensions were significantly related to various forms of family and school problems as well as child abuse. In contrast, according to the multivariate analysis only school problems and emotional abuse were associated with various PDs. Poverty was uniquely associated with schizotypal PD, conflicts with parents with obsessive-compulsive PD, physical abuse with antisocial PD, and physical neglect with narcissistic PD. Sexual abuse was statistically significantly associated with schizotypal and borderline PD, but corresponding effect sizes were small. Childhood adversity has a serious impact on PDs. Bullying and violence in schools and emotional abuse appear to be more salient markers of general personality pathology than other forms of childhood adversity. Associations with sexual abuse were negligible when adjusted for other forms of adversity. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
48 CFR 2929.101 - Resolving tax problems.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR 2929.101 (Federal Acquisition Regulations System; Department of Labor; General Contracting Requirements; Taxes; General), 2010-10-01: Resolving tax problems. Contract tax problems or questions...
Building Hierarchical Representations for Oracle Character and Sketch Recognition.
Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui
2016-01-01
In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems encountered with these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performance than either approach alone. This solution has beaten humans at recognizing general sketches.
NASA Astrophysics Data System (ADS)
Prudêncio, Filipa R.; Matos, Sérgio A.; Paiva, Carlos R.
2014-11-01
The concept of a perfect electromagnetic conductor (PEMC) was introduced to generalize and unify two well-known and apparently disjoint concepts in electromagnetics: the perfect electric conductor (PEC) and the perfect magnetic conductor (PMC). Although the PEMC has proven a fertile tool in electromagnetic analyses dealing with new and complex boundaries, its corresponding definition as a medium has raised several problems. In fact, according to its initial 3D definition, the PEMC cannot be considered a unique and well-defined medium: it leads to extraneous fields without physical meaning. By using a previously published generalization of a PEMC that regards this concept both as a boundary and as a medium - which was dubbed an MIM (Minkowskian isotropic medium) and acts, in practice, as an actual electromagnetic conductor (EMC) - a straightforward analysis of waveguides containing PEMCs is presented herein, one that readily and systematically follows from the general framework of waveguides containing EMCs.
Operator priming and generalization of practice in adults' simple arithmetic.
Chen, Yalin; Campbell, Jamie I D
2016-04-01
There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training-engineering and computer science students-would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again presented generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods provide a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The essential uncertainty analysis of the flood-stage upstream from a bridge starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables, including a specification of their statistics (means and variances). Next the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
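A minimal Latin hypercube sampler (one stratified draw per row and column of the unit hypercube) can be sketched as follows; the stage function and input distributions are stand-ins, not the HEC-RAS bridge model or the UNCSAM package used in the study.

```python
# Minimal Latin hypercube sampling sketch for independent inputs.
import numpy as np

def lhs(n_samples, n_vars, rng):
    """One stratified uniform sample per row/column of the [0,1) hypercube."""
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)
        u[:, j] = (perm + rng.random(n_samples)) / n_samples
    return u

rng = np.random.default_rng(5)
u = lhs(1000, 3, rng)

# map uniforms to assumed input ranges (all hypothetical)
discharge = 200.0 + 80.0 * u[:, 0]                 # flood discharge
roughness = 0.025 + 0.01 * u[:, 1]                 # Manning's n
downstream = 2.0 + 0.5 * u[:, 2]                   # downstream stage

stage = downstream + 0.002 * roughness * discharge**0.8    # toy stage model
print("mean stage:", stage.mean().round(3), " std:", stage.std().round(3))
```

Because every input stratum is sampled exactly once, LHS typically yields lower-variance estimates of output statistics than plain Monte Carlo for the same number of model runs; handling correlated inputs requires an additional step (e.g., rank-correlation induction) not shown here.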
Applying behavior analysis to clinical problems: review and analysis of habit reversal.
Miltenberger, R G; Fuqua, R W; Woods, D W
1998-01-01
This article provides a review and analysis of habit reversal, a multicomponent procedure developed by Azrin and Nunn (1973, 1974) for the treatment of nervous habits, tics, and stuttering. The article starts with a discussion of the behaviors treated with habit reversal, behavioral covariation among habits, and functional analysis and assessment of habits. Research on habit reversal and simplified versions of the procedure is then described. Next the article discusses the limitations of habit reversal and the evidence for its generality. The article concludes with an analysis of the behavioral processes involved in habit reversal and suggestions for future research. PMID:9757583
Shayan, Zahra; Pourmovahed, Zahra; Najafipour, Fatemeh; Abdoli, Ali Mohammad; Mohebpour, Fatemeh; Najafipour, Sedighe
2015-12-01
Infertility has become a social concern and is associated with multiple psychological and social problems; it also affects interpersonal communication and individual, familial, and social functioning. Because infertile women are exposed to physical, mental, and social stressors as well as the stress of infertility treatment, a psychometric screening tool for disorders in this group is needed. The aim of this study was to determine the factor structure of the General Health Questionnaire-28 for detecting mental disorders in infertile women. In this study, 220 infertile women undergoing infertility treatment were selected by convenience sampling from the Yazd Research and Clinical Center for Infertility in 2011. After the questionnaires were administered by the project manager, the validity and reliability of the questionnaire were assessed by confirmatory factor analysis and Cronbach's alpha, respectively. Four factors, comprising anxiety and insomnia, social dysfunction, depression, and physical symptoms, were extracted from the factor structure; these four factors explained 50.12% of the total variance. The reliability coefficient of the questionnaire was 0.90. Analysis of the factor structure and reliability of the General Health Questionnaire-28 showed that it is suitable as a screening instrument for assessing the general health of infertile women.
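For readers unfamiliar with the reliability coefficient reported above, Cronbach's alpha can be computed from an item-score matrix as sketched below; the synthetic data are only a stand-in for the actual GHQ-28 responses.

```python
# Sketch: Cronbach's alpha from an item-score matrix (synthetic data).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(6)
common = rng.normal(0, 1, (220, 1))                      # shared latent factor
items = 0.7 * common + rng.normal(0, 0.7, (220, 28))     # 28 correlated continuous stand-ins for item scores
print("alpha:", round(cronbach_alpha(items), 3))
```

Values around 0.9, as reported in the study, indicate high internal consistency of the scale; the confirmatory factor analysis step itself would require a structural equation modeling package and is not sketched here.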
Non-Newtonian Hele-Shaw Flow and the Saffman-Taylor Instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondic, L.; Shelley, M.J.; Palffy-Muhoray, P.
We explore the Saffman-Taylor instability of a gas bubble expanding into a shear thinning liquid in a radial Hele-Shaw cell. Using Darcy's law generalized for non-Newtonian fluids, we perform simulations of the full dynamical problem. The simulations show that shear thinning significantly influences the developing interfacial patterns. Shear thinning can suppress tip splitting, and produce fingers which oscillate during growth and shed side branches. Emergent length scales show reasonable agreement with a general linear stability analysis. (c) 1998 The American Physical Society.
Langenbucher, Frieder
2005-01-01
A linear system comprising n compartments is completely defined by the rate constants between any of the compartments and the initial condition, i.e., the compartment(s) in which the drug is present at the beginning. The generalized solution is the time profiles of drug amount in each compartment, described by polyexponential equations. Based on standard matrix operations, an Excel worksheet computes the rate constants and the coefficients, and finally the full time profiles for a specified range of time values.
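A Python analogue of this matrix-based solution (the original uses an Excel worksheet) follows; the rate constants and dose are hypothetical, and the point is simply that A(t) = expm(Kt) A0 yields the polyexponential profiles for every compartment.

```python
# Python analogue of the matrix-based compartment-model solution sketched above.
import numpy as np
from scipy.linalg import expm

# hypothetical 3-compartment model: gut -> central -> elimination, central <-> peripheral
ka, k12, k21, k10 = 1.2, 0.5, 0.3, 0.2
K = np.array([[-ka,            0.0,  0.0],
              [ ka, -(k12 + k10),   k21],
              [0.0,           k12,  -k21]])
A0 = np.array([100.0, 0.0, 0.0])          # dose initially in the gut compartment

times = np.linspace(0, 24, 9)
profiles = np.array([expm(K * t) @ A0 for t in times])
for t, row in zip(times, profiles):
    print(f"t={t:5.1f} h  gut={row[0]:7.2f}  central={row[1]:7.2f}  peripheral={row[2]:7.2f}")
```

Diagonalizing K instead of calling expm makes the polyexponential coefficients explicit, which is essentially what the worksheet's matrix operations compute.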
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
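The representation described above can be sketched numerically: the orthonormal vectors minimizing the mean squared representation error are the eigenvectors of the data covariance, and the residual error is (approximately, for sample estimates) the sum of the discarded eigenvalues. The data below are synthetic.

```python
# Sketch of the Karhunen-Loeve / principal-component representation:
# project onto the dominant covariance eigenvectors and check the MSE.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(0, 1, (1000, 3)) @ np.array([[2.0, 0.3, 0.0],
                                            [0.0, 1.0, 0.2],
                                            [0.0, 0.0, 0.3]])
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(C)               # ascending eigenvalues
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

m = 2                                          # keep the two dominant components
Xhat = Xc @ evecs[:, :m] @ evecs[:, :m].T      # project and reconstruct
mse = np.mean(np.sum((Xc - Xhat) ** 2, axis=1))
print("empirical MSE:", round(mse, 4),
      " sum of discarded eigenvalues:", round(float(evals[m:].sum()), 4))
```

The near-equality of the two printed numbers is the property the report exploits: truncating the expansion after the largest eigenvalues gives the minimum-MSE representation of the data set for a given number of basis vectors.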
Gradient optimization and nonlinear control
NASA Technical Reports Server (NTRS)
Hasdorff, L.
1976-01-01
The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed. The general language and approach used are those of elementary functional analysis. The particular gradient method that is emphasized and used is conjugate gradient descent, a well known method exhibiting quadratic convergence while requiring very little more computation than simple steepest descent. Constraints are not dealt with directly, but rather the approach is to introduce them as penalty terms in the criterion. General conjugate gradient descent methods are developed and applied to problems in control.
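A minimal sketch of the approach described above, on an assumed test problem rather than the book's control examples: a constraint is folded into the criterion as a quadratic penalty, and the penalized criterion is minimized by Fletcher-Reeves conjugate gradient descent with a backtracking line search.

```python
# Penalized criterion minimized by nonlinear conjugate gradient descent
# (illustrative problem and penalty weight; not the book's examples).
import numpy as np

mu = 50.0                                          # penalty weight (assumed)

def f(x):                                          # Rosenbrock + penalty for x0 + x1 <= 1
    base = (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    viol = max(0.0, x[0] + x[1] - 1.0)
    return base + mu * viol**2

def grad(x, h=1e-6):                               # simple central-difference gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([-1.0, 1.5])
g = grad(x); d = -g
for it in range(200):
    if g @ d >= 0:                                 # safeguard: restart if not a descent direction
        d = -g
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
        t *= 0.5                                   # backtracking (Armijo) line search
    x = x + t * d
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)               # Fletcher-Reeves update
    d = -g_new + beta * d
    g = g_new
    if np.linalg.norm(g) < 1e-5:
        break
print("solution:", x.round(4), " f:", round(f(x), 6))
```

Folding the constraint into the criterion as a penalty term mirrors the book's treatment of constraints, at the cost of choosing a penalty weight; the conjugate gradient iteration itself requires only gradient evaluations and a line search.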
Conditions for Symmetries in the Buckle Patterns of Laminated-Composite Plates
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2012-01-01
Conditions for certain symmetries to exist in the buckle patterns of symmetrically laminated composite plates are presented. The plates considered have a general planform with cutouts, variable thickness and stiffnesses, and general support and loading conditions. The symmetry analysis is based on enforcing invariance of the corresponding eigenvalue problem for a group of coordinate transformations associated with buckle patterns commonly exhibited by symmetrically laminated plates. The buckle-pattern symmetries examined include a central point of inversion symmetry, one plane of reflective symmetry, and two planes of reflective symmetry.
PLANS: A finite element program for nonlinear analysis of structures. Volume 1: Theoretical manual
NASA Technical Reports Server (NTRS)
Pifko, A.; Levine, H. S.; Armen, H., Jr.
1975-01-01
The PLANS system is described which is a finite element program for nonlinear analysis. The system represents a collection of special purpose computer programs each associated with a distinct physical problem class. Modules of PLANS specifically referenced and described in detail include: (1) REVBY, for the plastic analysis of bodies of revolution; (2) OUT-OF-PLANE, for the plastic analysis of 3-D built-up structures where membrane effects are predominant; (3) BEND, for the plastic analysis of built-up structures where bending and membrane effects are significant; (4) HEX, for the 3-D elastic-plastic analysis of general solids; and (5) OUT-OF-PLANE-MG, for material and geometrically nonlinear analysis of built-up structures. The SATELLITE program for data debugging and plotting of input geometries is also described. The theoretical foundations upon which the analysis is based are presented. Discussed are the form of the governing equations, the methods of solution, plasticity theories available, a general system description and flow of the programs, and the elements available for use.
Enabling quaternion derivatives: the generalized HR calculus
Xu, Dongpo; Jahanchahi, Cyrus; Took, Clive C.; Mandic, Danilo P.
2015-01-01
Quaternion derivatives exist only for a very restricted class of analytic (regular) functions; however, in many applications, functions of interest are real-valued and hence not analytic, a typical case being the standard real mean square error objective function. The recent HR calculus is a step forward and provides a way to calculate derivatives and gradients of both analytic and non-analytic functions of quaternion variables; however, the HR calculus can become cumbersome in complex optimization problems due to the lack of rigorous product and chain rules, a consequence of the non-commutativity of quaternion algebra. To address this issue, we introduce the generalized HR (GHR) derivatives which employ quaternion rotations in a general orthogonal system and provide the left- and right-hand versions of the quaternion derivative of general functions. The GHR calculus also solves the long-standing problems of product and chain rules, mean-value theorem and Taylor's theorem in the quaternion field. At the core of the proposed GHR calculus is quaternion rotation, which makes it possible to extend the principle to other functional calculi in non-commutative settings. Examples in statistical learning theory and adaptive signal processing support the analysis. PMID:26361555