Graphical correlation of gaging-station records
Searcy, James K.
1960-01-01
A gaging-station record is a sample of the rate of flow of a stream at a given site. This sample can be used to estimate the magnitude and distribution of future flows if the record is long enough to be representative of the long-term flow of the stream. The reliability of a short-term record for estimating future flow characteristics can be improved through correlation with a long-term record. Correlation can be either numerical or graphical, but graphical correlation of gaging-station records has several advantages. The graphical correlation method is described in a step-by-step procedure with an illustrative problem of simple correlation and three illustrative problems of multiple correlation: one removing a seasonal effect and two correlating one record with two other records. Except in the problem on removal of seasonal effect, the same group of stations is used in the illustrative problems. The purpose of the problems is to illustrate the method--not to show the improvement that can result from multiple correlation as compared with simple correlation. Hydrologic factors determine whether a usable relation exists between gaging-station records. Statistics is only a tool for evaluating and using an existing relation, and the investigator must be guided by a knowledge of hydrology.
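A minimal numerical analogue of the correlation step described above (the report itself works graphically, plotting concurrent flows and drawing a line by eye): synthetic monthly flows stand in for a long-record index station and a short-record station, a straight line is fitted to the concurrent period of the logarithms, and the relation is used to extend the short record. All flow values and record lengths here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log monthly mean flows at a long-record index station (30 years).
log_long = rng.normal(loc=5.0, scale=0.8, size=360)
# Short-record station: correlated with the index station, 5-year overlap only.
log_short_overlap = 0.9 * log_long[:60] + 0.5 + rng.normal(0, 0.2, 60)

# Fit log Q_short = a * log Q_long + b over the concurrent period
# (the numerical counterpart of drawing a line through the plotted points).
a, b = np.polyfit(log_long[:60], log_short_overlap, deg=1)

# Extend the short record over the full 30 years of the index station.
log_short_est = a * log_long + b
print(f"fitted relation: log Q_short = {a:.2f} * log Q_long + {b:.2f}")
print(f"estimated long-term mean flow at the short-record site: "
      f"{np.exp(log_short_est).mean():.0f} cfs")
```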
Computerized Clinical Simulations.
ERIC Educational Resources Information Center
Reinecker, Lynn
1985-01-01
Describes technique involved in designing a clinical simulation problem for the allied health field of respiratory therapy; discusses the structure, content, and scoring categories of the simulation; and provides a sample program which illustrates a programming technique in BASIC, including a program listing and a sample flowchart. (MBR)
Analysis of Iron in Lawn Fertilizer: A Sampling Study
ERIC Educational Resources Information Center
Jeannot, Michael A.
2006-01-01
An experiment is described which uses a real-world sample of lawn fertilizer in a simple exercise to illustrate problems associated with the sampling step of a chemical analysis. A mixed-particle fertilizer containing discrete particles of iron oxide (magnetite, Fe[subscript 3]O[subscript 4]) mixed with other particles provides an excellent…
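A small simulation of the sampling problem the experiment demonstrates: when the analyte is carried by a minority of discrete particles, small scoops give highly variable results. The particle fraction and the equal-mass assumption below are hypothetical, not taken from the described fertilizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mixed-particle fertilizer: 5% of particles (by count) are
# magnetite (Fe3O4, ~72.4% Fe by mass); the rest contain no iron.
# All particles are assumed to have equal mass for simplicity.
fe_in_magnetite = 3 * 55.845 / (3 * 55.845 + 4 * 15.999)   # ~0.724
frac_magnetite = 0.05

for n_particles in (10, 100, 1000, 10000):
    # 1000 repeated "scoops" of n_particles each
    counts = rng.binomial(n_particles, frac_magnetite, size=1000)
    pct_fe = 100 * fe_in_magnetite * counts / n_particles
    print(f"n = {n_particles:5d}:  mean %Fe = {pct_fe.mean():.2f}, "
          f"spread (std) = {pct_fe.std():.2f}")
```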
Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.
ERIC Educational Resources Information Center
Sharma, A.; And Others
1996-01-01
Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…
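A numerical sketch of the aliasing effect the optical experiment demonstrates: a signal sampled below its Nyquist rate reappears at a spurious low frequency. The 9 Hz signal and 10 Hz sampling rate are illustrative choices, not values from the article.

```python
import numpy as np

fs = 10.0          # sampling rate (Hz), below the Nyquist rate for f0
f0 = 9.0           # true signal frequency (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of samples
x = np.sin(2 * np.pi * f0 * t)

# Locate the dominant frequency in the sampled data.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"true frequency: {f0} Hz, apparent (aliased) frequency: "
      f"{freqs[spectrum.argmax()]:.1f} Hz")    # ~1 Hz = |f0 - fs|
```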
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
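A quick illustration of the failure of normal-theory intervals for skewed data. The paper simulates from an absolute-value Cauchy distribution; since that distribution has no finite mean, a heavily skewed lognormal is used here as a stand-in, which is an assumption of this sketch rather than the paper's setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 30, 20000
true_mean = np.exp(1.5**2 / 2)          # mean of lognormal(mu=0, sigma=1.5)

covered = 0
for _ in range(reps):
    x = rng.lognormal(mean=0.0, sigma=1.5, size=n)
    half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half <= true_mean <= x.mean() + half)

print(f"nominal 95% t-interval, actual coverage: {covered / reps:.3f}")
```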
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
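A minimal sketch of the algorithm's structure, written in Python rather than Mathematica and not reproducing the paper's new samplers: the constrained-prior draw is done by naive rejection sampling, which is exactly the step the proposed methods aim to improve. The toy bivariate Gaussian likelihood and uniform prior are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, half_width = 0.5, 5.0               # Gaussian width; uniform prior on [-5, 5]^2

def loglike(theta):
    return -0.5 * np.sum(theta**2) / sigma**2     # unnormalised Gaussian, peak = 1

n_live, n_iter = 100, 800
live = rng.uniform(-half_width, half_width, size=(n_live, 2))
live_logL = np.array([loglike(t) for t in live])

logZ_terms = []
for i in range(1, n_iter + 1):
    worst = live_logL.argmin()
    logL_star = live_logL[worst]
    # prior-volume shrinkage: X_i ~ exp(-i / n_live); weight = X_{i-1} - X_i
    w = np.exp(-(i - 1) / n_live) - np.exp(-i / n_live)
    logZ_terms.append(logL_star + np.log(w))

    # the "central problem": draw from the prior subject to L > L*  (rejection)
    while True:
        cand = rng.uniform(-half_width, half_width, size=2)
        if loglike(cand) > logL_star:
            break
    live[worst], live_logL[worst] = cand, loglike(cand)

# contribution of the remaining live points
logZ_terms.append(-n_iter / n_live
                  + np.logaddexp.reduce(live_logL) - np.log(n_live))

logZ = np.logaddexp.reduce(logZ_terms)
exact = np.log(2 * np.pi * sigma**2 / (2 * half_width) ** 2)   # analytic evidence
print(f"nested-sampling log-evidence: {logZ:.3f}  (analytic: {exact:.3f})")
```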
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
Huh, Joonsuk; Yung, Man-Hong
2017-08-07
Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effects is intimately related to various versions of Boson Sampling that share a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.
NASA Technical Reports Server (NTRS)
Huffman, S.
1977-01-01
Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
Practical Tools for Designing and Weighting Survey Samples
ERIC Educational Resources Information Center
Valliant, Richard; Dever, Jill A.; Kreuter, Frauke
2013-01-01
Survey sampling is fundamentally an applied field. The goal in this book is to put an array of tools at the fingertips of practitioners by explaining approaches long used by survey statisticians, illustrating how existing software can be used to solve survey problems, and developing some specialized software where needed. This book serves at least…
Using Microcomputers to Teach Non-Linear Equations at Sixth Form Level.
ERIC Educational Resources Information Center
Cheung, Y. L.
1984-01-01
Promotes the use of the microcomputer in mathematics instruction, reviewing approaches to teaching nonlinear equations. Examples of computer diagrams are illustrated and compared to textbook samples. An example of a problem-solving program is included. (ML)
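A sketch of the kind of iteration such a problem-solving program for nonlinear equations might implement (Newton-Raphson is assumed here; the article's actual program and example equation are not reproduced).

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for a single nonlinear equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# hypothetical classroom example: x^3 - 2x - 5 = 0
root = newton(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, x0=2.0)
print(f"root near x = {root:.6f}")      # ~2.094551
```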
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
Welding As Science: Applying Basic Engineering Principles to the Discipline
NASA Technical Reports Server (NTRS)
Nunes, A. C., Jr.
2010-01-01
This Technical Memorandum provides sample problems illustrating ways in which basic engineering science has been applied to the discipline of welding. Perhaps inferences may be drawn regarding optimal approaches to particular welding problems, as well as for the optimal education for welding engineers. Perhaps also some readers may be attracted to the science(s) of welding and may make worthwhile contributions to the discipline.
A variational theorem for creep with applications to plates and columns
NASA Technical Reports Server (NTRS)
Sanders, J Lyell, Jr; McComb, Harvey G, Jr; Schlechte, Floyd R
1958-01-01
A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1991-04-01
Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant concentration boundary, (2) Henry's law (first order) boundary, and (3) Sieverts' law (second order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second order case, the non-linearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.
Teaching Critical Thinking in the Business Mathematics Course.
ERIC Educational Resources Information Center
Rosenbaum, Roberta
1986-01-01
Appropriate strategies for teaching students to interpret and understand quantitative data in marketing, management, accounting, and data processing are described. Accompanying figures illustrate samples of percentage markups, trade discounts, gross earning, gross commissions, accounting entries, balance sheet entries, and percentage problems. (CT)
Characterisation of a reference site for quantifying uncertainties related to soil sampling.
Barbizzi, Sabrina; de Zorzi, Paolo; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter
2004-01-01
The paper reports a methodology adopted to address quality-assurance problems in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing uncertainties associated with different soil sampling methods in order to select the "fit-for-purpose" method; (iii) qualifying, in terms of trace-element spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are illustrated.
Hoffmann, Jörn; Leake, S.A.; Galloway, D.L.; Wilson, Alicia M.
2003-01-01
This report documents a computer program, the Subsidence and Aquifer-System Compaction (SUB) Package, to simulate aquifer-system compaction and land subsidence using the U.S. Geological Survey modular finite-difference ground-water flow model, MODFLOW-2000. The SUB Package simulates elastic (recoverable) compaction and expansion, and inelastic (permanent) compaction of compressible fine-grained beds (interbeds) within the aquifers. The deformation of the interbeds is caused by head or pore-pressure changes, and thus by changes in effective stress, within the interbeds. If the stress is less than the preconsolidation stress of the sediments, the deformation is elastic; if the stress is greater than the preconsolidation stress, the deformation is inelastic. The propagation of head changes within the interbeds is defined by a transient, one-dimensional (vertical) diffusion equation. This equation accounts for delayed release of water from storage or uptake of water into storage in the interbeds. Properties that control the timing of the storage changes are vertical hydraulic diffusivity and interbed thickness. The SUB Package supersedes the Interbed Storage Package (IBS1) for MODFLOW, which assumes that water is released from or taken into storage with changes in head in the aquifer within a single model time step and, therefore, can be reasonably used to simulate only thin interbeds. The SUB Package relaxes this assumption and can be used to simulate time-dependent drainage and compaction of thick interbeds and confining units. The time-dependent drainage can be turned off, in which case the SUB Package gives results identical to those from IBS1. Three sample problems illustrate the usefulness of the SUB Package. One sample problem verifies that the package works correctly. This sample problem simulates the drainage of a thick interbed in response to a step change in head in the adjacent aquifer and closely matches the analytical solution. A second sample problem illustrates the effects of seasonally varying discharge and recharge to an aquifer system with a thick interbed. A third sample problem simulates a multilayered regional ground-water basin. Model input files for the third sample problem are included in the appendix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, N.S.V.
The classical Nadaraya-Watson estimator is shown to solve a generic sensor fusion problem where the underlying sensor error densities are not known but a sample is available. By employing Haar kernels this estimator is shown to yield finite sample guarantees and also to be efficiently computable. Two simulation examples, and a robotics example involving the detection of a door using arrays of ultrasonic and infrared sensors, are presented to illustrate the performance.
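A sketch of the Nadaraya-Watson estimator applied to a toy fusion problem. The paper uses Haar kernels and real ultrasonic/infrared array data; here a boxcar (Haar-like) kernel and simulated two-sensor readings are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training sample: two noisy sensors observing the same underlying quantity.
truth = rng.uniform(0, 10, size=500)
sensors = np.column_stack([truth + rng.normal(0.0, 0.5, 500),    # sensor 1
                           truth + rng.normal(0.3, 1.0, 500)])   # sensor 2 (biased)

def nw_fuser(query, X, y, h=0.5):
    """Nadaraya-Watson estimate with a boxcar (Haar-like) kernel of half-width h."""
    inside = np.all(np.abs(X - query) <= h, axis=1)
    if not inside.any():
        return y.mean()                     # fall back if the window is empty
    return y[inside].mean()

new_reading = np.array([4.2, 4.9])          # a fresh pair of sensor outputs
print(f"fused estimate: {nw_fuser(new_reading, sensors, truth):.2f}")
```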
Models for Rational Number Bases
ERIC Educational Resources Information Center
Pedersen, Jean J.; Armbruster, Frank O.
1975-01-01
This article extends number bases to negative integers, then to positive rationals and finally to negative rationals. Methods and rules for operations in positive and negative rational bases greater than one or less than negative one are summarized in tables. Sample problems are explained and illustrated. (KM)
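A small sketch of one operation the article covers: representing an integer in a negative integer base (base -2 here). The conversion routine is standard; the example value is my own, and the article's extension to rational bases is not reproduced.

```python
def to_negative_base(n, base=-2):
    """Digits of integer n in a negative integer base (most significant first)."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:              # force a non-negative remainder
            r -= base          # base is negative, so this adds |base|
            n += 1
        digits.append(r)
    return digits[::-1]

digits = to_negative_base(7)                 # 7 = 16 - 8 - 2 + 1
print(digits, "->", sum(d * (-2) ** i for i, d in enumerate(reversed(digits))))
# [1, 1, 0, 1, 1] -> 7
```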
Extended Importance Sampling for Reliability Analysis under Evidence Theory
NASA Astrophysics Data System (ADS)
Yuan, X. K.; Chen, B.; Zhang, B. Q.
2018-05-01
At early stages of engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called ‘Extended importance sampling’, is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from the traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, and thus an ‘equivalent’ reliability problem under probability theory is obtained. Then the samples of these variables are generated in a way of importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. It is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
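A sketch of the probabilistic core that the extended method builds on: plain importance sampling of a rare failure probability under probability theory. The evidence-theory layer (plausibility/belief bounds over focal elements) is not reproduced, and the limit state and instrumental density below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
beta = 3.0                                   # failure when X > beta, X ~ N(0, 1)
n = 10000

# Instrumental density centred near the failure region.
x = rng.normal(loc=beta, scale=1.0, size=n)
weights = stats.norm.pdf(x) / stats.norm.pdf(x, loc=beta)
p_is = np.mean((x > beta) * weights)

print(f"importance-sampling estimate: {p_is:.5f}")
print(f"exact P(X > {beta}):          {stats.norm.sf(beta):.5f}")   # ~0.00135
```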
Error behavior of multistep methods applied to unstable differential systems
NASA Technical Reports Server (NTRS)
Brown, R. L.
1977-01-01
The problem of modeling a dynamic system described by a system of ordinary differential equations which has unstable components for limited periods of time is discussed. It is shown that the global error in a multistep numerical method is the solution to a difference equation initial value problem, and the approximate solution is given for several popular multistep integration formulas. Inspection of the solution leads to the formulation of four criteria for integrators appropriate to unstable problems. A sample problem is solved numerically using three popular formulas and two different stepsizes to illustrate the appropriateness of the criteria.
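A sketch of global-error growth for one popular multistep formula applied to an unstable test problem. The two-step Adams-Bashforth method and the equation y' = y are my choices for illustration; the report's specific formulas and four criteria are not reproduced here.

```python
import numpy as np

def ab2(f, y0, y1, h, n):
    """Two-step Adams-Bashforth: y_{k+1} = y_k + h*(3/2 f_k - 1/2 f_{k-1})."""
    y = np.empty(n + 1)
    y[0], y[1] = y0, y1
    for k in range(1, n):
        y[k + 1] = y[k] + h * (1.5 * f(y[k]) - 0.5 * f(y[k - 1]))
    return y

f = lambda y: y                        # unstable test equation y' = y
t_end = 5.0
for h in (0.1, 0.05):
    n = int(t_end / h)
    y = ab2(f, 1.0, np.exp(h), h, n)   # start the method with the exact y(h)
    print(f"h = {h:4}: global error at t = {t_end}: "
          f"{abs(y[-1] - np.exp(t_end)):.3e}")
```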
Emotional Relationships between Mothers and Infants: Knowns, Unknowns, and Unknown Unknowns
Bornstein, Marc H.; Suwalsky, Joan T. D.; Breakstone, Dana A.
2012-01-01
An overview of the literature pertaining to the construct of emotional availability is presented, illustrated by a sampling of relevant studies. Methodological, statistical, and conceptual problems in the existing corpus of research are discussed, and suggestions for improving future investigations of this important construct are offered. PMID:22292998
The Physics of Kicking a Football.
ERIC Educational Resources Information Center
Brancazio, Peter J.
1985-01-01
A physicist's view of the problems involved in kicking a football is described through the principles of projectile motion and aerodynamics. Sample equations, statistical summaries of kickoffs and punts, and calculation of launch parameters are presented along with discussion to clarify concepts of physics illustrated by kicking a football. (JN)
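A sketch of the basic projectile-motion relations the article builds on, neglecting drag (the article also treats aerodynamics, which is not modeled here). The launch speed is a hypothetical value for a punt.

```python
import math

g = 9.81                        # m/s^2
v0 = 28.0                       # launch speed of a punt (m/s), hypothetical
for angle_deg in (30, 45, 60):
    theta = math.radians(angle_deg)
    hang_time = 2 * v0 * math.sin(theta) / g        # time aloft
    distance = v0**2 * math.sin(2 * theta) / g      # horizontal range
    print(f"{angle_deg:2d} deg: hang time {hang_time:4.1f} s, "
          f"range {distance:5.1f} m")
```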
An Empirical Evaluation of Factor Reliability.
ERIC Educational Resources Information Center
Jackson, Douglas N.; Morf, Martin E.
The psychometric reliability of a factor, defined as its generalizability across samples drawn from the same population of tests, is considered as a necessary precondition for the scientific meaningfulness of factor analytic results. A solution to the problem of generalizability is illustrated empirically on data from a set of tests designed to…
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality of care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
Extending religion-health research to secular minorities: issues and concerns.
Hwang, Karen; Hammer, Joseph H; Cragun, Ryan T
2011-09-01
Claims about religion's beneficial effects on physical and psychological health have received substantial attention in popular media, but empirical support for these claims is mixed. Many of these claims are tenuous because they fail to address basic methodological issues relating to construct validity, sampling methods or analytical problems. A more conceptual problem has to do with the near universal lack of atheist control samples. While many studies include samples of individuals classified as "low spirituality" or religious "nones", these groups are heterogeneous and contain only a fraction of members who would be considered truly secular. We illustrate the importance of including an atheist control group whenever possible in the religiosity/spirituality and health research and discuss areas for further investigation.
Problems and Limitations in Studies on Screening for Language Delay
ERIC Educational Resources Information Center
Eriksson, Marten; Westerlund, Monica; Miniscalco, Carmela
2010-01-01
This study discusses six common methodological limitations in screening for language delay (LD) as illustrated in 11 recent studies. The limitations are (1) whether the studies define a target population, (2) whether the recruitment procedure is unbiased, (3) attrition, (4) verification bias, (5) small sample size and (6) inconsistencies in choice…
NASA Technical Reports Server (NTRS)
Holdeman, J. D.
1979-01-01
Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Advances in the p and h-p Versions of the Finite Element Method. A survey
1988-01-01
...p versions is the code PROBE, which was developed by Noetic Technologies, St. Louis, MO [49][60]. PROBE solves two-dimensional problems of linear... p and h-p versions of the finite element method was studied in detail from various points of view. We will mention here some essential illustrative... [49] PROBE - Sample Problems. Series of reports, Noetic Technologies, St. Louis, MO 63117. [50] Rank, E., Babuška, I., An expert system for the...
The flying hot wire and related instrumentation
NASA Technical Reports Server (NTRS)
Coles, D.; Cantwell, B.; Wadcock, A.
1978-01-01
A flying hot-wire technique is proposed for studies of separated turbulent flow in wind tunnels. The technique avoids the problem of signal rectification in regions of high turbulence level by moving the probe rapidly through the flow on the end of a rotating arm. New problems which arise include control of effects of torque variation on rotor speed, avoidance of interference from the wake of the moving arms, and synchronization of data acquisition with rotation. Solutions for these problems are described. The self-calibrating feature of the technique is illustrated by a sample X-array calibration.
Petroleum accounting principles, procedures, and issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, H.R.; Klingstedt, J.P.; Jones, D.M.
1985-01-01
This book begins with the basics and leads one through the complexities of accounting and reporting for the industry. It presents the material one needs as an accountant in the petroleum industry. Examples deal with real problems and issues. It also includes numerous illustrations and examples, as well as sample forms, lease agreements, and industry and governmental regulations.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
Predicting sample lifetimes in creep fracture of heterogeneous materials
NASA Astrophysics Data System (ADS)
Koivisto, Juha; Ovaska, Markus; Miksic, Amandine; Laurson, Lasse; Alava, Mikko J.
2016-08-01
Materials flow—under creep or constant loads—and, finally, fail. The prediction of sample lifetimes is an important and highly challenging problem because of the inherently heterogeneous nature of most materials that results in large sample-to-sample lifetime fluctuations, even under the same conditions. We study creep deformation of paper sheets as one heterogeneous material and thus show how to predict lifetimes of individual samples by exploiting the "universal" features in the sample-inherent creep curves, particularly the passage to an accelerating creep rate. Using simulations of a viscoelastic fiber bundle model, we illustrate how deformation localization controls the shape of the creep curve and thus the degree of lifetime predictability.
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
ERIC Educational Resources Information Center
Askham, Janet; Gilhooly, Mary; Parkatti, Terttu; Vega, Jose-Luis
2007-01-01
Postgraduate education in gerontology is now widespread within European universities, but, even so, such developments remain very uneven. This paper outlines the variety of provision by describing Master's programmes in a sample of countries: England, Scotland, Finland, and Spain. These programmes illustrate some of the common problems: lack of…
ERIC Educational Resources Information Center
Mundia, Lawrence
2011-01-01
The survey investigated the problems of social desirability (SD), non-response bias (NRB) and reliability in the Minnesota Multiphasic Personality Inventory--Revised (MMPI-2) self-report inventory administered to Brunei student teachers. Bruneians scored higher on all the validity scales than the normative US sample, thereby threatening the…
User's manual for a computer program for simulating intensively managed allowable cut.
Robert W. Sassaman; Ed Holt; Karl Bergsvik
1972-01-01
Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
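A sketch of the estimation-error point made above: mean-variance weights computed from a short sample often do worse, evaluated against the true parameters, than simple equal allocation. The return-generating assumptions (identical assets, monthly horizon) are hypothetical and unrelated to the paper's lot-inspection model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_assets, window, reps = 10, 60, 500

# Hypothetical "true" monthly means and covariance (all assets alike, so the
# truly optimal allocation is equal weights).
mu_true = np.full(n_assets, 0.008)
cov_true = 0.0016 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))

def sharpe(w):
    return (w @ mu_true) / np.sqrt(w @ cov_true @ w)

mv_scores, eq_scores = [], []
w_eq = np.full(n_assets, 1 / n_assets)
for _ in range(reps):
    sample = rng.multivariate_normal(mu_true, cov_true, size=window)
    mu_hat, cov_hat = sample.mean(axis=0), np.cov(sample, rowvar=False)
    w_mv = np.linalg.solve(cov_hat, mu_hat)       # unconstrained mean-variance weights
    w_mv /= w_mv.sum()
    mv_scores.append(sharpe(w_mv))
    eq_scores.append(sharpe(w_eq))

print(f"true-parameter Sharpe of 'optimized' weights: {np.mean(mv_scores):.3f}")
print(f"true-parameter Sharpe of equal weights:       {np.mean(eq_scores):.3f}")
```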
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
NASA Astrophysics Data System (ADS)
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-04-01
A new technique for shaping microfluid flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.
Computer analysis of multicircuit shells of revolution by the field method
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1975-01-01
The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.
McGrath, L M; Mustanski, B; Metzger, A; Pine, D S; Kistner-Griffin, E; Cook, E; Wakschlag, L S
2012-08-01
This study illustrates the application of a latent modeling approach to genotype-phenotype relationships and gene × environment interactions, using a novel, multidimensional model of adult female problem behavior, including maternal prenatal smoking. The gene of interest is the monoamine oxidase A (MAOA) gene which has been well studied in relation to antisocial behavior. Participants were adult women (N = 192) who were sampled from a prospective pregnancy cohort of non-Hispanic, white individuals recruited from a neighborhood health clinic. Structural equation modeling was used to model a female problem behavior phenotype, which included conduct problems, substance use, impulsive-sensation seeking, interpersonal aggression, and prenatal smoking. All of the female problem behavior dimensions clustered together strongly, with the exception of prenatal smoking. A main effect of MAOA genotype and a MAOA × physical maltreatment interaction were detected with the Conduct Problems factor. Our phenotypic model showed that prenatal smoking is not simply a marker of other maternal problem behaviors. The risk variant in the MAOA main effect and interaction analyses was the high activity MAOA genotype, which is discrepant from consensus findings in male samples. This result contributes to an emerging literature on sex-specific interaction effects for MAOA.
Nonuniform depth grids in parabolic equation solutions.
Sanders, William M; Collins, Michael D
2013-04-01
The parabolic wave equation is solved using a finite-difference solution in depth that involves a nonuniform grid. The depth operator is discretized using Galerkin's method with asymmetric hat functions. Examples are presented to illustrate that this approach can be used to improve efficiency for problems in ocean acoustics and seismo-acoustics. For shallow water problems, accuracy is sensitive to the precise placement of the ocean bottom interface. This issue is often addressed with the inefficient approach of using a fine grid spacing over all depth. Efficiency may be improved by using a relatively coarse grid with nonuniform sampling to precisely position the interface. Efficiency may also be improved by reducing the sampling in the sediment and in an absorbing layer that is used to truncate the computational domain. Nonuniform sampling may also be used to improve the implementation of a single-scattering approximation for sloping fluid-solid interfaces.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
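A small sketch contrasting random and systematic placement of sample points on a two-dimensional integral. A regular midpoint grid is used here as a stand-in for the Conroy point patterns, which are not reproduced; the integrand is a hypothetical example, not the radiative-transfer problem.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
f = lambda x, y: np.exp(-(x**2 + y**2))
exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 2    # over the unit square

n = 1024                                                 # points for both methods
# Monte Carlo: randomly placed points
xr, yr = rng.random(n), rng.random(n)
mc = f(xr, yr).mean()

# Systematic placement: a 32 x 32 midpoint grid (1024 points)
g = (np.arange(32) + 0.5) / 32
xg, yg = np.meshgrid(g, g)
grid = f(xg, yg).mean()

print(f"exact: {exact:.6f}   Monte Carlo: {mc:.6f}   systematic: {grid:.6f}")
```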
NASA Astrophysics Data System (ADS)
Zhu, Baolong; Zhang, Zhiping; Zhou, Ding; Ma, Jie; Li, Shunli
2017-08-01
This paper investigates the H∞ control problem of the attitude stabilisation of a rigid spacecraft with external disturbances using prediction-based sampled-data control strategy. Aiming to achieve a 'virtual' closed-loop system, a type of parameterised sampled-data controller is designed by introducing a prediction mechanism. The resultant closed-loop system is equivalent to a hybrid system featured by a continuous-time and an impulsive differential system. By using a time-varying Lyapunov functional, a generalised bounded real lemma (GBRL) is first established for a kind of impulsive differential system. Based on this GBRL and Lyapunov functional approach, a sufficient condition is derived to guarantee the closed-loop system to be asymptotically stable and to achieve a prescribed H∞ performance. In addition, the controller parameter tuning is cast into a convex optimisation problem. Simulation and comparative results are provided to illustrate the effectiveness of the developed control scheme.
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation in rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
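A sketch of the phenomenon and of a simple empirical Bayes remedy: the most extreme raw estimates tend to come from the most poorly estimated parameters, and shrinking estimates toward the overall mean before ranking recovers more of the truly extreme parameters. The data are simulated, not BBS data, and the shrinkage rule is a basic method-of-moments version, not the paper's constrained procedure.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200                                          # "species"
true = rng.normal(0.0, 1.0, n)                   # true population trends
se = rng.uniform(0.2, 3.0, n)                    # very unequal sampling precision
est = true + rng.normal(0.0, se)                 # survey estimates

# Empirical Bayes shrinkage toward the overall mean.
tau2 = max(est.var() - np.mean(se**2), 1e-6)     # method-of-moments between variance
shrunk = est.mean() + (tau2 / (tau2 + se**2)) * (est - est.mean())

top_true = set(np.argsort(-np.abs(true))[:10])
for name, ranking in (("raw estimates", est), ("EB-shrunk", shrunk)):
    top = set(np.argsort(-np.abs(ranking))[:10])
    print(f"{name:13s}: {len(top & top_true)} of the 10 truly most "
          f"extreme parameters appear in the top 10")
```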
A Method for Assessing Change in Attitude: The McNemar Test.
ERIC Educational Resources Information Center
Ciechalski, Joseph C.; Pinkney, James W.; Weaver, Florence S.
This paper illustrates the use of the McNemar Test, using a hypothetical problem. The McNemar Test is a nonparametric statistical test that is a type of chi square test using dependent, rather than independent, samples to assess before-after designs in which each subject is used as his or her own control. Results of the McNemar test make it…
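A minimal sketch of the computation behind the test, using a hypothetical before/after attitude table (the paper's own example data are not reproduced); b and c are the discordant cells.

```python
from scipy.stats import chi2

# Hypothetical paired before/after counts:
#                    after: agree   after: disagree
# before: agree           40              15   (b)
# before: disagree         5 (c)          30
b, c = 15, 5

statistic = (abs(b - c) - 1) ** 2 / (b + c)      # chi-square with continuity correction
p_value = chi2.sf(statistic, df=1)
print(f"McNemar chi-square = {statistic:.2f}, p = {p_value:.4f}")
```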
A Random Variable Approach to Nuclear Targeting and Survivability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Undem, Halvor A.
We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
Solution procedure of dynamical contact problems with friction
NASA Astrophysics Data System (ADS)
Abdelhakim, Lotfi
2017-07-01
Dynamic contact is a common research topic because of its wide applications in engineering. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified with results from an independent numerical method.
The effects of particle loading on turbulence structure and modelling
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, J. K.
1989-01-01
The objective of the present research was to extend the Direct Numerical Simulation (DNS) approach to particle-laden turbulent flows using a simple model of particle/flow interaction. The program addressed the simplest type of flow, homogeneous, isotropic turbulence, and examined interactions between the particles and gas phase turbulence. The specific range of problems examined include those in which the particle is much smaller than the smallest length scales of the turbulence yet heavy enough to slip relative to the flow. The particle mass loading is large enough to have a significant impact on the turbulence, while the volume loading was small enough such that particle-particle interactions could be neglected. Therefore, these simulations are relevant to practical problems involving small, dense particles conveyed by turbulent gas flows at moderate loadings. A sample of the results illustrating modifications of the particle concentration field caused by the turbulence structure is presented and attenuation of turbulence by the particle cloud is also illustrated.
Alternative Energy and Propulsion Power for Today’s US Military
2009-05-05
...taken to reduce its grip on fossil fuels. Further Defining the Problem: In 2006 testimony before the US Congress, a DoD representative stated that... necessities of our military. Real Illustrations of the Problem: Warfighting commanders in the field are requesting alternatives to petroleum-based energy. In...
Pure phase encode magnetic field gradient monitor.
Han, Hui; MacGregor, Rodney P; Balcom, Bruce J
2009-12-01
Numerous methods have been developed to measure MRI gradient waveforms and k-space trajectories. The most promising new strategy appears to be magnetic field monitoring with RF microprobes. Multiple RF microprobes may record the magnetic field evolution associated with a wide variety of imaging pulse sequences. The method involves exciting one or more test samples and measuring the time evolution of magnetization through the FIDs. Two critical problems remain. The gradient waveform duration is limited by the sample T2*, while the k-space maxima are limited by gradient dephasing. The method presented is based on pure phase encode FIDs and solves the above two problems in addition to permitting high strength gradient measurement. A small doped water phantom (1-3 mm droplet, T1, T2, T2* < 100 μs) within a microprobe is excited by a series of closely spaced broadband RF pulses, each followed by FID single point acquisition. Two trial gradient waveforms have been chosen to illustrate the technique, neither of which could be measured by the conventional RF microprobe measurement. The first is an extended duration gradient waveform while the other illustrates the new method's ability to measure gradient waveforms with large net area and/or high amplitude. The new method is a point monitor with simple implementation and low cost hardware requirements.
NASA Astrophysics Data System (ADS)
Liu, Yongfang; Zhao, Yu; Chen, Guanrong
2016-11-01
This paper studies the distributed consensus and containment problems for a group of harmonic oscillators with a directed communication topology. First, for consensus without a leader, a class of distributed consensus protocols is designed by using motion planning and Pontryagin's principle. The proposed protocol only requires relative information measurements at the sampling instants, without requiring information exchange over the sampled interval. By using stability theory and the properties of stochastic matrices, it is proved that the distributed consensus problem can be solved in the motion planning framework. Second, for the case with multiple leaders, a class of distributed containment protocols is developed for followers such that their positions and velocities can ultimately converge to the convex hull formed by those of the leaders. Compared with the existing consensus algorithms, a remarkable advantage of the proposed sampled-data-based protocols is that the sampling periods, communication topologies and control gains are all decoupled and can be separately designed, which relaxes many restrictions in controllers design. Finally, some numerical examples are given to illustrate the effectiveness of the analytical results.
Introductory Level Problems Illustrating Concepts in Pharmaceutical Engineering
ERIC Educational Resources Information Center
McIver, Keith; Whitaker, Kathryn; De Delva, Vladimir; Farrell, Stephanie; Savelski, Mariano J.; Slater, C. Stewart
2012-01-01
Textbook style problems including detailed solutions introducing pharmaceutical topics at the level of an introductory chemical engineering course have been created. The problems illustrate and teach subjects which students would learn if they were to pursue a career in pharmaceutical engineering, including the unique terminology of the field,…
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-01-01
A new technique for shaping microfluid flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions. PMID:28402332
Mustanski, B.; Metzger, A.; Pine, D. S.; Kistner-Griffin, E.; Cook, E.; Wakschlag, L. S.
2013-01-01
This study illustrates the application of a latent modeling approach to genotype–phenotype relationships and gene × environment interactions, using a novel, multidimensional model of adult female problem behavior, including maternal prenatal smoking. The gene of interest is the monoamine oxidase A (MAOA) gene which has been well studied in relation to antisocial behavior. Participants were adult women (N=192) who were sampled from a prospective pregnancy cohort of non-Hispanic, white individuals recruited from a neighborhood health clinic. Structural equation modeling was used to model a female problem behavior phenotype, which included conduct problems, substance use, impulsive-sensation seeking, interpersonal aggression, and prenatal smoking. All of the female problem behavior dimensions clustered together strongly, with the exception of prenatal smoking. A main effect of MAOA genotype and a MAOA × physical maltreatment interaction were detected with the Conduct Problems factor. Our phenotypic model showed that prenatal smoking is not simply a marker of other maternal problem behaviors. The risk variant in the MAOA main effect and interaction analyses was the high activity MAOA genotype, which is discrepant from consensus findings in male samples. This result contributes to an emerging literature on sex-specific interaction effects for MAOA. PMID:22610759
TemperSAT: A new efficient fair-sampling random k-SAT solver
NASA Astrophysics Data System (ADS)
Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.
The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
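A minimal WalkSAT-style local search on a small random 3-SAT instance, the kind of baseline solver the abstract compares against; TemperSAT itself (parallel tempering over the SAT landscape) is not reproduced here, and the instance size and parameters are illustrative.

```python
import random

random.seed(9)
n_vars, n_clauses = 30, 105        # small, under-constrained random 3-SAT instance

clauses = [[random.choice([1, -1]) * v
            for v in random.sample(range(1, n_vars + 1), 3)]
           for _ in range(n_clauses)]

def satisfied(clause, assign):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def walksat(clauses, n_vars, p=0.5, max_flips=20000):
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign
        clause = random.choice(unsat)
        if random.random() < p:                     # random-walk move
            var = abs(random.choice(clause))
        else:                                       # greedy move: fewest resulting unsatisfied clauses
            def cost(v):
                assign[v] = not assign[v]
                c = sum(not satisfied(cl, assign) for cl in clauses)
                assign[v] = not assign[v]
                return c
            var = min((abs(lit) for lit in clause), key=cost)
        assign[var] = not assign[var]
    return None

solution = walksat(clauses, n_vars)
print("satisfying assignment found" if solution else "no solution found")
```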
Future Lunar Sampling Missions: Big Returns on Small Samples
NASA Astrophysics Data System (ADS)
Shearer, C. K.; Borg, L.
2002-01-01
The next sampling missions to the Moon will result in the return of sample mass (100 g to 1 kg) substantially smaller than those returned by the Apollo missions (380 kg). Lunar samples to be returned by these missions are vital for: (1) calibrating the late impact history of the inner solar system that can then be extended to other planetary surfaces; (2) deciphering the effects of catastrophic impacts on a planetary body (i.e. Aitken crater); (3) understanding the very late-stage thermal and magmatic evolution of a cooling planet; (4) exploring the interior of a planet; and (5) examining volatile reservoirs and transport on an airless planetary body. Can small lunar samples be used to answer these and other pressing questions concerning important solar system processes? Two potential problems with small, robotically collected samples are placing them in a geologic context and extracting robust planetary information. Although geologic context will always be a potential problem with any planetary sample, new lunar samples can be placed within the context of the important Apollo - Luna collections and the burgeoning planet-scale data sets for the lunar surface and interior. Here we illustrate the usefulness of applying both new or refined analytical approaches in deciphering information locked in small lunar samples.
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
NASA Astrophysics Data System (ADS)
Yan, Yifang; Yang, Chunyu; Ma, Xiaoping; Zhou, Linna
2018-02-01
In this paper, the sampled-data H∞ filtering problem is considered for Markovian jump singularly perturbed systems with time-varying delay and missing measurements. The sampled-data system is represented by a time-delay system, and the missing-measurement phenomenon is described by an independent Bernoulli random process. By constructing an ɛ-dependent stochastic Lyapunov-Krasovskii functional, delay-dependent sufficient conditions are derived such that the filter error system satisfies the prescribed H∞ performance for all possible missing measurements. Then, an H∞ filter design method is proposed in terms of linear matrix inequalities. Finally, numerical examples are given to illustrate the feasibility and advantages of the obtained results.
The focal plane reception pattern calculation for a paraboloidal antenna with a nearby fence
NASA Technical Reports Server (NTRS)
Schmidt, Richard F.; Cheng, Hwai-Soon; Kao, Michael W.
1987-01-01
A computer simulation program is described which is used to estimate the effects of a proximate diffraction fence on the performance of paraboloid antennas. The computer program is written in FORTRAN. The physical problem, mathematical formulation and coordinate references are described. The main control structure of the program and the function of the individual subroutines are discussed. The Job Control Language set-up and program instruction are provided in the user's instruction to help users execute the present program. A sample problem with an appropriate output listing is made available as an illustration of the usage of the program.
Buchman-Schmitt, Jennifer M; Brislin, Sarah J; Venables, Noah C; Joiner, Thomas E; Patrick, Christopher J
2017-07-01
The RDoC matrix framework calls for investigation of mental health problems through analysis of core biobehavioral processes quantified and studied across multiple domains of measurement. Critics have raised concerns about RDoC, including overemphasis on biological concepts/measures and disregard for the principle of multifinality, which holds that identical biological predispositions can give rise to differing behavioral outcomes. The current work illustrates an ontogenetic process approach to addressing these concerns, focusing on biobehavioral traits corresponding to RDoC constructs as predictors, and suicidal behavior as the outcome variable. Data were collected from a young adult sample (N=105), preselected to enhance rates of suicidality. Participants completed self-report measures of traits (threat sensitivity, response inhibition) and suicide-specific processes. We show that previously reported associations for traits of threat sensitivity and weak inhibitory control with suicidal behavior are mediated by more specific suicide-promoting processes-namely, thwarted belongingness, perceived burdensomeness, and capability for suicide. The sample was relatively small and the data were cross-sectional, limiting conclusions that can be drawn from the mediation analyses. Given prior research documenting neurophysiological as well as psychological bases to these trait dispositions, the current work sets the stage for an intensive RDoC-oriented investigation of suicidal tendencies in which both traits and suicide-promoting processes are quantified using indicators from different domains of measurement. More broadly, this work illustrates how an RDoC research approach can contribute to a nuanced understanding of specific clinical problems, through consideration of how general biobehavioral liabilities interface with distinct problem-promoting processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Teaching helix and problems connected with helix using GeoGebra
NASA Astrophysics Data System (ADS)
Bímová, Daniela
2017-12-01
The contribution presents dynamic applets created in GeoGebra that show the origin and main properties of a helix, together with constructive problems connected with the helix. Selected applets provide step-by-step algorithms for the constructions. Three-dimensional applets include illustrative helix samples and spatial animations that help students visualize problems concerning the helix. The contribution also mentions the website hosting a GeoGebra book dedicated to the topic "Helix" and containing the applets. The applets and materials of the GeoGebra book "Helix" support teaching and studying the Constructive Geometry course for students of the Faculty of Mechanical Engineering of the Technical University of Liberec.
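For readers without the applets at hand, the underlying object is easy to generate numerically: a circular helix of radius r and pitch parameter b has the parametrization (r cos t, r sin t, b t), with pitch 2πb and constant slope angle arctan(b/r). The short Python sketch below is a generic illustration of these relations, not material from the GeoGebra book.

```python
import numpy as np

def helix(r=2.0, b=0.5, turns=3, n=500):
    """Points of a circular helix x = r cos t, y = r sin t, z = b t."""
    t = np.linspace(0.0, 2 * np.pi * turns, n)
    return np.column_stack([r * np.cos(t), r * np.sin(t), b * t])

r, b = 2.0, 0.5
pts = helix(r, b)
pitch = 2 * np.pi * b                        # rise per full turn
slope = np.degrees(np.arctan2(b, r))         # constant angle between the tangent and the xy-plane
print(f"pitch = {pitch:.3f}, slope angle = {slope:.2f} deg, points: {pts.shape}")
```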
Methods for trend analysis: Examples with problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1989-01-01
Statistics plays an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of the working handbook, which uses data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard and some different techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent time periods with zero problem reports. Further investigations to deal with these issues are being conducted.
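Of the techniques named, Kendall's rank correlation is particularly convenient for sparse counts with many zero-report periods. The fragment below is a generic illustration with invented monthly counts, not data from the MSFC Problem Assessment System.

```python
import numpy as np
from scipy import stats

# hypothetical monthly problem-report counts, with several zero-report months
months = np.arange(1, 13)
reports = np.array([5, 3, 4, 0, 2, 2, 1, 0, 1, 0, 0, 1])

# Kendall's rank correlation as a nonparametric monotonic-trend test
tau, p_value = stats.kendalltau(months, reports)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")

# straight-line trend fit for comparison
slope, intercept, r, p_lin, se = stats.linregress(months, reports)
print(f"linear slope = {slope:.2f} reports/month (p = {p_lin:.3f})")
```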
Rubin, Kenneth H; Burgess, Kim B; Dwyer, Kathleen M; Hastings, Paul D
2003-01-01
Rarely have researchers elucidated early childhood precursors of externalizing behaviors for boys and girls from a normative sample. Toddlers (N = 104; 52 girls) were observed interacting with a same-sex peer and their mothers, and indices of conflict-aggression, emotion and behavior dysregulation, parenting, and child externalizing problems were obtained. Results indicated that boys initiated more conflictual-aggressive interactions as toddlers and had more externalizing difficulties 2 years later, yet girls' (not boys') conflict-aggressive initiations at age 2 were related to subsequent externalizing problems. When such initiations were controlled for, emotional-behavioral undercontrol at age 2 also independently predicted externalizing problems at age 4. Moreover, the relation between conflict-aggressive initiations at age 2 and externalizing problems at age 4 was strongest for dysregulated toddlers. Finally, the relation between age 2 conflict-aggressive initiations and age 4 externalizing problems was strongest for those toddlers who incurred high levels of maternal negativity. These findings illustrate temperament by parenting connections in the development of externalizing problems.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN program RANDOM2 is presented in the form of a user's manual. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Details of the theoretical background, input data instructions, and a sample problem illustrating the use of the program are included.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
Stabilization for sampled-data neural-network-based control systems.
Zhu, Xun-Lin; Wang, Youyi
2011-02-01
This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov-transformation-based Monte Carlo simulations can be extended to laboratory testing to assess the system reliability of engineering structures with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig.
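Girsanov's transformation plays the same role for stochastic differential equations that an ordinary change of sampling density plays in a static problem: failures are made frequent under a modified law and each sample is reweighted by a likelihood ratio. The sketch below shows only that static analogue (a small Gaussian exceedance probability); it is not the dynamic, Girsanov-based scheme of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
threshold = 4.0                       # "failure" when a standard normal load exceeds 4
n = 20_000

# direct Monte Carlo: almost no samples land in the failure region
x = rng.standard_normal(n)
p_direct = np.mean(x > threshold)

# importance sampling: draw from N(threshold, 1) and reweight by the density ratio
y = rng.normal(loc=threshold, scale=1.0, size=n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - threshold)**2)   # phi(y) / phi(y - threshold)
p_is = np.mean(w * (y > threshold))

print(f"exact      {norm.sf(threshold):.3e}")
print(f"direct MC  {p_direct:.3e}")
print(f"importance {p_is:.3e}  (std err {np.std(w * (y > threshold)) / np.sqrt(n):.1e})")
```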
Fatigue crack growth model RANDOM2 user manual, appendix 1
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN program RANDOM2 is documented. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Included in this user manual are details regarding the theoretical background of RANDOM2, input data, instructions and a sample problem illustrating the use of RANDOM2. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendix B includes photocopies of the actual computer printout corresponding to the sample problem. Appendices C and D detail the IMSL, Ver. 10(1), subroutines and functions called by RANDOM2 and a SAS/GRAPH(2) program that can be used to plot both the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.).
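RANDOM2 itself is a FORTRAN code distributed with its own sample problem; the probabilistic idea it embodies, propagating a fatigue crack under a Paris-type growth law with randomized coefficients and recording the cycle count at which a critical crack size is reached, can be sketched generically. The Python fragment below uses assumed, purely illustrative parameter values and distributions, and is not a port of RANDOM2.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycles_to_critical(a0=1e-3, a_crit=10e-3, dsigma=100e6, n_draws=5000):
    """Monte Carlo lifetimes (cycles) for Paris-law growth da/dN = C*(dK)^m, dK = dsigma*sqrt(pi*a)."""
    lives = np.empty(n_draws)
    for i in range(n_draws):
        # lognormal C and normal m are assumed distributions, chosen only for illustration
        C = np.exp(rng.normal(np.log(1e-26), 0.3))
        m = rng.normal(3.0, 0.1)
        a, N, dN = a0, 0, 1000                     # grow the crack in blocks of 1000 cycles
        while a < a_crit and N < 5e7:
            dK = dsigma * np.sqrt(np.pi * a)
            a += C * dK**m * dN
            N += dN
        lives[i] = N
    return lives

lives = cycles_to_critical()
for q in (0.01, 0.50, 0.99):
    print(f"{q:.0%} quantile of life: {np.quantile(lives, q):,.0f} cycles")
```

The quantiles of the simulated lifetimes play the role of the probability density and cumulative distribution outputs that RANDOM2 produces.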
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
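The upward bias of the leading sample eigenvalue in the high-dimensional regime, which motivates the S-POET correction, is easy to reproduce numerically. The following simulation is a generic single-spike illustration; the dimensions, spike value, and the well-known spiked-model limit used for comparison are textbook choices, not material from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, spike = 100, 500, 10.0            # sample-starved regime: p >> n

top = []
for _ in range(100):
    X = rng.standard_normal((n, p))     # population covariance = identity ...
    X[:, 0] *= np.sqrt(spike)           # ... except one spiked direction with eigenvalue `spike`
    S = X.T @ X / n
    top.append(np.linalg.eigvalsh(S)[-1])

gamma = p / n
predicted = spike * (1 + gamma / (spike - 1))   # classical asymptotic limit of the sample spike
print(f"population spike            : {spike:.2f}")
print(f"mean sample top eigenvalue  : {np.mean(top):.2f}")
print(f"asymptotic prediction       : {predicted:.2f}")
```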
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
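The combinatorial core, choosing which m signal samples to retain so that piecewise-linear reconstruction error is minimized, can be written as a short dynamic program. The sketch below is a generic illustration with a synthetic waveform, not the authors' network formulation; it runs in O(m·n²) time after the error precomputation.

```python
import numpy as np

def interp_error(y, i, j):
    """Squared error of linearly interpolating y[i..j] from its two endpoints."""
    t = np.arange(i, j + 1)
    line = y[i] + (y[j] - y[i]) * (t - i) / (j - i)
    return float(np.sum((y[i:j + 1] - line) ** 2))

def optimal_samples(y, m):
    """Pick m sample indices (first and last forced) minimising piecewise-linear reconstruction error."""
    n = len(y)
    err = [[interp_error(y, i, j) if i < j else 0.0 for j in range(n)] for i in range(n)]
    INF = float("inf")
    cost = [[INF] * n for _ in range(m)]       # cost[k][j]: best error using k+1 retained samples ending at j
    prev = [[-1] * n for _ in range(m)]
    cost[0][0] = 0.0                           # the first retained sample is index 0
    for k in range(1, m):
        for j in range(k, n):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + err[i][j]
                if c < cost[k][j]:
                    cost[k][j], prev[k][j] = c, i
    picks, j = [], n - 1                       # backtrack from the forced last sample
    for k in range(m - 1, 0, -1):
        picks.append(j)
        j = prev[k][j]
    picks.append(0)
    return sorted(picks), cost[m - 1][n - 1]

t = np.linspace(0, 1, 80)
beat = np.exp(-((t - 0.5) / 0.03) ** 2) + 0.1 * np.sin(12 * t)   # toy "beat", not a real ECG
idx, distortion = optimal_samples(beat, m=12)
print(idx, f"distortion = {distortion:.4f}")
```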
NASA Astrophysics Data System (ADS)
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
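As a concrete illustration of the basic quantities involved, the fragment below computes an empirical AUC (the Mann-Whitney estimator) and a percentile-bootstrap confidence interval for a single test. The scores and labels are simulated; multi-reader designs, verification bias, and clustered data as discussed in the article would require more elaborate resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Empirical AUC: probability that a random diseased case scores above a random non-diseased case."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# hypothetical reader scores for 60 non-diseased and 40 diseased lesions
labels = np.r_[np.zeros(60, int), np.ones(40, int)]
scores = np.r_[rng.normal(0, 1, 60), rng.normal(1.2, 1, 40)]
point = auc(scores, labels)

# percentile bootstrap over cases (a clustered design would resample patients instead)
boots = []
for _ in range(2000):
    idx = rng.integers(0, len(labels), len(labels))
    if labels[idx].min() == labels[idx].max():      # need both classes in a resample
        continue
    boots.append(auc(scores[idx], labels[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"AUC = {point:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```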
Scientific data interpolation with low dimensional manifold model
NASA Astrophysics Data System (ADS)
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
A minimum distance estimation approach to the two-sample location-scale problem.
Zhang, Zhiyi; Yu, Qiqing
2002-09-01
As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of the male and female mice that died from thymic leukemia. This failure is a result of the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
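In the uncensored case the proposal reduces to choosing the location shift a and scale change b that minimize an average distance between the two empirical quantile functions. The sketch below illustrates that reduced problem on simulated data; handling right-censoring as in the article would require censoring-adjusted quantile estimates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def md_location_scale(x, y, grid=np.linspace(0.05, 0.95, 91)):
    """Minimum-distance fit of y ~ a + b*x in distribution, via quantile processes."""
    qx, qy = np.quantile(x, grid), np.quantile(y, grid)
    obj = lambda ab: np.mean((qy - ab[0] - ab[1] * qx) ** 2)   # average squared quantile distance
    res = minimize(obj, x0=[0.0, 1.0], method="Nelder-Mead")
    return res.x

# simulated two-sample data with both a location shift and a scale change
x = rng.normal(0.0, 1.0, 200)
y = rng.normal(0.7, 1.6, 180)
a_hat, b_hat = md_location_scale(x, y)
print(f"location shift ~ {a_hat:.2f}, scale change ~ {b_hat:.2f}")
```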
An inverse model for a free-boundary problem with a contact line: Steady case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, Oleg; Protas, Bartosz
2009-07-20
This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which whether a response is missing depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on observed data only is biased.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Sawamiphakdi, K.
1984-01-01
A higher order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included. The post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.
A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.
Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco
2018-01-01
A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
Link, W.A.
2003-01-01
Heterogeneity in detection probabilities has long been recognized as problematic in mark-recapture studies, and numerous models have been developed to accommodate its effects. Individual heterogeneity is especially problematic, in that reasonable alternative models may predict essentially identical observations from populations of substantially different sizes. Thus, even with very large samples, the analyst will not be able to distinguish among reasonable models of heterogeneity, even though these yield quite distinct inferences about population size. The problem is illustrated with models for closed and open populations.
Transport and dispersion of pollutants in surface impoundments: a finite difference model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, G.T.
1980-07-01
A surface impoundment model by finite differences (SIMFD) has been developed. SIMFD computes the flow rate, velocity field, and concentration distribution of pollutants in surface impoundments with any number of islands located within the region of interest. Theoretical derivations and the numerical algorithm are described in detail. Instructions for the application of SIMFD and listings of the FORTRAN IV source program are provided. Two sample problems are given to illustrate the application and validity of the model.
Is comprehension of problem solutions resistant to misleading heuristic cues?
Ackerman, Rakefet; Leiser, David; Shpigelman, Maya
2013-05-01
Previous studies in the domain of metacomprehension judgments have primarily used expository texts. When these texts include illustrations, even uninformative ones, people were found to judge that they understand their content better. The present study aimed to delineate the metacognitive processes involved in understanding problem solutions - a text type often perceived as allowing reliable judgments regarding understanding, and was not previously considered from a metacognitive perspective. Undergraduate students faced difficult problems. They then studied solution explanations with or without uninformative illustrations and provided judgments of comprehension (JCOMPs). Learning was assessed by application to near-transfer problems in an open-book test format. As expected, JCOMPs were polarized - they tended to reflect good or poor understanding. Yet, JCOMPs were higher for the illustrated solutions and even high certainty did not ensure resistance to this effect. Moreover, success in the transfer problems was lower in the presence of illustrations, demonstrating a bias stronger than that found with expository texts. Previous studies have suggested that weak learners are especially prone to being misled by superficial cues. In the present study, matching the difficulty of the task to the ability of the target population revealed that even highly able participants were not immune to misleading cues. The study extends previous findings regarding potential detrimental effects of illustrations and highlights aspects of the metacomprehension process that have not been considered before. Copyright © 2013 Elsevier B.V. All rights reserved.
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
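A stripped-down, one-refinement version of the Gaussian-mixture proposal idea (omitting the polynomial chaos surrogate) can be sketched as follows; the bimodal target, sample sizes, and use of scikit-learn's GaussianMixture are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def log_post(theta):
    """Toy bimodal, unnormalised posterior: two well-separated Gaussian modes."""
    return np.logaddexp(norm.logpdf(theta, -3, 0.5), norm.logpdf(theta, 3, 0.5))

# stage 1: broad initial proposal, self-normalised importance weights
x0 = rng.normal(0, 5, 4000)
logw0 = log_post(x0) - norm.logpdf(x0, 0, 5)
w0 = np.exp(logw0 - logw0.max()); w0 /= w0.sum()

# stage 2: fit a Gaussian-mixture proposal to a weighted resample, then re-weight
resample = rng.choice(x0, size=4000, p=w0)
gm = GaussianMixture(n_components=2, random_state=0).fit(resample.reshape(-1, 1))
x1 = gm.sample(4000)[0].ravel()
logw1 = log_post(x1) - gm.score_samples(x1.reshape(-1, 1))
w1 = np.exp(logw1 - logw1.max()); w1 /= w1.sum()

ess = 1.0 / np.sum(w1 ** 2)
print(f"posterior mean ~ {np.sum(w1 * x1):.3f} (true 0), effective sample size ~ {ess:.0f} of 4000")
```

Because the refined proposal now covers both modes, the effective sample size is far higher than it would be with a single Gaussian proposal centred on one mode.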
Cheek, Cheryl; Piercy, Kathleen W; Kohlenberg, Meranda
2015-01-01
This study examined the ways in which individuals over 50 years old solved problems while volunteering in intensive humanitarian and disaster relief service. Thirty-seven men and women in the sample were sponsored by three religious organizations well known for providing humanitarian and disaster relief service. Semistructured interviews yielded data that were analyzed qualitatively, using McCracken's five-step process for analysis. We found that volunteers used three different abilities to solve problems: drawing upon experience to create strategies, maintaining emotional stability in the midst of trying circumstances, and applying strategies in a context-sensitive manner. These findings illustrate that these factors, which are comparable to those used in solving everyday problems, are unique in the way they are applied to intensive volunteering. The volunteers' sharing of knowledge, experience, and support with each other were also noticeable in their accounts of their service. This sharing contributed strongly to their sense of emotional stability and effectiveness in solving problems. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Meng, Su; Chen, Jie; Sun, Jian
2017-10-01
This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.
Toward the automated analysis of plasma physics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mynick, H.E.
1989-04-01
A program (CALC) is described, which carries out nontrivial plasma physics calculations, in a manner intended to emulate the approach of a human theorist. This includes the initial process of gathering the relevant equations from a plasma knowledge base, and then determining how to solve them. Solution of the sets of equations governing physics problems, which in general have a nonuniform, irregular structure, not amenable to solution by standardized algorithmic procedures, is facilitated by an analysis of the structure of the equations and the relations among them. This often permits decompositions of the full problem into subproblems, and other simplifications in form, which renders the resultant subsystems soluble by more standardized tools. CALC's operation is illustrated by a detailed description of its treatment of a sample plasma calculation. 5 refs., 3 figs.
Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M
2014-04-10
Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems, derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female), predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
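The two building blocks, an aggregate polygenic score computed as a weighted allele count and a regression containing a score-by-environment interaction term, can be illustrated with simulated data. Everything in the sketch below (SNP count, weights, moderator, effect sizes) is hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_snps = 1000, 50

# hypothetical data: SNP dosages (0/1/2), discovery-sample weights, and a moderator (e.g. peer deviance)
dosages = rng.binomial(2, 0.3, size=(n, n_snps))
weights = rng.normal(0, 0.05, n_snps)            # per-allele effect estimates from a discovery GWAS
prs = dosages @ weights                          # polygenic risk score = weighted allele count
env = rng.normal(0, 1, n)                        # standardised environmental moderator
outcome = 0.2 * prs + 0.3 * env + 0.15 * prs * env + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([prs, env, prs * env]))
fit = sm.OLS(outcome, X).fit()
for name, b, p in zip(["const", "PRS", "env", "PRS x env"], fit.params, fit.pvalues):
    print(f"{name:>9}: b = {b:+.3f}, p = {p:.3g}")
```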
Meijer, Rob R; Egberink, Iris J L; Emons, Wilco H M; Sijtsma, Klaas
2008-05-01
We illustrate the usefulness of person-fit methodology for personality assessment. For this purpose, we use person-fit methods from item response theory. First, we give a nontechnical introduction to existing person-fit statistics. Second, we analyze data from Harter's (1985) Self-Perception Profile for Children (Harter, 1985) in a sample of children ranging from 8 to 12 years of age (N = 611) and argue that for some children, the scale scores should be interpreted with care and caution. Combined information from person-fit indexes and from observation, interviews, and self-concept theory showed that similar score profiles may have a different interpretation. For some children in the sample, item scores did not adequately reflect their trait level. Based on teacher interviews, this was found to be due most likely to a less developed self-concept and/or problems understanding the meaning of the questions. We recommend investigating the scalability of score patterns when using self-report inventories to help the researcher interpret respondents' behavior correctly.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
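The sample-starved regime is easy to visualize: with the sample size n held fixed, the largest spurious correlation among truly independent variables grows steadily as p increases. The following fragment is a simple illustration of that effect, not a method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # fixed, small number of samples (statistical replicates)

for p in (10, 100, 500, 2000):
    X = rng.standard_normal((n, p))      # all p variables are truly independent
    C = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(C, 0.0)             # ignore the trivial self-correlations
    print(f"p = {p:>4}: largest spurious |correlation| = {np.abs(C).max():.2f}")
```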
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and obtain a practical experience in the difference between performing an external standardization and a standard addition.
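The arithmetic behind the standard-addition choice is compact: regressing signal on added analyte and extrapolating to the x-intercept yields the concentration already present in the aliquot, and a separately spiked aliquot provides a recovery check for validation. The numbers below are invented for illustration, and dilution by the spikes is assumed negligible.

```python
import numpy as np
from scipy import stats

# hypothetical standard-addition data: equal sample aliquots spiked with increasing analyte
added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])              # ppm of analyte added
signal = np.array([0.120, 0.215, 0.318, 0.411, 0.509])   # instrument response

fit = stats.linregress(added, signal)
c_sample = fit.intercept / fit.slope                      # x-intercept magnitude = analyte in the aliquot
print(f"slope = {fit.slope:.4f} per ppm, intercept = {fit.intercept:.4f}")
print(f"analyte concentration in the aliquots ~ {c_sample:.2f} ppm")

# spike recovery check: an aliquot spiked with 2.0 ppm should return roughly 100%
measured_spiked, spike = 0.318, 2.0
recovery = 100 * ((measured_spiked / fit.slope) - c_sample) / spike
print(f"spike recovery ~ {recovery:.0f}%")
```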
Classification of weld defect based on information fusion technology for radiographic testing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin
Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
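The fusion step named in the abstract, Dempster's rule of combination, is itself only a few lines. The sketch below combines two hypothetical mass functions over three defect classes; the features, masses, and class names are invented for illustration and are unrelated to the 11 features defined in the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# hypothetical evidence from two defect features over the classes {crack, pore, inclusion}
crack, pore, incl = frozenset(["crack"]), frozenset(["pore"]), frozenset(["inclusion"])
theta = crack | pore | incl                            # mass left on theta expresses ignorance
m_shape = {crack: 0.6, pore: 0.2, theta: 0.2}
m_greylevel = {crack: 0.5, incl: 0.3, theta: 0.2}

fused, k = dempster_combine(m_shape, m_greylevel)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
print("conflict =", round(k, 3))
```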
Requirements for a geometry programming language for CFD applications
NASA Technical Reports Server (NTRS)
Gentry, Arvel E.
1992-01-01
A number of typical problems faced by the aerodynamicist in using computational fluid dynamics are presented to illustrate the need for a geometry programming language. The overall requirements for such a language are illustrated by examples from the Boeing Aero Grid and Paneling System (AGPS). Some of the problems in building such a system are also reviewed along with suggestions as to what to look for when evaluating new software problems.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...
2017-09-28
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Pressure-Volume Work Exercises Illustrating the First and Second Laws.
ERIC Educational Resources Information Center
Hoover, William G.; Moran, Bill
1979-01-01
Presented are two problem exercises involving rapid compression and expansion of ideal gases which illustrate the first and second laws of thermodynamics. The first problem involves the conversion of gravitational energy into heat through mechanical work. The second involves the mutual interaction of two gases through an adiabatic piston. (BT)
Hensman, James; Lawrence, Neil D; Rattray, Magnus
2013-08-20
Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
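The hierarchical construction can be written directly as a sum of kernels: a shared (gene-level) covariance applies to every pair of observations, while a replicate-level covariance applies only within a replicate. The numpy sketch below builds that covariance for irregular, replicate-specific sampling times and draws one realization from the prior; the kernel choices and hyperparameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(t1, t2, variance, lengthscale):
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# irregular, replicate-specific sampling times (no common grid is required)
times = [np.sort(rng.uniform(0, 10, n)) for n in (8, 11, 6)]
reps = np.concatenate([np.full(len(t), r) for r, t in enumerate(times)])
t_all = np.concatenate(times)

# hierarchical covariance: shared gene-level kernel plus a replicate-specific kernel
K = rbf(t_all, t_all, variance=1.0, lengthscale=2.0)                   # shared across all replicates
K += (reps[:, None] == reps[None, :]) * rbf(t_all, t_all, 0.2, 1.0)    # only within a replicate
K += 0.05 * np.eye(len(t_all))                                         # observation noise

sample = rng.multivariate_normal(np.zeros(len(t_all)), K)              # one draw from the hierarchical prior
for r, t in enumerate(times):
    print(f"replicate {r}: {len(t)} irregular time points, first value {sample[reps == r][0]:+.2f}")
```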
The Same or Not the Same: Equivalence as an Issue in Educational Research
NASA Astrophysics Data System (ADS)
Lewis, Scott E.; Lewis, Jennifer E.
2005-09-01
In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
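The two one-sided tests (TOST) logic described above is short enough to state in code: equivalence within an interval of ±δ is claimed only if both one-sided null hypotheses are rejected. The sketch below uses a pooled-variance t version with invented SAT-like scores and an illustrative ±50-point equivalence interval.

```python
import numpy as np
from scipy import stats

def tost(x, y, delta):
    """Two one-sided t tests for equivalence of means within +/- delta (pooled-variance version)."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    se = sp * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    diff = np.mean(x) - np.mean(y)
    p_lower = stats.t.sf((diff + delta) / se, df)    # H0: true difference <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)   # H0: true difference >= +delta
    return diff, max(p_lower, p_upper)               # equivalence requires both tests to reject

rng = np.random.default_rng(3)
sat_control = rng.normal(1050, 120, 90)              # hypothetical scores, control section
sat_treat = rng.normal(1045, 120, 85)                # hypothetical scores, experimental section
diff, p = tost(sat_control, sat_treat, delta=50)     # equivalence interval of +/- 50 points
print(f"mean difference = {diff:.1f}, TOST p = {p:.4f} -> "
      f"{'equivalent' if p < 0.05 else 'equivalence not shown'}")
```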
Mental hospital depopulation in Canada: patient perspectives.
Herman, N J; Smith, C M
1989-06-01
This paper reviews briefly the history of mental hospital depopulation in Canada over the past 30 years. The term "deinstitutionalization" is often used but is unsatisfactory. Using an exploratory, qualitative, methodological approach, data were collected on the problems encountered by a disproportionate, stratified random sample of 139 formerly institutionalized patients living in various geographical locales in Eastern Canada. Adopting a symbolic interactionist theoretical approach, this study, in an effort to fill a neglect in the literature, attempted to discover what the everyday world(s) of Canadian ex-mental patients was really like. Problems encountered related to stigma, poor housing, lack of basic living skills, poverty, unemployment and aftercare. Quotations from patients are provided to illustrate such themes. The findings are discussed.
Life insurance risk assessment using a fuzzy logic expert system
NASA Technical Reports Server (NTRS)
Carreno, Luis A.; Steel, Roy A.
1992-01-01
In this paper, we present a knowledge based system that combines fuzzy processing with rule-based processing to form an improved decision aid for evaluating risk for life insurance. This application illustrates the use of FuzzyCLIPS to build a knowledge based decision support system possessing fuzzy components to improve user interactions and KBS performance. The results employing FuzzyCLIPS are compared with the results obtained from the solution of the problem using traditional numerical equations. The design of the fuzzy solution consists of a CLIPS rule-based system for some factors combined with fuzzy logic rules for others. This paper describes the problem, proposes a solution, presents the results, and provides a sample output of the software product.
NASA Astrophysics Data System (ADS)
Nakamura, Gen; Wang, Haibing
2017-05-01
Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval based on a single measurement, which is most likely to be practical.
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Rankin, Charles C.
2006-01-01
This document summarizes the STructural Analysis of General Shells (STAGS) development effort, STAGS performance for selected demonstration problems, and STAGS application problems illustrating selected advanced features available in the STAGS Version 5.0. Each problem is discussed including selected background information and reference solutions when available. The modeling and solution approach for each problem is described and illustrated. Numerical results are presented and compared with reference solutions, test data, and/or results obtained from mesh refinement studies. These solutions provide an indication of the overall capabilities of the STAGS nonlinear finite element analysis tool and provide users with representative cases, including input files, to explore these capabilities that may then be tailored to other applications.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
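For the special case of testing H0: ICC = 0 in the balanced one-way random-effects model, the power of the ANOVA F test has a standard closed form, and the required number of subjects can be found by direct search. The sketch below illustrates only that special case with assumed planning values; the article's procedures cover general null values and cost-allocation schemes.

```python
import numpy as np
from scipy import stats

def icc_power(n_groups, k, rho, alpha=0.05):
    """Power of the one-way ANOVA F test of H0: ICC = 0 when the true ICC is rho (balanced design)."""
    df1, df2 = n_groups - 1, n_groups * (k - 1)
    f_crit = stats.f.isf(alpha, df1, df2)
    scale = 1 + k * rho / (1 - rho)            # E[MSB]/E[MSW] under the true ICC
    return stats.f.sf(f_crit / scale, df1, df2)

k, rho, target = 3, 0.30, 0.80                 # 3 ratings per subject, anticipated ICC, target power
n = 5
while icc_power(n, k, rho) < target:
    n += 1
print(f"need about {n} subjects for power {icc_power(n, k, rho):.2f} "
      f"(ICC = {rho}, {k} ratings each, alpha = 0.05)")
```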
Shou, Wilson Z; Naidong, Weng
2003-01-01
It has become increasingly popular in drug development to conduct discovery pharmacokinetic (PK) studies in order to evaluate important PK parameters of new chemical entities (NCEs) early in the discovery process. In these studies, dosing vehicles are typically employed in high concentrations to dissolve the test compounds in dose formulations. This can pose significant problems for the liquid chromatography/tandem mass spectrometric (LC/MS/MS) analysis of incurred samples due to potential signal suppression of the analytes caused by the vehicles. In this paper, model test compounds in rat plasma were analyzed using a generic fast gradient LC/MS/MS method. Commonly used dosing vehicles, including poly(ethylene glycol) 400 (PEG 400), polysorbate 80 (Tween 80), hydroxypropyl beta-cyclodextrin, and N,N-dimethylacetamide, were fortified into rat plasma at 5 mg/mL before extraction. Their effects on the sample analysis results were evaluated by the method of post-column infusion. Results thus obtained indicated that polymeric vehicles such as PEG 400 and Tween 80 caused significant suppression (> 50%, compared with results obtained from plasma samples free from vehicles) to certain analytes, when minimum sample cleanup was used and the analytes happened to co-elute with the vehicles. Effective means to minimize this 'dosing vehicle effect' included better chromatographic separations, better sample cleanup, and alternative ionization methods. Finally, a real-world example is given to illustrate the suppression problem posed by high levels of PEG 400 in sample analysis, and to discuss steps taken in overcoming the problem. A simple but effective means of identifying a 'dosing vehicle effect' is also proposed. Copyright 2003 John Wiley & Sons, Ltd.
Advisory Algorithm for Scheduling Open Sectors, Operating Positions, and Workstations
NASA Technical Reports Server (NTRS)
Bloem, Michael; Drew, Michael; Lai, Chok Fung; Bilimoria, Karl D.
2012-01-01
Air traffic controller supervisors configure available sector, operating position, and work-station resources to safely and efficiently control air traffic in a region of airspace. In this paper, an algorithm for assisting supervisors with this task is described and demonstrated on two sample problem instances. The algorithm produces configuration schedule advisories that minimize a cost. The cost is a weighted sum of two competing costs: one penalizing mismatches between configurations and predicted air traffic demand and another penalizing the effort associated with changing configurations. The problem considered by the algorithm is a shortest path problem that is solved with a dynamic programming value iteration algorithm. The cost function contains numerous parameters. Default values for most of these are suggested based on descriptions of air traffic control procedures and subject-matter expert feedback. The parameter determining the relative importance of the two competing costs is tuned by comparing historical configurations with corresponding algorithm advisories. Two sample problem instances for which appropriate configuration advisories are obvious were designed to illustrate characteristics of the algorithm. Results demonstrate how the algorithm suggests advisories that appropriately utilize changes in airspace configurations and changes in the number of operating positions allocated to each open sector. The results also demonstrate how the advisories suggest appropriate times for configuration changes.
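A minimal sketch of the underlying idea follows (the configurations, demand profile, and cost weights are made-up illustrative values, not those of the paper): the schedule is treated as a shortest path over per-period configuration choices, and backward dynamic programming (value iteration) returns the minimum-cost advisory.

```python
# Backward DP over a small configuration-scheduling problem: per-period cost is a
# weighted sum of demand mismatch and reconfiguration effort.
INF = float("inf")
configs = {"A": 1, "B": 2, "C": 3}           # configuration -> number of open sectors
demand = [1, 1, 3, 3, 2, 1]                   # predicted demand per period (made up)
w_mismatch, w_change = 1.0, 0.6               # relative weights of the two costs

def stage_cost(cfg, prev, t):
    mismatch = abs(configs[cfg] - demand[t])
    change = 0 if prev is None or prev == cfg else 1
    return w_mismatch * mismatch + w_change * change

T = len(demand)
V = {c: 0.0 for c in configs}                 # terminal value function
policy = [dict() for _ in range(T)]
for t in reversed(range(T)):
    V_new = {}
    for prev in list(configs) + [None]:
        best, best_c = INF, None
        for c in configs:
            cost = stage_cost(c, prev, t) + V.get(c, 0.0)
            if cost < best:
                best, best_c = cost, c
        V_new[prev] = best
        policy[t][prev] = best_c
    V = V_new

prev, schedule = None, []                     # roll the policy forward
for t in range(T):
    prev = policy[t][prev]
    schedule.append(prev)
print(schedule)
```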
Automatic alignment for three-dimensional tomographic reconstruction
NASA Astrophysics Data System (ADS)
van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.
2018-02-01
In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
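The alternating structure of such joint parameter-retrieval and reconstruction schemes can be conveyed on a toy problem (this is an illustrative analogue, not the paper's algorithm): here "views" are circularly shifted, noisy copies of a 1-D object, the per-view shifts play the role of unknown alignment parameters, and a least-squares reconstruction step alternates with a grid search over each view's shift.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x_true = np.zeros(n); x_true[10:18] = 1.0                 # unknown object
true_shifts = rng.integers(-3, 4, size=8)                  # unknown alignment errors
views = [np.roll(x_true, s) + 0.01 * rng.standard_normal(n) for s in true_shifts]

x_est = np.zeros(n)
shifts = np.zeros(len(views), dtype=int)
for _ in range(10):
    # reconstruction step: average the back-aligned views (least squares here)
    x_est = np.mean([np.roll(v, -s) for v, s in zip(views, shifts)], axis=0)
    # alignment step: re-estimate each view's shift against the current object
    for i, v in enumerate(views):
        errs = [np.sum((np.roll(x_est, s) - v) ** 2) for s in range(-5, 6)]
        shifts[i] = range(-5, 6)[np.argmin(errs)]

# recovered shifts agree with the true ones up to a possible common offset
print(true_shifts, shifts)
```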
A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory
Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.
1981-01-01
Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. © 1981.
Catanzaro, Daniele; Schäffer, Alejandro A.; Schwartz, Russell
2016-01-01
Ductal Carcinoma In Situ (DCIS) is a precursor lesion of Invasive Ductal Carcinoma (IDC) of the breast. Investigating its temporal progression could provide fundamental new insights for the development of better diagnostic tools to predict which cases of DCIS will progress to IDC. We investigate the problem of reconstructing a plausible progression from single-cell sampled data of an individual with Synchronous DCIS and IDC. Specifically, by using a number of assumptions derived from the observation of cellular atypia occurring in IDC, we design a possible predictive model using integer linear programming (ILP). Computational experiments carried out on a preexisting data set of 13 patients with simultaneous DCIS and IDC show that the corresponding predicted progression models are classifiable into categories having specific evolutionary characteristics. The approach provides new insights into mechanisms of clonal progression in breast cancers and helps illustrate the power of the ILP approach for similar problems in reconstructing tumor evolution scenarios under complex sets of constraints. PMID:26353381
Reinforcement learning or active inference?
Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J
2009-07-29
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Catanzaro, Daniele; Shackney, Stanley E; Schaffer, Alejandro A; Schwartz, Russell
2016-01-01
Ductal Carcinoma In Situ (DCIS) is a precursor lesion of Invasive Ductal Carcinoma (IDC) of the breast. Investigating its temporal progression could provide fundamental new insights for the development of better diagnostic tools to predict which cases of DCIS will progress to IDC. We investigate the problem of reconstructing a plausible progression from single-cell sampled data of an individual with synchronous DCIS and IDC. Specifically, by using a number of assumptions derived from the observation of cellular atypia occurring in IDC, we design a possible predictive model using integer linear programming (ILP). Computational experiments carried out on a preexisting data set of 13 patients with simultaneous DCIS and IDC show that the corresponding predicted progression models are classifiable into categories having specific evolutionary characteristics. The approach provides new insights into mechanisms of clonal progression in breast cancers and helps illustrate the power of the ILP approach for similar problems in reconstructing tumor evolution scenarios under complex sets of constraints.
Creating targeted initial populations for genetic product searches in heterogeneous markets
NASA Astrophysics Data System (ADS)
Foster, Garrett; Turner, Callaway; Ferguson, Scott; Donndelinger, Joseph
2014-12-01
Genetic searches often use randomly generated initial populations to maximize diversity and enable a thorough sampling of the design space. While many of these initial configurations perform poorly, the trade-off between population diversity and solution quality is typically acceptable for small-scale problems. Navigating complex design spaces, however, often requires computationally intelligent approaches that improve solution quality. This article draws on research advances in market-based product design and heuristic optimization to strategically construct 'targeted' initial populations. Targeted initial designs are created using respondent-level part-worths estimated from discrete choice models. These designs are then integrated into a traditional genetic search. Two case study problems of differing complexity are presented to illustrate the benefits of this approach. In both problems, targeted populations lead to computational savings and product configurations with improved market share of preferences. Future research efforts to tailor this approach and extend it towards multiple objectives are also discussed.
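A hedged sketch of the seeding step follows (the attribute structure, the part-worths, and the population split are invented for illustration): each selected respondent's part-worths define a greedy "ideal" design, and these targeted designs are mixed with random ones to form the initial population.

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents, attributes = 200, [3, 4, 2, 5]       # levels per product attribute
# respondent-level part-worth utilities, one vector of level utilities per attribute
partworths = [rng.standard_normal((n_respondents, L)) for L in attributes]

def ideal_design(r):
    # greedy "targeted" design: the best level of each attribute for respondent r
    return [int(np.argmax(pw[r])) for pw in partworths]

def random_design():
    return [int(rng.integers(L)) for L in attributes]

pop_size, targeted_fraction = 60, 0.5
n_targeted = int(pop_size * targeted_fraction)
chosen = rng.choice(n_respondents, size=n_targeted, replace=False)
population = [ideal_design(r) for r in chosen] + \
             [random_design() for _ in range(pop_size - n_targeted)]
print(population[:3])
```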
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Woodbury, N. W.
1975-01-01
The MARS system is a tool for rapid prediction of aircraft or engine characteristics based on correlation-regression analysis of past designs stored in the data bases. An example of output obtained from the MARS system is given, involving derivation of an expression for the gross weight of subsonic transport aircraft in terms of nine independent variables. The need is illustrated for careful selection of correlation variables and for continual review of the resulting estimation equations. For Vol. 1, see N76-10089.
Neutron physics with accelerators
NASA Astrophysics Data System (ADS)
Colonna, N.; Gunsing, F.; Käppeler, F.
2018-07-01
Neutron-induced nuclear reactions are of key importance for a variety of applications in basic and applied science. Apart from nuclear reactors, accelerator-based neutron sources play a major role in experimental studies, especially for the determination of reaction cross sections over a wide energy span from sub-thermal to GeV energies. After an overview of present and upcoming facilities, this article deals with state-of-the-art detectors and equipment, including the often difficult sample problem. These issues are illustrated at selected examples of measurements for nuclear astrophysics and reactor technology with emphasis on their intertwined relations.
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
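One simple way to realize such a sampler (an illustrative sketch, not the formal probabilistic representation of the paper) is a Metropolis chain over adjacency matrices whose energy penalizes deviations of the edge count and feed-forward-loop count from prescribed targets; the graph size, targets, and penalty weights below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, target_edges, target_ffl = 30, 60, 15
lam_e, lam_f = 0.5, 0.5                      # how strongly each target is enforced

def energy(A):
    edges = A.sum()
    ffl = np.einsum("ij,jk,ik->", A, A, A)    # feed-forward loops (diagonal is zero)
    return lam_e * (edges - target_edges) ** 2 + lam_f * (ffl - target_ffl) ** 2

A = np.zeros((n, n), dtype=int)
E = energy(A)
for _ in range(20000):
    i, j = rng.integers(n, size=2)
    if i == j:
        continue
    A[i, j] ^= 1                              # propose toggling one directed edge
    E_new = energy(A)
    if rng.random() < np.exp(min(0.0, E - E_new)):
        E = E_new                             # accept the move
    else:
        A[i, j] ^= 1                          # reject: undo the toggle
print(A.sum(), np.einsum("ij,jk,ik->", A, A, A))
```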
Probabilistic Generation of Random Networks Taking into Account Information on Motifs Occurrence
Bois, Frederic Y.
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli. PMID:25493547
Towards Robust Multiagent Plans
2016-01-20
corresponding to the core UCT algorithm, and Figure 2 illustrates the UCT tree construction, with n denoting the number of state-space samples. ... that is, A(s_k) = ∅, or when k = H. Each node/action pair (s, a) is associated with a counter n(s, a) and a value accumulator Q̂(s, a). Both n(s, a) and Q̂(s, a) ... multi-armed bandit (MAB) problems (Robbins, 1952): if n(s_i, a) > 0 for all a ∈ A(s_i), then

a_{i+1} = argmax_a [ Q̂(s_i, a) + c * sqrt( log n(s_i) / n(s_i, a) ) ]    (1)
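For concreteness, the selection rule in Eq. (1) can be written as a few lines of code; the action set, value estimates, and counts below are made up for illustration.

```python
import math

def uct_select(actions, Q, n_sa, n_s, c=1.4):
    # pick an untried action first, otherwise maximise Q plus the exploration bonus
    untried = [a for a in actions if n_sa[a] == 0]
    if untried:
        return untried[0]
    return max(actions,
               key=lambda a: Q[a] + c * math.sqrt(math.log(n_s) / n_sa[a]))

actions = ["left", "stay", "right"]
Q = {"left": 0.2, "stay": 0.5, "right": 0.4}
n_sa = {"left": 10, "stay": 25, "right": 5}
print(uct_select(actions, Q, n_sa, n_s=sum(n_sa.values())))
```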
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
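The flavour of information-metric-based design can be conveyed by a scalar sketch (a strong simplification of the ensemble setting; the sensitivities and noise variances are invented): candidate measurements are ranked by the Shannon entropy reduction that a Kalman-type update of a single Gaussian parameter would produce.

```python
import numpy as np

prior_var = 4.0
candidates = {"obs_A": (0.5, 1.0),   # (sensitivity h, noise variance r)
              "obs_B": (1.0, 2.0),
              "obs_C": (2.0, 6.0)}

def posterior_var(h, r, v0=prior_var):
    return 1.0 / (1.0 / v0 + h * h / r)              # scalar Kalman/Bayes update

def entropy_difference(h, r, v0=prior_var):
    return 0.5 * np.log(v0 / posterior_var(h, r))     # Shannon entropy reduction

ranking = sorted(candidates, key=lambda c: entropy_difference(*candidates[c]),
                 reverse=True)
print(ranking)                                        # most informative candidate first
```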
Jurek, Anne M; Maldonado, George; Greenland, Sander
2013-03-01
Special care must be taken when adjusting for outcome misclassification in case-control data. Basic adjustment formulas using either sensitivity and specificity or predictive values (as with external validation data) do not account for the fact that controls are sampled from a much larger pool of potential controls. A parallel problem arises in surveys and cohort studies in which participation or loss is outcome related. We review this problem and provide simple methods to adjust for outcome misclassification in case-control studies, and illustrate the methods in a case-control birth certificate study of cleft lip/palate and maternal cigarette smoking during pregnancy. Adjustment formulas for outcome misclassification that ignore case-control sampling can yield severely biased results. In the data we examined, the magnitude of error caused by not accounting for sampling is small when population sensitivity and specificity are high, but increases as (1) population sensitivity decreases, (2) population specificity decreases, and (3) the magnitude of the differentiality increases. Failing to account for case-control sampling can result in an odds ratio adjusted for outcome misclassification that is either too high or too low. One needs to account for outcome-related selection (such as case-control sampling) when adjusting for outcome misclassification using external information. Copyright © 2013 Elsevier Inc. All rights reserved.
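The basic back-adjustment that the authors caution about can be sketched numerically (the counts, sensitivity, and specificity below are invented); by itself it ignores the case-control sampling issue that is the point of the paper, so it is only the naive building block.

```python
# Invert E[observed positives] = Se*T + (1 - Sp)*(total - T) for the true count T.
def true_positives(observed_pos, total, se, sp):
    return (observed_pos - (1 - sp) * total) / (se + sp - 1)

se, sp = 0.85, 0.95
for label, obs_pos, total in [("exposed", 120, 1000), ("unexposed", 80, 1000)]:
    print(label, round(true_positives(obs_pos, total, se, sp), 1))
```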
Ideational conflict: the key to promoting creative activity in the workplace.
Dagostino, Lorraine
1999-01-01
This article defines the concept of ideational conflict as it applies to the process of identifying a problem and developing a plan of action for resolving the problem. Then the article examines and illustrates how the ideational conflict that is generated by brainstorming can lead to creative thinking that resolves disparate points of view. The illustration extends the generally accepted view of brainstorming and applies it to identifying a problem related to the university/college work environment. The problem situation is that of the loss of high-ability faculty and students to other institutions.
Thin-Layer Chromatography Experiments That Illustrate General Problems in Chromatography.
ERIC Educational Resources Information Center
Lederer, M.; Leipzig-Pagani, E.
1996-01-01
Describes experiments that illustrate a number of general principles such as pattern identification, displacement chromatography, and salting-out adsorption, plus an experiment that demonstrates that identification by chromatography alone is impossible. Illustrates that chromatography is still possible with quite simple means, notwithstanding the…
Distribution of Heavy Metal Pollution in Surface Soil Samples in China: A Graphical Review.
Duan, Qiannan; Lee, Jianchao; Liu, Yansong; Chen, Han; Hu, Huanyu
2016-09-01
Soil pollution in China is among the most widespread and severe in the world. Although environmental researchers are well aware of the acuteness of soil pollution in China, a precise and comprehensive mapping system of soil pollution has never been released. By compiling, integrating and processing nearly a decade of soil pollution data, we have created cornerstone maps that illustrate the distribution and concentration of cadmium, lead, zinc, arsenic, copper and chromium in surficial soil across the nation. These summarized maps and the integrated data provide precise geographic coordinates and heavy metal concentrations; they are also the first to provide such thorough and comprehensive details about heavy metal soil pollution in China. In this study, we focus on some of the most polluted areas to illustrate the severity of this pressing environmental problem and demonstrate that most developed and populous areas have been subjected to heavy metal pollution.
Paquet, Victor; Joseph, Caroline; D'Souza, Clive
2012-01-01
Anthropometric studies typically require a large number of individuals that are selected in a manner so that demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries the product comes from more than one production line, and comparative life tests are required. This calls for sampling from the different production lines, which gives rise to a joint censoring scheme. In this article we consider the lifetime Pareto distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals as well as the bootstrap confidence intervals of the model parameters are obtained. Also Bayesian point and credible intervals of the model parameters are presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
Consensus for second-order multi-agent systems with position sampled data
NASA Astrophysics Data System (ADS)
Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng
2016-10-01
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. The full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled data-based consensus protocols are constructed. With the provided sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, spectrum of the Laplacian matrix and sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions reduce to those for the continuous-time protocol, consistent with the existing result for the continuous-time case. Finally, the effectiveness of our established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
NASA Astrophysics Data System (ADS)
Akcay, Hakan; Yager, Robert
2010-10-01
The purpose of this study was to investigate the advantages of an approach to instruction using current problems and issues as curriculum organizers and illustrating how teaching must change to accomplish real learning. The study sample consisted of 41 preservice science teachers (13 males and 28 females) in a model science teacher education program. Both qualitative and quantitative research methods were used to determine success with science discipline-specific “Societal and Educational Applications” courses as one part of a total science teacher education program at a large Midwestern university. Students were involved with idea generation, consideration of multiple points of views, collaborative inquiries, and problem solving. All of these factors promoted grounded instruction using constructivist perspectives that situated science with actual experiences in the lives of students.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. Response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, analysis starts with importance sampling concepts and using a represented two-step updating rule of design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher experimental design, with the last design point and a represented effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, simplicity and efficiency of the proposed algorithm and the effectiveness of the represented rules are shown.
Extended Phase-Space Methods for Enhanced Sampling in Molecular Simulations: A Review.
Fujisaki, Hiroshi; Moritsugu, Kei; Matsunaga, Yasuhiro; Morishita, Tetsuya; Maragliano, Luca
2015-01-01
Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein, and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated to functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it allows to quickly sample important regions of the free-energy landscape via automatic exploration.
A Simple Organic Microscale Experiment Illustrating the Equilibrium Aspect of the Aldol Condensation
NASA Astrophysics Data System (ADS)
Harrison, Ernest A., Jr.
1998-05-01
A simple microscale experiment has been developed that illustrates the equilibrium aspect of the aldol condensation by using two versions of the standard preparation of tetraphenylcyclopentadienone (5) from benzil (1) and 1,3-diphenyl-2-propanone (2). In one version (high base concentration) a mixture of 5 and the diastereomeric 4-hydroxy-2,3,4,5-tetraphenyl-2-cyclopenten-1-ones 3 and 4 is produced, while in the other (low base concentration) a mixture of 1, 2, 3, and 4 results. The experiment is typically carried out in conjunction with the previously reported preparation/dehydration of 3, thus the students provide themselves with authentic samples of 3 and 5. Using these, plus authentic samples of 1 and 2 which are made available, students are able to identify all of the components in the equilibrium mixtures, except 4, by TLC analysis. In the case of 4, students are expected to propose a reasonable structure for this compound based on the observed chemistry and the spectroscopic evidence which is provided (i.e., NMR, IR and mass spectra). The experiment lends itself nicely to either the traditional or problem-solving approach, and it also opens up opportunities for collaborative learning.
Replica approach to mean-variance portfolio optimization
NASA Astrophysics Data System (ADS)
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition is taking place. The out of sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point inversely proportional to the divergent estimation error.
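A small simulation conveys the same qualitative behaviour (this is not the replica calculation itself; the identity covariance and the sample sizes are illustrative assumptions): as r = N/T approaches 1, the in-sample variance of the estimated minimum-variance portfolio shrinks while its out-of-sample variance blows up.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
true_cov = np.eye(N)                           # true covariance (identity for simplicity)
ones = np.ones(N)

for T in (500, 100, 60):                        # r = 0.10, 0.50, ~0.83
    X = rng.multivariate_normal(np.zeros(N), true_cov, size=T)
    S = np.cov(X, rowvar=False)                 # sample covariance estimate
    w = np.linalg.solve(S, ones)
    w /= ones @ w                               # minimum-variance weights, summing to 1
    in_sample = w @ S @ w
    out_of_sample = w @ true_cov @ w
    print(f"r={N/T:.2f}  in-sample={in_sample:.4f}  out-of-sample={out_of_sample:.4f}")
```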
Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-01
This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms. PMID:26784203
Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving.
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-15
This paper describes a real-time motion planner based on the drivers' visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers' visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers' visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms.
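For reference, the baseline RRT loop that the guided sampling modifies can be sketched as follows (the obstacle, bounds, and step size are invented, the edge is only checked for collision at its endpoint, and the visual-behaviour guidance and B-spline smoothing of the paper are not reproduced).

```python
import math, random

random.seed(0)
start, goal = (0.0, 0.0), (9.0, 9.0)
obstacle = (5.0, 5.0, 2.0)                      # circular obstacle: (cx, cy, radius)
step, goal_tol = 0.5, 0.5

def collision_free(p):
    cx, cy, r = obstacle
    return math.hypot(p[0] - cx, p[1] - cy) > r

nodes, parent = [start], {0: None}
for _ in range(5000):
    sample = (random.uniform(0, 10), random.uniform(0, 10))
    i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))  # nearest node
    d = math.dist(nodes[i], sample)
    new = (nodes[i][0] + step * (sample[0] - nodes[i][0]) / d,
           nodes[i][1] + step * (sample[1] - nodes[i][1]) / d) if d > 0 else nodes[i]
    if not collision_free(new):
        continue
    nodes.append(new)
    parent[len(nodes) - 1] = i
    if math.dist(new, goal) < goal_tol:
        path, k = [], len(nodes) - 1            # walk back to the root to recover the path
        while k is not None:
            path.append(nodes[k]); k = parent[k]
        print("path length:", len(path))
        break
```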
ERIC Educational Resources Information Center
Shumway, Richard J.
1989-01-01
Illustrated is the problem of solving equations and some different strategies students might employ when using available technology. Gives illustrations for: exact solutions, approximate solutions, and approximate solutions which are graphically generated. (RT)
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.; Kurien, James; Clancy, Daniel (Technical Monitor)
2001-01-01
We present some diagnosis and control problems that are difficult to solve with discrete or purely qualitative techniques. We analyze the nature of the problems, classify them and explain why they are frequently encountered in systems with closed loop control. This paper illustrates the problem with several examples drawn from industrial and aerospace applications and presents detailed information on one important application: In-Situ Resource Utilization (ISRU) on Mars. The model for an ISRU plant is analyzed showing where qualitative techniques are inadequate to identify certain failure modes and to maintain control of the system in degraded environments. We show why the solution to the problem will result in significantly more robust and reliable control systems. Finally, we illustrate requirements for a solution to the problem by means of examples.
Tong, Dandan; Li, Wenfu; Tang, Chaoying; Yang, Wenjing; Tian, Yan; Zhang, Lei; Zhang, Meng; Qiu, Jiang; Liu, Yijun; Zhang, Qinglin
2015-07-01
Many scientific inventions (SI) throughout history were inspired by heuristic prototypes (HPs). For instance, an event or piece of knowledge similar to displaced water from a tub inspired Archimedes' principle. However, the neural mechanisms underlying this insightful problem solving are not very clear. Thus, the present study explored the neural correlates used to solve SI problems facilitated by HPs. Each HP had two versions: a literal description with an illustration (LDI) and a literal description with no illustration (LDNI). Thirty-two participants were divided randomly into these two groups. Blood oxygenation level-dependent fMRI contrasts between LDI and LDNI groups were measured. Greater activity in the right middle occipital gyrus (RMOG, BA19), right precentral gyrus (RPCG, BA4), and left middle frontal gyrus (LMFG, BA46) was found in the LDI group as compared to the LDNI group. We discuss these results in terms of cognitive functions within these regions related to problem solving and memory retrieval. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gazit, Avikam; Patkin, Dorit
2012-03-01
The article aims to check the way adults, some who are practicing mathematics teachers at elementary school, some who are academicians making a career change to mathematics teachers at junior high school and the rest who are pre-service mathematics teachers at elementary school, cope with the solution of everyday real-world problems of buying and selling. The findings show that even adults with mathematical background tend to make mistakes in solving everyday real-world problems. Only about 70% of the adults who have an orientation to mathematics solved the sample problem correctly. The lowest percentage of success was demonstrated by the academicians making a career change to junior high school mathematics teachers whereas the highest percentage of success was manifested by pre-service elementary school mathematics teachers. Moreover, the findings illustrate that life experience of the practicing mathematics teachers and, mainly, of the academicians making a career change, who were older than the pre-service teachers, did not facilitate the solution of such a real-world problem. Perhaps the reason resides in the process of mathematics teaching at school, which does not put an emphasis on the solution of everyday real-world problems.
A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan; Zhang, Dongxiao; Lin, Guang
A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield the simulation results matching the measurements. For such ill-posed history matching problems, Bayesian theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm reveals a great improvement in terms of computational efficiency compared with previously studied approaches for the sample problem.
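One ingredient of such approaches, importance sampling from a Gaussian-mixture proposal for a bimodal posterior, can be sketched as follows (the densities are invented and the Gaussian-process surrogate and refinement loop of the paper are omitted).

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(x):                       # bimodal unnormalised posterior (illustrative)
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.5) ** 2,
                        -0.5 * ((x - 2.0) / 0.5) ** 2)

means, sds, weights = np.array([-2.0, 2.0]), np.array([0.8, 0.8]), np.array([0.5, 0.5])

def sample_proposal(n):                # draw from the two-component Gaussian mixture
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(means[comp], sds[comp])

def log_proposal(x):                   # mixture log-density evaluated at x
    comps = -0.5 * ((x[:, None] - means) / sds) ** 2 - np.log(sds * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(comps + np.log(weights), axis=1)

x = sample_proposal(5000)
logw = log_post(x) - log_proposal(x)
w = np.exp(logw - logw.max())
w /= w.sum()                           # self-normalised importance weights
print("posterior mean estimate:", float(np.sum(w * x)))
```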
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hero, Alfred O.; Rajaratnam, Bala
When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
Hero, Alfred O.; Rajaratnam, Bala
2015-01-01
When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
Hero, Alfred O.; Rajaratnam, Bala
2015-12-09
When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
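The sample-starved regime is easy to demonstrate by simulation (the sizes and threshold below are arbitrary illustrative choices): with n far smaller than p, many pairwise sample correlations exceed a naive threshold even when all variables are truly independent, which is why thresholds must be calibrated to the (n, p) regime.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, threshold = 20, 500, 0.6
X = rng.standard_normal((n, p))                 # truly uncorrelated variables
R = np.corrcoef(X, rowvar=False)
upper = np.triu_indices(p, k=1)
spurious = int(np.sum(np.abs(R[upper]) > threshold))
print(f"{spurious} of {len(upper[0])} pairs exceed |r| > {threshold} with n={n}")
```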
Geometric Reasoning about a Circle Problem
ERIC Educational Resources Information Center
Gonzalez, Gloriana; DeJarnette, Anna F.
2013-01-01
What does problem-based instruction do for students and teachers? The open-ended geometry problem presented in this article, along with examples of students' work on the problem, illustrates how problem-based instruction can help students develop their mathematical proficiency. Recent studies have shown that students who experience problem-based…
Some Student Problems: Bungi Jumping, Maglev Trains, and Misaligned Computer Monitors.
ERIC Educational Resources Information Center
Whineray, Scott
1991-01-01
Presented are three physics problems from the New Zealand Entrance Scholarship examinations which are generally attempted by more able students. Problem situations, illustrations, and solutions are detailed. (CW)
Research on Illustrations in Text: Issues and Perspectives.
ERIC Educational Resources Information Center
Duchastel, Philippe C.
1980-01-01
Explores the problems of research on the effects of illustrations in text and other teaching materials. Several research frameworks are described, and a functional approach is suggested as a method of improvement. (BK)
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
The propagation of photons in a medium with strongly anisotropic scattering is a problem with a considerable history. Like the propagation of electrons in metal foils, it may be solved in the small-angle scattering approximation by the use of Fourier-transform techniques. In certain limiting cases, one may even obtain analytic expressions. This paper presents some of these results in a model-independent form and also illustrates them by the use of four different phase-function models. Sample calculations are provided for comparison purposes
Optimization principles for preparation methods and properties of fine ferrite materials
NASA Astrophysics Data System (ADS)
Borisova, N. M.; Golubenko, Z. V.; Kuz'micheva, T. G.; Ol'khovik, L. P.; Shabatin, V. P.
1992-08-01
The paper is devoted to the problems of development of fine materials based on Ba-ferrite for vertical magnetic recording in particular. Taking an analogue — BaFe12-2xCoxTexO19 — we have optimized the melt co-precipitation method and shown a new opportunity to provide chemical homogeneity of microcrystallites by means of cryotechnology. Magnetic characteristics of the magnetic tape experimental sample for digital video recording are presented. A series of principles of consistent control of ferrite powder properties are formulated and illustrated with specific developments.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
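For intuition only, the classical Good-Turing estimates of these quantities can be computed from the frequencies of frequencies (the toy sample below is invented); the paper's Bayesian nonparametric estimator of the (n+m+1)th-draw discovery probability is a different, closed-form object and is not reproduced here.

```python
from collections import Counter

sample = ["geneA"] * 5 + ["geneB"] * 3 + ["geneC", "geneD", "geneE"] + ["geneF"] * 2
n = len(sample)
counts = Counter(sample)                          # species -> frequency in the sample
freq_of_freq = Counter(counts.values())           # r -> number of species seen r times

p_new = freq_of_freq[1] / n                       # chance the next draw is a new species
# total probability mass of species currently seen exactly r times (Good-Turing)
p_seen_r = {r: (r + 1) * freq_of_freq.get(r + 1, 0) / n for r in sorted(freq_of_freq)}
print(p_new, p_seen_r)
```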
Writing for Distance Education. Samples Booklet.
ERIC Educational Resources Information Center
International Extension Coll., Cambridge (England).
Approaches to the format, design, and layout of printed instructional materials for distance education are illustrated in 36 samples designed to accompany the manual, "Writing for Distance Education." Each sample is presented on a single page with a note pointing out its key features. Features illustrated include use of typescript layout, a comic…
Annealed Importance Sampling Reversible Jump MCMC algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
The menu-setting problem and subsidized prices: drug formulary illustration.
Olmstead, T; Zeckhauser, R
1999-10-01
The menu-setting problem (MSP) determines the goods and services an institution offers and the prices charged. It appears widely in health care, from choosing the services an insurance arrangement offers, to selecting the health plans an employer proffers. The challenge arises because purchases are subsidized, and consumers (or their physician agents) may make cost-ineffective choices. The intuitively comprehensible MSP model--readily solved by computer using actual data--helps structure thinking and support decision making about such problems. The analysis uses drug formularies--lists of approved drugs in a plan or institution--to illustrate the framework.
NASA Astrophysics Data System (ADS)
Marusak, Piotr M.; Kuntanapreeda, Suwat
2018-01-01
The paper considers application of a neural network based implementation of a model predictive control (MPC) algorithm to electromechanical plants. Properties of such control plants imply that a relatively short sampling time should be used. However, in such a case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, the constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when the standard numerical quadratic programming routine is used. However, a very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
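The general idea of replacing a quadratic-programming solver by the integration of a dynamical system can be sketched for a small box-constrained quadratic programme (H, f, the bounds, and the step sizes are assumptions; this is a generic projected gradient flow, not the paper's network).

```python
import numpy as np

H = np.array([[2.0, 0.5],
              [0.5, 1.0]])                        # positive-definite Hessian
f = np.array([-1.0, 1.0])
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])

def project(u):                                    # projection onto the box constraints
    return np.clip(u, lo, hi)

u, alpha, dt = np.zeros(2), 0.5, 0.05
for _ in range(2000):                              # Euler steps of du/dt = P(u - alpha*grad) - u
    u = u + dt * (project(u - alpha * (H @ u + f)) - u)
print(u)                                           # approaches the box-constrained minimiser
```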
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
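A static analogue of the importance sampling idea can be sketched as follows (this is not the Girsanov/dynamic setting of the paper, and the limit-state function is invented): the sampling density is shifted to the design point, as in FORM, the samples are reweighted, and the estimate is compared with the exact failure probability.

```python
import math
import numpy as np

rng = np.random.default_rng(6)
beta_line = 4.0                                    # limit state g(x) = beta_line - x1 - x2
design_point = np.array([2.0, 2.0])                # closest failure point to the origin

N = 20000
x = rng.standard_normal((N, 2)) + design_point     # proposal N(design_point, I)
fail = (beta_line - x[:, 0] - x[:, 1]) <= 0
# importance weights: phi(x) / phi(x - mu) = exp(-x.mu + |mu|^2 / 2)
w = np.exp(-x @ design_point + 0.5 * design_point @ design_point)
pf_is = float(np.mean(fail * w))

pf_exact = 0.5 * math.erfc((beta_line / math.sqrt(2)) / math.sqrt(2))
print(pf_is, pf_exact)
```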
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs are only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
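The abstract does not spell out the sequential refinement algorithm; as one generic instance of iteratively refining an importance distribution for a small failure probability, the cross-entropy-style loop below shifts the mean of a Gaussian proposal toward the failure region (toy model, assumed threshold and sample sizes).

    import numpy as np

    rng = np.random.default_rng(0)
    g = lambda x: x                  # toy "computational model" response
    t_final = 3.5                    # failure threshold; true P = 1 - Phi(3.5) ~ 2.3e-4
    n, rho = 5000, 0.1

    mu = 0.0                         # proposal N(mu, 1); only the mean is refined here
    for _ in range(20):
        x = rng.normal(mu, 1.0, n)
        y = g(x)
        t = min(t_final, np.quantile(y, 1 - rho))   # intermediate threshold
        mu = x[y >= t].mean()                       # move the proposal toward failure
        if t >= t_final:
            break

    # unbiased importance-sampling estimate with the refined proposal N(mu, 1)
    x = rng.normal(mu, 1.0, n)
    w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - mu)**2)   # N(0,1) over N(mu,1) density ratio
    p_hat = np.mean((g(x) >= t_final) * w)
    print("estimated failure probability:", p_hat)          # compare with ~2.33e-4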
2014-01-01
Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298
Youngstrom, Eric A
2014-03-01
To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
Decru, Eva; Moelants, Tuur; De Gelas, Koen; Vreven, Emmanuel; Verheyen, Erik; Snoeks, Jos
2016-01-01
This study evaluates the utility of DNA barcoding relative to traditional morphology-based species identification for the fish fauna of the north-eastern Congo basin. We compared DNA sequences (COI) of 821 samples from 206 morphologically identified species. Best match, best close match and all species barcoding analyses resulted in a rather low identification success of 87.5%, 84.5% and 64.1%, respectively. The ratio 'nearest-neighbour distance/maximum intraspecific divergence' was lower than 1 for 26.1% of the samples, indicating possible taxonomic problems. In ten genera, belonging to six families, the number of species inferred from mtDNA data exceeded the number of species identified using morphological features; and in four cases indications of possible synonymy were detected. Finally, the DNA barcodes confirmed previously known identification problems within certain genera of the Clariidae, Cyprinidae and Mormyridae. Our results underscore the large number of problems lingering in the taxonomy of the fish fauna of the Congo basin and illustrate why DNA barcodes will contribute to future efforts to compile a reliable taxonomic inventory of the Congo basin fish fauna. Therefore, the obtained barcodes were deposited in the reference barcode library of the Barcode of Life Initiative. © 2015 John Wiley & Sons Ltd.
Wells, Brittny A; Glueckauf, Robert L; Bernabe, Daniel; Kazmer, Michelle M; Schettini, Gabriel; Springer, Jane; Sharma, Dinesh; Meng, Hongdao; Willis, Floyd B; Graff-Radford, Neill
2017-02-01
The primary objectives of the present study were: (a) to develop the African American Dementia Caregiver Problem Inventory (DCPI-A) that assesses the types and frequency of problems reported by African American dementia caregivers seeking cognitive-behavioral intervention, (b) to evaluate the intercoder reliability of the DCPI-A, and (c) to measure the perceived severity of common problems reported by this caregiver population. The development of the DCPI-A was divided into 3 major steps: (a) creating an initial sample pool of caregiver problems derived from 2 parent randomized clinical trials, (b) formulating a preliminary version of the DCPI-A, and (c) finalizing the development of the DCPI-A that includes 20 problem categories with explicit coding rules, definitions, and illustrative examples. The most commonly reported caregiver problems fell into 5 major categories: (a) communication problems with care recipients, family members, and/or significant others, (b) problems with socialization, recreation, and personal enhancement time; (c) problems with physical health and health maintenance, (d) problems in managing care recipients' activities of daily living; and (e) problems with care recipients' difficult behaviors. Intercoder reliability was moderately high for both percent agreement and Cronbach's kappa. A similar positive pattern of results was obtained for the analysis of coder drift. The descriptive analysis of the types and frequency of problems of African American dementia caregivers coupled with the outcomes of the psychometric evaluation bode well for the adoption of the DCPI-A in clinical settings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Towards a concept of sensible drinking and an illustration of measure.
Harburg, E; Gleiberman, L; Difranceisco, W; Peele, S
1994-07-01
The major focus of research on alcohol is not on the majority who drink without problems, but on the small minority who have extreme problems. Difficulty in conceiving, measuring, and analyzing non-problem drinking lies in the exclusively problem-drinking orientation of most drinking measures. Drawing on conventionally used scales (e.g. Short Michigan Alcoholism Screening Test) and other established concepts in the alcohol literature (e.g. craving, hangover), a set of 24 items was selected to classify all persons in a sample from Tecumseh, Michigan, as to their alcohol-related behaviors (N = 1266). A Sensible-Problem Drinking Classification (SPDC) was developed with five categories: very sensible, sensible, borderline, problem, and impaired. A variety of known alcohol and psychosocial variables were related monotonically across these categories in expected directions. Ethanol ounces per week was only modestly related to SPDC groups: R2 = 0.09 for women, R2 = 0.21 for men. The positive relationship of problem and non-problem SPDC groups to high and low blood pressure was P = 0.07, while ethanol (oz/week) was uncorrelated to blood pressure (mm Hg) in this subsample (N = 453). The development of SPDC requires additional items measuring self and group regulatory alcohol behavior. However, this initial analysis of no-problem subgroups has direct import for public health regulation of alcohol use by providing a model of a sensible view of alcohol use.
Dynamic optimization of chemical processes using ant colony framework.
Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D
2001-11-01
The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum at relatively low computational effort. The examples, with varying degrees of complexity, analyzed here illustrate its potential for solving a large class of process optimization problems in chemical engineering.
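As a rough sketch of how an ant colony framework can attack a discretized dynamic optimization (illustrative only; the stage dynamics, control grid and pheromone rules below are assumptions, not the authors' benchmark problems):

    import numpy as np

    rng = np.random.default_rng(1)
    stages, levels, ants = 10, 21, 20
    u_grid = np.linspace(0.0, 1.0, levels)       # candidate control values per stage

    def objective(u):                            # hypothetical stage dynamics: maximize x(T)
        x = 1.0
        for uk in u:
            x = x + 0.1 * (uk * x - uk**2)
        return x

    tau = np.ones((stages, levels))              # pheromone trails
    best_u, best_idx, best_J = None, None, -np.inf
    for _ in range(200):
        p = tau / tau.sum(axis=1, keepdims=True)
        for _ant in range(ants):
            idx = [rng.choice(levels, p=p[k]) for k in range(stages)]
            u = u_grid[idx]
            J = objective(u)
            if J > best_J:
                best_J, best_u, best_idx = J, u, idx
        tau *= 0.9                               # evaporation
        for k, j in enumerate(best_idx):         # reinforce the best control profile
            tau[k, j] += 1.0

    print("best objective found:", best_J)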
Tarescavage, Anthony M; Corey, David M; Ben-Porath, Yossef S
2016-04-01
The purpose of the current study was to identify Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) correlates of police officer integrity violations and other problem behaviors in an archival database with original MMPI item responses and collateral information regarding integrity violations obtained for 417 male officers. In Study 1, we estimated MMPI-2-RF scores from the MMPI item pool (which includes approximately 80% of the MMPI-2-RF items) in a normative sample, a psychiatric inpatient sample, and a police officer sample, and conducted analyses that demonstrated the comparability of estimated and full scale scores for 41 of the 51 MMPI-2-RF scales. In Study 2, we correlated estimated MMPI-2-RF scores with information about subsequent integrity violations and problem behaviors from the integrity violation data set. Several meaningful associations were obtained, predominately with scales from the emotional, thought, and behavioral dysfunction domains of the MMPI-2-RF. Application of a correction for range restriction yielded substantially improved validity estimates. Finally, we calculated relative risk ratios for the statistically significant findings using cutoffs lower than 65T, which is traditionally used to identify clinically significant elevations, and found several meaningful relative risk ratios. © The Author(s) 2015.
The Place of Problem Solving in Contemporary Mathematics Curriculum Documents
ERIC Educational Resources Information Center
Stacey, Kaye
2005-01-01
This paper reviews the presentation of problem solving and process aspects of mathematics in curriculum documents from Australia, UK, USA and Singapore. The place of problem solving in the documents is reviewed and contrasted, and illustrative problems from teachers' support materials are used to demonstrate how problem solving is now more often…
King Oedipus and the Problem Solving Process.
ERIC Educational Resources Information Center
Borchardt, Donald A.
An analysis of the problem solving process reveals at least three options: (1) finding the cause, (2) solving the problem, and (3) anticipating potential problems. These methods may be illustrated by examining "Oedipus Tyrannus," a play in which a king attempts to deal with a problem that appears to be beyond his ability to solve, and…
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
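HYDRA-II itself is far richer, but the flavour of an explicit finite-difference energy solution can be conveyed by a one-dimensional transient conduction sketch (assumed material properties and boundary temperatures):

    import numpy as np

    nx, L = 51, 1.0
    dx = L / (nx - 1)
    alpha = 1.0e-4                    # assumed thermal diffusivity, m^2/s
    dt = 0.4 * dx**2 / alpha          # respects the explicit stability limit (<= 0.5)

    T = np.full(nx, 300.0)            # initial temperature field, K
    T[0], T[-1] = 400.0, 300.0        # fixed boundary temperatures

    for _ in range(5000):             # forward-time, centred-space update
        T[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

    print("midpoint temperature:", round(float(T[nx // 2]), 2))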
NASA Technical Reports Server (NTRS)
Watson, V. R.
1983-01-01
A personal computer has been used to illustrate physical phenomena and problem solution techniques in engineering classes. According to student evaluations, instruction of concepts was greatly improved through the use of these illustrations. This paper describes the class of phenomena that can be effectively illustrated, the techniques used to create these illustrations, and the techniques used to display the illustrations in regular classrooms and over an instructional TV network. The features of a personal computer required to apply these techniques are listed. The capabilities of some present personal computers are discussed and a forecast of the capabilities of future personal computers is presented.
Problem-Solving during Shared Reading at Kindergarten
ERIC Educational Resources Information Center
Gosen, Myrte N.; Berenst, Jan; de Glopper, Kees
2015-01-01
This paper reports on a conversation analytic study of problem-solving interactions during shared reading at three kindergartens in the Netherlands. It illustrates how teachers and pupils discuss book characters' problems that arise in the events in the picture books. A close analysis of the data demonstrates that problem-solving interactions do…
A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Bilen, Canan; Sterling, Jerry
2003-01-01
Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of strength requirement and available strength. This paper develops a procedure for determining such a threshold, with a desired degree of confidence, for structural materials with little or no industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one-dimensional) structural elements. One important aspect of the formulation is that it extrapolates to expected probability distributions for long-length specimen samples from a hypothesized probability distribution that has been obtained from a shorter-length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.
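One standard ingredient of such an extrapolation is the weakest-link argument, under which the survival probability of a long lineal element is the short-gauge survival raised to the length ratio. The sketch below assumes Weibull-distributed short-gauge strength with made-up parameters; the paper's actual confidence treatment is more involved.

    import numpy as np

    L0, L = 0.5, 50.0                 # assumed test gauge length and service length, m
    m, s0 = 10.0, 100.0               # assumed Weibull modulus and scale from short tests

    def survival_short(s):
        return np.exp(-(s / s0) ** m)          # Weibull survival at the tested length

    def survival_long(s):
        return survival_short(s) ** (L / L0)   # weakest-link scaling to the longer length

    stresses = np.linspace(1.0, 150.0, 3000)
    threshold = stresses[np.argmax(survival_long(stresses) < 0.99)]
    print("stress with 99% survival at full length:", round(float(threshold), 1))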
SCALE PROBLEMS IN REPORTING LANDSCAPE PATTERN AT THE REGIONAL SCALE
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the g...
Donovan, John E.; Chung, Tammy
2015-01-01
Objective: Most studies of adolescent drinking focus on single alcohol use behaviors (e.g., high-volume drinking, drunkenness) and ignore the patterning of adolescents’ involvement across multiple alcohol behaviors. The present latent class analyses (LCAs) examined a procedure for empirically determining multiple cut points on the alcohol use behaviors in order to establish a typology of adolescent alcohol involvement. Method: LCA was carried out on six alcohol use behavior indicators collected from 6,504 7th through 12th graders who participated in Wave I of the National Longitudinal Study of Adolescent Health (AddHealth). To move beyond dichotomous indicators, a “progressive elaboration” strategy was used, starting with six dichotomous indicators and then evaluating a series of models testing additional cut points on the ordinal indicators at progressively higher points for one indicator at a time. Analyses were performed on one random half-sample, and confirmatory LCAs were performed on the second random half-sample and in the Wave II data. Results: The final model consisted of four latent classes (never or non–current drinkers, low-intake drinkers, non–problem drinkers, and problem drinkers). Confirmatory LCAs in the second random half-sample from Wave I and in Wave II support this four-class solution. The means on the four latent classes were also generally ordered on an array of measures reflecting psychosocial risk for problem behavior. Conclusions: These analyses suggest that there may be four different classes or types of alcohol involvement among adolescents, and, more importantly, they illustrate the utility of the progressive elaboration strategy for moving beyond dichotomous indicators in latent class models. PMID:25978828
Höllig, Anke; Stoffel-Wagner, Birgit; Clusmann, Hans; Veldeman, Michael; Schubert, Gerrit A; Coburn, Mark
2017-01-01
Aneurysmal subarachnoid hemorrhage triggers an intense inflammatory response, which is suspected to increase the risk for secondary complications such as delayed cerebral ischemia (DCI). However, to date, monitoring of the inflammatory response to detect secondary complications such as DCI has not become part of routine clinical diagnostics. Here, we aim to illustrate the time courses of inflammatory parameters after aneurysmal subarachnoid hemorrhage (aSAH) and discuss the problems of inflammatory parameters as biomarkers as well as their possible relevance for a deeper understanding of the pathophysiology after aSAH and for sophisticated planning of future studies. In this prospective cohort study, 109 patients with aSAH were initially included; n = 28 patients had to be excluded. Serum and, if possible, cerebrospinal fluid (CSF) samples (n = 48) were retrieved at days 1, 4, 7, 10, and 14 after aSAH. Samples were analyzed for leukocyte count and C-reactive protein (CRP) (serum samples only) as well as matrix metallopeptidase 9 (MMP9), intercellular adhesion molecule 1 (ICAM1), and leukemia inhibitory factor (LIF) (both serum and CSF samples). Time courses of the inflammatory parameters were displayed and related to the occurrence of DCI. We illustrate the time courses of leukocyte count, CRP, MMP9, ICAM1, and LIF in patients' serum samples from the first until the 14th day after aSAH. Time courses of MMP9, ICAM1, and LIF in CSF samples are demonstrated. Furthermore, no significant difference was shown relating the time courses to the occurrence of DCI. We estimate that the wide range of the measured values hampers their interpretation and usage as a biomarker. However, understanding the inflammatory response after aSAH and generating a multicenter database may facilitate further studies: realistic sample size calculations on the basis of a multicenter database will increase the quality and clinical relevance of the acquired results.
Attitudes of NICU professionals regarding feeding blood-tinged colostrum or milk.
Phelps, M M; Bedard, W S; Henry, E; Christensen, S S; Gardner, R W; Karp, T; Wiedmeier, S E; Christensen, R D
2009-02-01
Mothers of neonatal intensive care unit (NICU) patients sometimes bring expressed milk that is blood tinged to the NICU. In certain instances, the blood contamination appears minimal, whereas in others, the milk is quite dark pink. We have observed inconsistencies in practice regarding whether or not to feed blood-tinged colostrum or milk to NICU patients. We know of no evidence that establishes best practice in this area, and thus we sought to determine attitudes of NICU professionals on which to base a potential best practice. We conducted a web-based anonymous survey of attitudes of NICU professionals at Intermountain Healthcare regarding feeding blood-tinged expressed milk to NICU patients. These professionals included neonatologists, neonatal nurse practitioners, NICU nurses, NICU dieticians and lactation consultants. Survey results were returned from 64% (426 of 667) of those to whom it was sent. A total of 75% of respondents reported that their practice was NOT to feed the blood-tinged milk illustrated in the figure as sample 2, and nearly all respondents (98%) reported that they would NOT feed the milk illustrated as sample 3. The majority of the neonatologists (56%) and the lactation consultants (58%) recommended feeding moderately bloody milk (sample 2), whereas only 22% of the neonatal nurse practitioners (NNPs), NICU nurses and NICU dieticians recommended feeding such samples (P < 0.001). The most frequently selected reason for NOT feeding blood-tinged milk was that it would likely cause gastrointestinal upset and feeding intolerance (selected by 77%). The majority (87%) overestimated the amount of blood contaminating a milk sample (sample 3). As colostrum and human milk feedings can be of value to NICU patients, evidence should be assembled to document whether feeding blood-tinged samples indeed has the problems listed by the survey respondents. Such evidence is needed to enable informed decisions involving the benefits vs risks of feeding blood-tinged expressed milk to NICU patients.
NASA Astrophysics Data System (ADS)
Jie, Cao; Zhi-Hai, Wu; Li, Peng
2016-05-01
This paper investigates the consensus tracking problems of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. Then the corresponding event-triggered consensus tracking protocol is presented to guarantee second-order multi-agent systems to achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203147, 61374047, and 61403168).
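A minimal numerical sketch of the ingredients (double-integrator agents, a virtual leader, and a triggering test checked at sampling instants) is given below; the graph, gains and the fixed-threshold triggering rule are simplifying assumptions, not the protocol analysed in the paper.

    import numpy as np

    n, dt, steps = 4, 0.01, 3000
    A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)  # assumed ring graph
    b = np.array([1.0, 0.0, 0.0, 0.0])        # only agent 0 senses the virtual leader
    k1, k2, eps = 2.0, 3.0, 0.05              # assumed gains and triggering threshold

    rng = np.random.default_rng(2)
    x, v = rng.normal(0, 1, n), np.zeros(n)
    x0, v0 = 0.0, 0.5                         # virtual leader with constant velocity
    xb, vb = x.copy(), v.copy()               # last-broadcast states

    for _ in range(steps):
        trig = (np.abs(x - xb) + np.abs(v - vb)) > eps   # event detection at sampling instants
        xb[trig], vb[trig] = x[trig], v[trig]            # broadcast only when triggered
        ex = A @ xb - A.sum(1) * xb + b * (x0 - xb)
        ev = A @ vb - A.sum(1) * vb + b * (v0 - vb)
        u = k1 * ex + k2 * ev                            # consensus-tracking protocol
        x, v = x + dt * v, v + dt * u
        x0 = x0 + dt * v0

    print("final tracking errors:", np.round(x - x0, 3))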
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model structure is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to switch according to a continuous-time, observable Markov chain. This hybrid model can be used to illustrate certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
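Setting aside the CIR rate and the regime switching, the quantity being priced can be illustrated by a plain Monte Carlo estimate of the fair strike of a discretely-sampled variance swap under Heston dynamics (all parameter values below are assumed):

    import numpy as np

    rng = np.random.default_rng(3)
    S0, v0 = 100.0, 0.04
    kappa, theta, sigma_v, rho, r = 2.0, 0.04, 0.3, -0.7, 0.02
    T, n_obs, n_paths = 1.0, 252, 20000
    dt = T / n_obs

    S, v = np.full(n_paths, S0), np.full(n_paths, v0)
    rv = np.zeros(n_paths)                      # running sum of squared log-returns
    for _ in range(n_obs):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                 # full-truncation Euler for the variance
        S_new = S * np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2
        rv += np.log(S_new / S) ** 2
        S = S_new

    fair_strike = np.mean(rv / T)               # annualised expected realised variance
    print("fair variance-swap strike:", round(float(fair_strike), 5))   # close to theta here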
Unobtrusive measures in behavioral assessment
Kazdin, Alan E.
1979-01-01
A major distinguishing characteristic of behavioral assessment is the direct assessment of overt behavior. Direct assessment is assumed to provide a sample of behavior that reflects client performance in the situation in which behavior is assessed, even if the assessment procedures were not implemented. Yet, in the majority of investigations, behavioral assessment procedures are obtrusive, i.e., subjects are aware that their behavior is being assessed. The potential problem with obtrusive assessment is that it may be reactive, i.e., affect how subjects perform. Recent research has demonstrated that obtrusive observations often are reactive and that behaviors assessed under obtrusive and unobtrusive conditions bear little relation. From methodological and applied perspectives, additional attention needs to be given to unobtrusive measures of behavior change. The present paper illustrates unobtrusive measures in behavior modification including direct observations, archival records, and physical traces of performance. In addition, validation and assessment problems, questions about the obtrusiveness of the measures, and ethical issues are discussed. PMID:16795622
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance tests. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. Results applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix are considered. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
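A hedged numerical illustration of the basic idea (a noise-scaled threshold on the singular values, not the paper's exact confidence regions):

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, true_rank, noise_sd = 60, 40, 3, 0.05
    A = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))
    A_noisy = A + noise_sd * rng.normal(size=(m, n))

    s = np.linalg.svd(A_noisy, compute_uv=False)
    threshold = noise_sd * (np.sqrt(m) + np.sqrt(n))   # rough bound on noise singular values
    effective_rank = int(np.sum(s > threshold))
    print("leading singular values:", np.round(s[:6], 2))
    print("estimated effective rank:", effective_rank)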
A decomposition approach to the design of a multiferroic memory bit
NASA Astrophysics Data System (ADS)
Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.
2017-06-01
The objective of this paper is to present a methodology for the design of a memory bit to minimize the energy required to write data at the bit level. By straining a ferromagnetic nickel nano-dot by means of a piezoelectric substrate, its magnetization vector rotates between two stable states defined as a 1 and 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.
Finite element solution for energy conservation using a highly stable explicit integration algorithm
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1972-01-01
Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.
Bayes linear covariance matrix adjustment
NASA Astrophysics Data System (ADS)
Wilkinson, Darren J.
1995-12-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
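The core adjustment can be stated compactly; a minimal numerical sketch of the Bayes linear update for a vector quantity X given data D is shown below (the prior specifications are invented for illustration, and the thesis extends this machinery to covariance matrices themselves):

    # E_D(X)   = E(X) + Cov(X, D) Var(D)^{-1} (D - E(D))
    # Var_D(X) = Var(X) - Cov(X, D) Var(D)^{-1} Cov(D, X)
    import numpy as np

    E_X = np.array([1.0, 2.0])                 # assumed prior expectation of X
    Var_X = np.array([[1.0, 0.3], [0.3, 2.0]])
    Cov_XD = np.array([[0.8, 0.0], [0.2, 1.0]])
    E_D = np.array([0.0, 0.0])
    Var_D = np.array([[1.5, 0.2], [0.2, 1.0]])
    D_obs = np.array([0.7, -0.4])              # observed data

    K = Cov_XD @ np.linalg.inv(Var_D)
    E_adj = E_X + K @ (D_obs - E_D)            # adjusted expectation
    Var_adj = Var_X - K @ Cov_XD.T             # adjusted variance
    print("adjusted expectation:", E_adj)
    print("adjusted variance:", Var_adj)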
Young Children's Analogical Problem Solving: Gaining Insights from Video Displays
ERIC Educational Resources Information Center
Chen, Zhe; Siegler, Robert S.
2013-01-01
This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…
A CASE STUDY ILLUSTRATING THE IMPORTANCE OF ACCURATE SITE CHARACTERIZATION
Too frequently, researchers rely on incomplete site characterization data to determine the placement of the sampling wells. They forget that it is these sampling wells that will be used to evaluate the effectiveness of their research efforts. This case study illustrates the eff...
Measuring nanoparticles size distribution in food and consumer products: a review.
Calzolai, L; Gilliland, D; Rossi, F
2012-08-01
Nanoparticles are already used in several consumer products including food, food packaging and cosmetics, and their detection and measurement in food represent a particularly difficult challenge. In order to fill the void in the official definition of what constitutes a nanomaterial, the European Commission published in October 2011 its recommendation on the definition of 'nanomaterial'. This will have an impact in many different areas of legislation, such as the European Cosmetic Products Regulation, where the current definitions of nanomaterial will come under discussion regarding how they should be adapted in light of this new definition. This new definition calls for the measurement of the number-based particle size distribution in the 1-100 nm size range of all the primary particles present in the sample independently of whether they are in a free, unbound state or as part of an aggregate/agglomerate. This definition does present great technical challenges for those who must develop valid and compatible measuring methods. This review will give an overview of the current state of the art, focusing particularly on the suitability of the most used techniques for the size measurement of nanoparticles when addressing this new definition of nanomaterials. The problems to be overcome in measuring nanoparticles in food and consumer products will be illustrated with some practical examples. Finally, a possible way forward (based on the combination of different measuring techniques) for solving this challenging analytical problem is illustrated.
A Case Study in Mathematics--The Cone Problem
ERIC Educational Resources Information Center
Damaskos, Nickander J.
1969-01-01
A case study in mathematics designed to illustrate how the computer may be instructed to solve complicated problems. The problem is to find the volume of a right truncated cone given the altitude and a half angle or the base radius. (RP)
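A small worked version of the computation (the convention that the half angle is measured from the cone's axis, and the sample numbers, are assumptions):

    import math

    def truncated_cone_volume(h, base_radius, half_angle):
        # top radius follows from the slope of the cone's side
        r = max(base_radius - h * math.tan(half_angle), 0.0)
        R = base_radius
        return math.pi * h / 3.0 * (R * R + R * r + r * r)

    print(truncated_cone_volume(h=3.0, base_radius=5.0, half_angle=math.radians(30)))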
The Coffee-Milk Mixture Problem Revisited
ERIC Educational Resources Information Center
Marion, Charles F.
2015-01-01
This analysis of a problem that is frequently posed at professional development workshops, in print, and on the Web--the coffee-milk mixture riddle--illustrates the timeless advice of George Pólya's masterpiece on problem solving in mathematics, "How to Solve It." In his book, Pólya recommends that problems previously solved and put…
ERIC Educational Resources Information Center
Ge, Xun; Law, Victor; Huang, Kun
2016-01-01
One of the goals for problem-based learning (PBL) is to promote self-regulation. Although self-regulation has been studied extensively, its interrelationships with ill-structured problem solving have been unclear. In order to clarify the interrelationships, this article proposes a conceptual framework illustrating the iterative processes among…
Electric Circuit Theory--Computer Illustrated Text.
ERIC Educational Resources Information Center
Riches, Brian
1990-01-01
Discusses the use of a computer-illustrated text (CIT) with integrated software to teach electric circuit theory to college students. Examples of software use are given, including simple animation, graphical displays, and problem-solving programs. Issues affecting electric circuit theory instruction are also addressed, including mathematical…
The Model-Building Process in Introductory College Geography: An Illustrative Example
ERIC Educational Resources Information Center
Cadwallader, Martin
1978-01-01
Illustrates the five elements of conceptual models by developing a model of consumer behavior in choosing among alternative supermarkets. The elements are: identifying the problem, constructing a conceptual model, translating it into a symbolic model, operationalizing the model, and testing. (Author/AV)
Instruction for Web Searching: An Empirical Study.
ERIC Educational Resources Information Center
Colaric, Susan M.
2003-01-01
Discussion of problems that users have with Web searching focuses on a study of undergraduates that investigated three instructional methods (instruction by example, conceptual models without illustrations, and conceptual models with illustrations) to determine differences in knowledge acquisition related to three types of knowledge (declarative,…
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
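A small, hedged example of solving such a problem with a modern routine (SciPy's linprog, not the original report's code), using a classic textbook instance:

    import numpy as np
    from scipy.optimize import linprog

    # maximise 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
    c = np.array([-3.0, -5.0])                 # minimise the negative of the objective
    A_ub = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
    b_ub = np.array([4.0, 12.0, 18.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("optimal x:", res.x, "objective:", -res.fun)   # x = (2, 6), objective 36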
Manheim, F.T.; Buchholtz ten Brink, Marilyn R.; Mecray, E.L.
1998-01-01
A comprehensive database of sediment chemistry and environmental parameters has been compiled for Boston Harbor and Massachusetts Bay. This work illustrates methodologies for rescuing and validating sediment data from heterogeneous historical sources. It greatly expands spatial and temporal data coverage of estuarine and coastal sediments. The database contains about 3500 samples containing inorganic chemical, organic, texture and other environmental data dating from 1955 to 1994. Cooperation with local and federal agencies as well as universities was essential in locating and screening documents for the database. More than 80% of references utilized came from sources with limited distribution (gray literature). Task sharing was facilitated by a comprehensive and clearly defined data dictionary for sediments. It also served as a data entry template and flat file format for data processing and as a basis for interpretation and graphical illustration. Standard QA/QC protocols are usually inapplicable to historical sediment data. In this work outliers and data quality problems were identified by batch screening techniques that also provide visualizations of data relationships and geochemical affinities. No data were excluded, but qualifying comments warn users of problem data. For Boston Harbor, the proportion of irreparable or seriously questioned data was remarkably small (<5%), although concentration values for metals and organic contaminants spanned 3 orders of magnitude for many elements or compounds. Data from the historical database provide alternatives to dated cores for measuring changes in surficial sediment contamination level with time. The data indicate that spatial inhomogeneity in harbor environments can be large with respect to sediment-hosted contaminants. Boston Inner Harbor surficial sediments showed decreases in concentrations of Cu, Hg, and Zn of 40 to 60% over a 17-year period.
Fuzzy multi objective transportation problem – evolutionary algorithm approach
NASA Astrophysics Data System (ADS)
Karthy, T.; Ganesan, K.
2018-04-01
This paper deals with the fuzzy multi-objective transportation problem. A fuzzy optimal compromise solution is obtained by using a fuzzy genetic algorithm. A numerical example is provided to illustrate the methodology.
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. The data of the above-mentioned study is analyzed to illustrate the proposed method. PMID:27006375
Use of Time-Series, ARIMA Designs to Assess Program Efficacy.
ERIC Educational Resources Information Center
Braden, Jeffery P.; And Others
1990-01-01
Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Averages (ARIMA) models. Example illustrates application of ARIMA analysis for…
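A hedged sketch of the kind of interrupted time-series model involved (simulated data and an AR(1)-plus-step specification chosen for illustration, fitted with statsmodels):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(5)
    n, t0 = 120, 60                            # monthly observations, intervention at t0
    y = np.zeros(n)
    for t in range(1, n):                      # AR(1) noise
        y[t] = 0.6 * y[t - 1] + rng.normal(0, 1)
    y = y + 10.0 - 3.0 * (np.arange(n) >= t0)  # level drop after the intervention

    step = (np.arange(n) >= t0).astype(float)  # intervention indicator
    fit = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
    print(fit.summary().tables[1])             # the exogenous coefficient estimates the effect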
SAFE: Stopping AIDS through Functional Education.
ERIC Educational Resources Information Center
Hylton, Judith
This functional curriculum is intended to teach people with developmental disabilities or other learning problems how to prevent infection with HIV/AIDS (Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome). The entire curriculum includes six video segments, four illustrated brochures, 28 slides and illustrations, as well as a guide…
Using CAS to Solve Classical Mathematics Problems
ERIC Educational Resources Information Center
Burke, Maurice J.; Burroughs, Elizabeth A.
2009-01-01
Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
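For comparison, the calculus-based method mentioned here is easy to state in a few lines; the sketch below applies Newton's iteration to his own 1669 example, x^3 - 2x - 5 = 0 (tolerances and starting point are assumptions):

    def newton_polynomial_root(coeffs, x0, tol=1e-12, max_iter=100):
        # coeffs are highest-degree first, e.g. x^3 - 2x - 5 -> [1, 0, -2, -5]
        def p(x):
            return sum(c * x**k for k, c in enumerate(reversed(coeffs)))
        def dp(x):
            return sum(k * c * x**(k - 1) for k, c in enumerate(reversed(coeffs)) if k >= 1)
        x = x0
        for _ in range(max_iter):
            fx = p(x)
            if abs(fx) < tol:
                break
            x = x - fx / dp(x)
        return x

    print(newton_polynomial_root([1, 0, -2, -5], x0=2.0))   # root near 2.0946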
Problem Solving on a Monorail.
ERIC Educational Resources Information Center
Barrow, Lloyd H.; And Others
1994-01-01
This activity was created to address a lack of problem-solving activities for elementary children. A "monorail" activity from the Evening Science Program for K-3 Students and Parents program is presented to illustrate the problem-solving format. Designed for performance at stations by groups of two students. (LZ)
Separation in Logistic Regression: Causes, Consequences, and Control.
Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg
2018-04-01
Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
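A hedged illustration of the phenomenon and of the stabilising effect of a likelihood penalty (ridge shrinkage here as a stand-in; it is not the Firth-type correction discussed in the article):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]).reshape(-1, 1)
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # x perfectly predicts y: complete separation

    near_mle = LogisticRegression(C=1e6, max_iter=10000).fit(x, y)   # nearly unpenalised
    ridge = LogisticRegression(C=1.0).fit(x, y)                      # weakly informative penalty

    print("near-MLE slope (inflates toward infinity):", near_mle.coef_[0, 0])
    print("penalised slope (finite, interpretable):", ridge.coef_[0, 0])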
Project Thrive. Ways and Means: Strategies for Solving Classroom Problems. Volume I.
ERIC Educational Resources Information Center
Richards, Merle; Biemiller, Andrew
Strategies are delineated for solving elementary school classroom problems. After an introductory chapter, chapter 2 reviews problems cited by 24 kindergarten, Grade 1, and Grade 2 teachers and the strategies chosen as likely solutions to the problems. Strategies later found to be unsuccessful are discussed if they illustrate the nature of the…
A Case Study in an Integrated Development and Problem Solving Environment
ERIC Educational Resources Information Center
Deek, Fadi P.; McHugh, James A.
2003-01-01
This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…
Developing 21st Century Process Skills through Project Design
ERIC Educational Resources Information Center
Yoo, Jeong-Ju; MacDonald, Nora M.
2014-01-01
The goal of this paper is to illustrate how the promotion of 21st Century process skills can be used to enhance student learning and workplace skill development: thinking, problem solving, collaboration, communication, leadership, and management. As an illustrative case, fashion merchandising and design students conducted research for a…
Points of View: Stories of Psychopathology.
ERIC Educational Resources Information Center
Mitchell, James E.
This book is designed to provide students, at differing levels of experience and training, with examples that illustrate the problems individuals have with various psychopathologies. Stories are included to illustrate the key elements of psychopathology for these disorders, and are written from the point of view of both the individual who has the…
Novice Career Changers Weather the Classroom Weather
ERIC Educational Resources Information Center
Gifford, James; Snyder, Mary Grace; Cuddapah, Jennifer Locraft
2013-01-01
A close look at one professional's career change into teaching illustrates unique challenges and qualities, showing in stark relief what makes the induction smoother and the experience more successful. This article presents the story of a novice career-changer teacher, illustrating that teacher's unique problems and dispositions, as well as…
Space-Time Data fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, H.; Cressie, N.
2011-01-01
NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times, etc.) while accounting for parametric and model uncertainty in the geostatistical characterization, uncertainty in forcing terms, and measurement error. The appealing aspects of the framework lie in its goal-oriented character and in its direct link to the confidence in a specified decision. We illustrate how these concepts could be applied in a human health risk problem where uncertainty from both hydrogeological and health parameters is accounted for.
Children as Illustrators: A Transcultural Experience.
ERIC Educational Resources Information Center
Hurwitz, Al
1980-01-01
The author discusses his cross cultural study of the painting styles of 9- to 12-year-old children in Australia, New Zealand, and South Korea. He compares their art products--all illustrations of the Noah's Ark story. A sample of the drawings illustrates the text. (SJL)
ERIC Educational Resources Information Center
Shore, Felice S.; Pascal, Matthew
2008-01-01
This article describes several distinct approaches taken by preservice elementary teachers to solving a classic rate problem. Their approaches incorporate a variety of mathematical concepts, ranging from proportions to infinite series, and illustrate the power of all five NCTM Process Standards. (Contains 8 figures.)
Laplace Boundary-Value Problem in Paraboloidal Coordinates
ERIC Educational Resources Information Center
Duggen, L.; Willatzen, M.; Voon, L. C. Lew Yan
2012-01-01
This paper illustrates both a problem in mathematical physics, whereby the method of separation of variables, while applicable, leads to three ordinary differential equations that remain fully coupled via two separation constants and a five-term recurrence relation for series solutions, and an exactly solvable problem in electrostatics, as a…
Problem Orientation in Physical Geography Teaching.
ERIC Educational Resources Information Center
Church, Michael
1988-01-01
States that the introduction of real, quantitative problems in classroom and field teaching improves scientific rigor and leads more directly to applied studies. Examines the use of problems in an introductory hydrology course, presenting teaching objectives and the full course structure to illustrate their integration with other teaching modes.…
The Effectiveness of "Pencasts" in Physics Courses
ERIC Educational Resources Information Center
Weliweriya, Nandana; Sayre, Eleanor C.; Zollman, Dean A.
2018-01-01
Pencasts are videos of problem solving with narration by the problem solver. Pedagogically, students can create pencasts to illustrate their own problem solving to the instructor or to their peers. Pencasts have implications for teaching at multiple levels from elementary grades through university courses. In this article, we describe the use of…
Problem Solving and Comprehension. Third Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book is directed toward increasing students' ability to analyze problems and comprehend what they read and hear. It outlines and illustrates the methods that good problem solvers use in attacking complex ideas, and provides practice in applying these methods to a variety of questions involving comprehension and reasoning. Chapter I includes a…
ERIC Educational Resources Information Center
Rowan, Helen
The purpose of this paper, prepared for the U. S. Commission on Civil Rights, is to indicate the types and ranges of problems facing the Mexican American community and to suggest ways in which these problems are peculiar to Mexican Americans. Specific examples are cited to illustrate major problems and personal experiences. Topics covered in the…
The Future Problem Solving Program.
ERIC Educational Resources Information Center
Crabbe, Anne B.
1989-01-01
Describes the Future Problem Solving Program, in which students from the U.S. and around the world are tackling some complex challenges facing society, ranging from acid rain to terrorism. The program uses a creative problem solving process developed for business and industry. A sixth-grade toxic waste cleanup project illustrates the process.…
The Problems of Land Consolidation: A Case Study of Taiwan
ERIC Educational Resources Information Center
Williams, Jack F.
1976-01-01
Problems of agricultural land consolidation, as illustrated by Taiwan's first 10-year land reform phase, include fragmentation of holdings, cost of consolidation, corruption and maladministration by government officials, and timing of operations. (AV)
Poverty-Exploitation-Alienation.
ERIC Educational Resources Information Center
Bronfenbrenner, Martin
1980-01-01
Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)
Petersen, Isaac T; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E; Pettit, Gregory S; Lansford, Jennifer E; Dodge, Kenneth A
2018-03-01
Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid for one age but not another. This study examines mean-level changes in internalizing problems across a long span of development at the same time as accounting for heterotypic continuity by using age-appropriate, changing measures. Internalizing problems from age 14-24 were studied longitudinally in a community sample (N = 585), using Achenbach's Youth Self-Report (YSR) and Young Adult Self-Report (YASR). Heterotypic continuity was evaluated with an item response theory (IRT) approach to vertical scaling, linking different measures over time to be on the same scale, as well as with a Thurstone scaling approach. With vertical scaling, internalizing problems peaked in mid-to-late adolescence and showed a group-level decrease from adolescence to early adulthood, a change that would not have been seen with the approach of using only age-common items. Individuals' trajectories were sometimes different than would have been seen with the common-items approach. Findings support the importance of considering heterotypic continuity when examining development and vertical scaling to account for heterotypic continuity with changing measures. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An illustrative analysis of technological alternatives for satellite communications
NASA Technical Reports Server (NTRS)
Metcalfe, M. R.; Cazalet, E. G.; North, D. W.
1979-01-01
The demand for satellite communications services in the domestic market is discussed. Two approaches to increasing system capacity are the expansion of service into frequencies presently allocated but not used for satellite communications, and the development of technologies that provide a greater level of service within the currently used frequency bands. The development of economic models and analytic techniques for evaluating capacity expansion alternatives such as these is presented. The satellite orbit spectrum problem is examined, along with outlines of some suitable analytic approaches. An illustrative analysis of domestic communications satellite technology options for providing increased levels of service is also presented. The analysis illustrates the use of probabilities and decision trees in analyzing alternatives, and provides insight into the important aspects of the orbit spectrum problem that would warrant inclusion in a larger scale analysis.
Planning and navigation as active inference.
Kaplan, Raphael; Friston, Karl J
2018-03-23
This paper introduces an active inference formulation of planning and navigation. It illustrates how the exploitation-exploration dilemma is dissolved by acting to minimise uncertainty (i.e. expected surprise or free energy). We use simulations of a maze problem to illustrate how agents can solve quite complicated problems using context sensitive prior preferences to form subgoals. Our focus is on how epistemic behaviour, driven by novelty and the imperative to reduce uncertainty about the world, contextualises pragmatic or goal-directed behaviour. Using simulations, we illustrate the underlying process theory with synthetic behavioural and electrophysiological responses during exploration of a maze and subsequent navigation to a target location. An interesting phenomenon that emerged from the simulations was a putative distinction between 'place cells', which fire when a subgoal is reached, and 'path cells', which fire until a subgoal is reached.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
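For readers without access to LSENS, the flavor of a "static system" kinetics problem can be sketched with a general-purpose ODE solver. The reaction scheme, rate coefficients, and use of SciPy below are illustrative assumptions, not part of the LSENS code or its example problems:

    # Minimal sketch: isothermal A -> B -> C kinetics (made-up rate constants),
    # an analogue of the static-system problems LSENS is designed to solve.
    from scipy.integrate import solve_ivp

    k1, k2 = 1.0, 0.5            # assumed rate coefficients (1/s)

    def rhs(t, y):
        a, b, c = y
        return [-k1 * a, k1 * a - k2 * b, k2 * b]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    print(sol.y[:, -1])          # concentrations of A, B, C at t = 10 s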
Space station integrated wall design and penetration damage control
NASA Technical Reports Server (NTRS)
Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.
1987-01-01
The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The code was developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system and is written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, and the flow of data within the analysis is illustrated.
Advantages of Unfair Quantum Ground-State Sampling.
Zhang, Brian Hu; Wagenbreth, Gene; Martin-Mayor, Victor; Hen, Itay
2017-04-21
The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive albeit coveted goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground-states of spin glasses differently than thermal samplers. We demonstrate that (i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers; (ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution; and (iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Method for evaluating wind turbine wake effects on wind farm performance
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Spera, D. A.
1985-01-01
A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented, which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
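To make the synthetic-likelihood idea concrete, the sketch below shows the generic Gaussian synthetic log-likelihood used in likelihood-free inference: simulate summary statistics under a candidate parameter, fit a normal distribution to them, and score the observed summaries. The toy simulator, summaries, and parameter values are placeholders and are not the authors' test of the conjoint-measurement axioms:

    # Generic Gaussian synthetic log-likelihood sketch (toy model only).
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    def simulate_summaries(theta, n_sims=500):
        # Placeholder simulator: summaries are the (mean, std) of a normal sample.
        data = rng.normal(loc=theta, scale=1.0, size=(n_sims, 50))
        return np.column_stack([data.mean(axis=1), data.std(axis=1)])

    def synthetic_loglik(theta, observed_summary):
        s = simulate_summaries(theta)
        mu, cov = s.mean(axis=0), np.cov(s, rowvar=False)
        return multivariate_normal(mu, cov).logpdf(observed_summary)

    obs = np.array([0.3, 1.02])                 # assumed observed summaries
    print(synthetic_loglik(0.0, obs), synthetic_loglik(0.3, obs))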
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Hawaiian lavas: a window into mantle dynamics
NASA Astrophysics Data System (ADS)
Jones, Tim; Davies, Rhodri; Campbell, Ian
2017-04-01
The emergence of double track volcanism at Hawaii has traditionally posed two problems: (i) the physical emergence of two parallel chains of volcanoes at around 3 Ma, named the Loa and Kea tracks after the largest volcanoes in their sequence, and (ii) the systematic geochemical differences between the erupted lavas along each track. In this study, we dissolve this distinction by providing a geodynamical explanation for the physical emergence of double track volcanism at 3 Ma and use numerical models of the Hawaiian plume to illustrate how this process naturally leads to each volcanic track sampling distinct mantle compositions, which accounts for much of the geochemical characteristics of the Loa and Kea trends.
Baker, Amy J; Raymond, Mark R; Haist, Steven A; Boulet, John R
2017-04-01
One challenge when implementing case-based learning, and other approaches to contextualized learning, is determining which clinical problems to include. This article illustrates how health care utilization data, readily available from the National Center for Health Statistics (NCHS), can be incorporated into an educational needs assessment to identify medical problems physicians are likely to encounter in clinical practice. The NCHS survey data summarize patient demographics, diagnoses, and interventions for tens of thousands of patients seen in various settings, including emergency departments (EDs), clinics, and hospitals. Selected data from the National Hospital Ambulatory Medical Care Survey: Emergency Department illustrate how instructional materials can be derived from the results of such public-use health care data. Using fever as the reason for visit to the ED, the patient management path is depicted in the form of a case drill-down by exploring the most common diagnoses, blood tests, diagnostic studies, procedures, and medications associated with fever. Although these types of data are quite useful, they should not serve as the sole basis for determining which instructional cases to include. Additional sources of information should be considered to ensure the inclusion of cases that represent infrequent but high-impact problems and those that illustrate fundamental principles that generalize to other cases.
Pensamientos Sobre (Thoughts on) Teaching English as a Second Language.
ERIC Educational Resources Information Center
Ulibarri, Mari-Luci
This document presents ideas on various topics in teaching English as a second language. Some of the problems of English orthography and semantics are illustrated. The role of contrastive analysis is mentioned with Spanish-English illustrations. A list of second-language-acquisition principles and techniques is provided, and suggestions for…
Student Ideologies and Cultural Radicalism--Some Swedish Evidence.
ERIC Educational Resources Information Center
Runeby, Nils
1988-01-01
The consequences of the German model for the relation between university professors and students and between students and society are summarized. Two positions are outlined, using a novel by German author Michael Zeller to illustrate some of the problems of student ideologies in higher education. The novel illustrates a period in the history of…
ERIC Educational Resources Information Center
Nelson, Timothy D.; Mashunkashey, Joanna O.; Mitchell, Montserrat C.; Benson, Eric R.; Vernberg, Eric M.; Roberts, Michael C.
2008-01-01
We describe cases from the clinical records in the Intensive Mental Health Program to illustrate the diverse presenting problems, intervention strategies, therapeutic process, and outcomes for children receiving services in this school-based, community-oriented treatment model. Cases reflect varying degrees of treatment response and potential…
ERIC Educational Resources Information Center
Walkington, Candace; Clinton, Virginia; Mingle, Leigh
2016-01-01
This paper examines two factors that have been shown in previous literature to enhance students' interest in learning mathematics--personalization of problems to students' interest areas, and the addition of visual representations such as decorative illustrations. In two studies taking place within an online curriculum for middle school…
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Problem Solving & Comprehension. Fourth Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book shows how to increase one's power to analyze and comprehend problems. First, it outlines and illustrates the methods that good problem solvers use in attacking complex ideas. Then it gives some practice in applying these methods to a variety of questions in comprehension and reasoning. Chapters include: (1) "Test Your Mind--See How…
Hidden Hazards of Radon: Scanning the Country for Problem Locations.
ERIC Educational Resources Information Center
Gundersen, Linda C. S.
1992-01-01
Describes the geology of the radon problem in the United States and suggests how homeowners can cope with the radioactive gas. Vignettes illustrate how and where radon is produced beneath the earth's surface, testing sites and procedures for radon in houses, and locations for potential radon problems across the United States. (MCO)
Nonfiction Literature that Highlights Inquiry: How "Real" People Solve "Real" Problems
ERIC Educational Resources Information Center
Zarnowski, Myra; Turkel, Susan
2011-01-01
In this article, the authors explain how nonfiction literature can demonstrate the nature of problem solving within disciplines such as math, science, and social studies. This literature illustrates what it means to puzzle over problems, to apply disciplinary thinking, and to develop creative solutions. The authors look closely at three examples…
The Multiple Pendulum Problem via Maple[R]
ERIC Educational Resources Information Center
Salisbury, K. L.; Knight, D. G.
2002-01-01
The way in which computer algebra systems, such as Maple, have made the study of physical problems of some considerable complexity accessible to mathematicians and scientists with modest computational skills is illustrated by solving the multiple pendulum problem. A solution is obtained for four pendulums with no restriction on the size of the…
ERIC Educational Resources Information Center
Lee, Chwee Beng
2010-01-01
This study examines the interactions between problem solving and conceptual change in an elementary science class where students build system dynamic models as a form of problem representations. Through mostly qualitative findings, we illustrate the interplay of three emerging intervening conditions (epistemological belief, structural knowledge…
ERIC Educational Resources Information Center
Caron, Rosemary M.; Serrell, Nancy
2009-01-01
Wicked problems are multifactorial in nature and possess no clear resolution due to the involvement of numerous community stakeholders. We demonstrate childhood lead poisoning as a wicked problem and illustrate how understanding a community's ecology can build community capacity to affect local environmental management by (1) forming an academic-community…
ERIC Educational Resources Information Center
Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.
2010-01-01
Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…
Clinical Reasoning Terms Included in Clinical Problem Solving Exercises?
Musgrove, John L.; Morris, Jason; Estrada, Carlos A.; Kraemer, Ryan R.
2016-01-01
Background Published clinical problem solving exercises have emerged as a common tool to illustrate aspects of the clinical reasoning process. The specific clinical reasoning terms mentioned in such exercises are unknown. Objective We identified which clinical reasoning terms are mentioned in published clinical problem solving exercises and compared them to clinical reasoning terms given high priority by clinician educators. Methods A convenience sample of clinician educators prioritized a list of clinical reasoning terms (whether to include, weight percentage of top 20 terms). The authors then electronically searched the terms in the text of published reports of 4 internal medicine journals between January 2010 and May 2013. Results The top 5 clinical reasoning terms ranked by educators were dual-process thinking (weight percentage = 24%), problem representation (12%), illness scripts (9%), hypothesis generation (7%), and problem categorization (7%). The top clinical reasoning terms mentioned in the text of 79 published reports were context specificity (n = 20, 25%), bias (n = 13, 17%), dual-process thinking (n = 11, 14%), illness scripts (n = 11, 14%), and problem representation (n = 10, 13%). Context specificity and bias were not ranked highly by educators. Conclusions Some core concepts of modern clinical reasoning theory ranked highly by educators are mentioned explicitly in published clinical problem solving exercises. However, some highly ranked terms were not used, and some terms used were not ranked by the clinician educators. Effort to teach clinical reasoning to trainees may benefit from a common nomenclature of clinical reasoning terms. PMID:27168884
Clinical Reasoning Terms Included in Clinical Problem Solving Exercises?
Musgrove, John L; Morris, Jason; Estrada, Carlos A; Kraemer, Ryan R
2016-05-01
Background Published clinical problem solving exercises have emerged as a common tool to illustrate aspects of the clinical reasoning process. The specific clinical reasoning terms mentioned in such exercises are unknown. Objective We identified which clinical reasoning terms are mentioned in published clinical problem solving exercises and compared them to clinical reasoning terms given high priority by clinician educators. Methods A convenience sample of clinician educators prioritized a list of clinical reasoning terms (whether to include, weight percentage of top 20 terms). The authors then electronically searched the terms in the text of published reports of 4 internal medicine journals between January 2010 and May 2013. Results The top 5 clinical reasoning terms ranked by educators were dual-process thinking (weight percentage = 24%), problem representation (12%), illness scripts (9%), hypothesis generation (7%), and problem categorization (7%). The top clinical reasoning terms mentioned in the text of 79 published reports were context specificity (n = 20, 25%), bias (n = 13, 17%), dual-process thinking (n = 11, 14%), illness scripts (n = 11, 14%), and problem representation (n = 10, 13%). Context specificity and bias were not ranked highly by educators. Conclusions Some core concepts of modern clinical reasoning theory ranked highly by educators are mentioned explicitly in published clinical problem solving exercises. However, some highly ranked terms were not used, and some terms used were not ranked by the clinician educators. Effort to teach clinical reasoning to trainees may benefit from a common nomenclature of clinical reasoning terms.
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
A Method for Overcoming the Problem of Concept-Scale Interaction in Semantic Differential Research
ERIC Educational Resources Information Center
Bynner, John; Romney, David
1972-01-01
Data collected in a study of hospital staff attitudes to drug addicts and other types of patients are used to illustrate the problem of concept-scale interaction in semantic differential research. (Authors)
Counting Pizza Pieces and Other Combinatorial Problems.
ERIC Educational Resources Information Center
Maier, Eugene
1988-01-01
The general combinatorial problem of counting the number of regions into which the interior of a circle is divided by a family of lines is considered. A general formula is developed and its use is illustrated in two situations. (PK)
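As a worked illustration of this counting problem, the maximum number of regions produced by n lines crossing a circle's interior, assuming every pair of lines intersects inside the circle and no three lines meet at a single point, is 1 + n + C(n, 2). The short sketch below simply evaluates that formula; the general-position assumption is mine, not stated in the abstract:

    # Maximum regions into which n lines in general position divide a circle's
    # interior: 1 (whole disc) + n (one new region per line) + C(n, 2) (one per
    # interior crossing).
    from math import comb

    def max_regions(n):
        return 1 + n + comb(n, 2)

    print([max_regions(n) for n in range(7)])   # [1, 2, 4, 7, 11, 16, 22]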
ERIC Educational Resources Information Center
Oberlin, Lynn
1974-01-01
Discusses the problem of gravity as it relates to distance from the center of the earth, and reports contradictory explanations from different source books. Uses this example to illustrate that science should not be taught from a single source, such as a textbook. (JR)
The Preparation and Characterization of Materials.
ERIC Educational Resources Information Center
Wold, Aaron
1980-01-01
Presents several examples illustrating different aspects of materials problems, including problems associated with solid-solid reactions, sintering and crystal growth, characterization of materials, preparation and characterization of stoichiometric ferrites and chromites, copper-sulfur systems, growth of single crystals by chemical vapor…
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose an original two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items, and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
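The two lot-size regimes mentioned above can be illustrated with a generic single-stage (n, c) acceptance plan: accept the lot if at most c nonconformities appear in a sample of n items. The lot size, defect rates, n, and c below are illustrative assumptions, not the optimized TRASP design:

    # Acceptance probability for a single-stage (n, c) plan under the two
    # distribution models referred to in the abstract.
    from scipy.stats import hypergeom, poisson

    def p_accept_small_lot(lot_size, defectives, n, c):
        # Hypergeometric model for a small, finite lot.
        return hypergeom(lot_size, defectives, n).cdf(c)

    def p_accept_large_lot(defect_rate, n, c):
        # Poisson model for a large lot (expected nonconformities = n * p).
        return poisson(n * defect_rate).cdf(c)

    print(p_accept_small_lot(lot_size=100, defectives=4, n=20, c=1))
    print(p_accept_large_lot(defect_rate=0.04, n=20, c=1))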
Multitarget-multisensor management for decentralized sensor networks
NASA Astrophysics Data System (ADS)
Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Hernandez, M. L.
2006-05-01
In this paper, we consider the problem of sensor resource management in decentralized tracking systems. Due to the availability of cheap sensors, it is possible to use a large number of sensors and a few fusion centers (FCs) to monitor a large surveillance region. Even though a large number of sensors are available, due to frequency, power and other physical limitations, only a few of them can be active at any one time. The problem is then to select sensor subsets that should be used by each FC at each sampling time in order to optimize the tracking performance subject to their operational constraints. In a recent paper, we proposed an algorithm to handle the above issues for joint detection and tracking, without using simplistic clustering techniques that are standard in the literature. However, in that paper, a hierarchical architecture with feedback at every sampling time was considered, and the sensor management was performed only at a central fusion center (CFC). However, in general, it is not possible to communicate with the CFC at every sampling time, and in many cases there may not even be a CFC. Sometimes, communication between CFC and local fusion centers might fail as well. Therefore performing sensor management only at the CFC is not viable in most networks. In this paper, we consider an architecture in which there is no CFC, each FC communicates only with the neighboring FCs, and communications are restricted. In this case, each FC has to decide which sensors are to be used by itself at each measurement time step. We propose an efficient algorithm to handle the above problem in real time. Simulation results illustrating the performance of the proposed algorithm are also presented.
Data Synchronization Discrepancies in a Formation Flight Control System
NASA Technical Reports Server (NTRS)
Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)
2001-01-01
Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.
[Software for illustrating a cost-quality balance carried out by clinical laboratory practice].
Nishibori, Masahiro; Asayama, Hitoshi; Kimura, Satoshi; Takagi, Yasushi; Hagihara, Michio; Fujiwara, Mutsunori; Yoneyama, Akiko; Watanabe, Takashi
2010-09-01
We have no proper reference indicating the quality of clinical laboratory practice, one that clearly illustrates that better medical tests require greater expense. The Japanese Society of Laboratory Medicine, concerned about the recent difficult medical economy, issued a committee report proposing a guideline for evaluating good laboratory practice. According to the guideline, we developed software that illustrates the cost-quality balance achieved by clinical laboratory practice. We encountered a number of controversial problems, for example, how to measure and weight each quality-related factor, how to calculate the cost of a laboratory test, and how to consider the characteristics of a clinical laboratory. Consequently, we finished only prototype software within the given period and budget. In this paper, the software implementation of the guideline and the above-mentioned problems are summarized. Aiming to stimulate these discussions, the operative software will be put on the Society's homepage for trial.
Not Just Hats Anymore: Binomial Inversion and the Problem of Multiple Coincidences
ERIC Educational Resources Information Center
Hathout, Leith
2007-01-01
The well-known "hats" problem, in which a number of people enter a restaurant and check their hats, and then receive them back at random, is often used to illustrate the concept of derangements, that is, permutations with no fixed points. In this paper, the problem is extended to multiple items of clothing, and a general solution to the problem of…
ERIC Educational Resources Information Center
Anderson, William L.; Mitchell, Steven M.; Osgood, Marcy P.
2008-01-01
For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted…
An approach to solve replacement problems under intuitionistic fuzzy nature
NASA Astrophysics Data System (ADS)
Balaganesan, M.; Ganesan, K.
2018-04-01
Because of the imprecision involved in day-to-day problems, researchers use fuzzy sets in their discussions of replacement problems. The aim of this paper is to solve replacement theory problems with triangular intuitionistic fuzzy numbers. An effective methodology based on a fuzziness index and a location index is proposed to determine the optimal solution of the replacement problem. A numerical example is presented to illustrate and validate the proposed method.
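For orientation, the crisp (non-fuzzy) version of the replacement problem chooses the replacement age that minimizes the average annual cost of owning the equipment. The purchase price, running costs, and salvage values below are illustrative assumptions; the paper replaces such crisp figures with triangular intuitionistic fuzzy numbers:

    # Crisp analogue of the replacement problem: pick the age n that minimizes
    # average annual cost = (purchase - salvage(n) + cumulative running cost) / n.
    purchase = 6000
    running = [1000, 1200, 1400, 1800, 2300, 2800, 3400]   # running cost in year i
    salvage = [3000, 1500, 750, 375, 200, 200, 200]        # resale value after year i

    def average_cost(n):
        return (purchase - salvage[n - 1] + sum(running[:n])) / n

    best = min(range(1, len(running) + 1), key=average_cost)
    print(best, average_cost(best))    # optimal replacement age and its average cost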
Unsteady Solution of Non-Linear Differential Equations Using Walsh Function Series
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2015-01-01
Walsh functions form an orthonormal basis set consisting of square waves. The discontinuous nature of square waves makes the system well suited for representing functions with discontinuities. The product of any two Walsh functions is another Walsh function - a feature that can radically change an algorithm for solving non-linear partial differential equations (PDEs). The solution algorithm of non-linear differential equations using Walsh function series is unique in that integrals and derivatives may be computed using simple matrix multiplication of series representations of functions. Solutions to PDEs are derived as functions of wave component amplitude. Three sample problems are presented to illustrate the Walsh function series approach to solving unsteady PDEs. These include an advection equation, a Burgers equation, and a Riemann problem. The sample problems demonstrate the use of the Walsh function solution algorithms, exploiting Fast Walsh Transforms in multiple dimensions (O(N log N)). Details of a Fast Walsh Reciprocal, defined here for the first time, enable inversion of a Walsh symmetric matrix in O(N log N) operations. Walsh functions have been derived using a fractal recursion algorithm and these fractal patterns are observed in the progression of pairs of wave number amplitudes in the solutions. These patterns are most easily observed in a remapping defined as a fractal fingerprint (FFP). A prolongation of existing solutions to the next highest order exploits these patterns. The algorithms presented here are considered a work in progress that provides new alternatives and new insights into the solution of non-linear PDEs.
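The O(N log N) transform referred to above can be illustrated with the standard in-place fast Walsh-Hadamard transform. This sketch uses Hadamard (natural) ordering rather than the sequency ordering often used with Walsh series, and it is not the solver described in the report:

    # In-place fast Walsh-Hadamard transform (Hadamard ordering); N must be a
    # power of two.  Applying it twice returns N times the original signal.
    import numpy as np

    def fwht(a):
        a = np.asarray(a, dtype=float).copy()
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a

    signal = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
    coeffs = fwht(signal)
    print(coeffs)
    print(fwht(coeffs) / len(signal))   # inverse transform recovers the signal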
An Ideological Framework in Adult Education: Poverty and Social Change.
ERIC Educational Resources Information Center
Lee, Jo-Anne
1981-01-01
Posits that the basic system of values and beliefs held by adult educators influences their stance on social problems. Examples of responses to the problem of poverty illustrate four basic ideological positions: liberalism, conservatism, liberal radicalism, and Marxism. (JOW)
A Problem-Oriented Record System for Counselors.
ERIC Educational Resources Information Center
Law, Joseph; And Others
1981-01-01
Recommends the adoption of Weed's Problem Oriented Records System by practitioners and supervisors. Also discusses the purposes of recordkeeping in counseling and establishes criteria for adopting documentation systems. Case examples illustrate the applicability of Weed's approach in counseling and practicum supervision. (Author)
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes
ERIC Educational Resources Information Center
Chen, Frederick H.
2010-01-01
The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
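The underlying computation is simply the probability-weighted sum of utilities, E[u] = sum_i p_i * u(x_i), which the graphical method visualizes. The outcomes, probabilities, and logarithmic utility below are my own illustrative assumptions:

    # Expected utility of a gamble with three outcomes, compared with the
    # utility of the expected wealth (Jensen's inequality for a concave u).
    import math

    outcomes = [100.0, 400.0, 900.0]      # assumed wealth levels
    probs = [0.5, 0.3, 0.2]

    def u(w):
        return math.log(w)                # any concave utility illustrates risk aversion

    expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))
    expected_wealth = sum(p * x for p, x in zip(probs, outcomes))
    print(expected_utility, u(expected_wealth))   # EU < u(E[wealth])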
Our Educational Melting Pot: Have We Reached the Boiling Point?
ERIC Educational Resources Information Center
Lauderdale, Katherine Lynn, Ed.; Bonilla, Carlos A., Ed.
The articles and excerpts in this collection illustrate the complexity of the melting pot concept. Multiculturalism has become a watchword in American life and education, but it may be that in trying to atone for past transgressions educators and others are simply going too far. These essays illustrate some of the problems of a multicultural…
Promoting the Multidimensional Character of Scientific Reasoning.
Bradshaw, William S; Nelson, Jennifer; Adams, Byron J; Bell, John D
2017-04-01
This study reports part of a long-term program to help students improve scientific reasoning using higher-order cognitive tasks set in the discipline of cell biology. This skill was assessed using problems requiring the construction of valid conclusions drawn from authentic research data. We report here efforts to confirm the hypothesis that data interpretation is a complex, multifaceted exercise. Confirmation was obtained using a statistical treatment showing that various such problems rank students differently-each contains a unique set of cognitive challenges. Additional analyses of performance results have allowed us to demonstrate that individuals differ in their capacity to navigate five independent generic elements that constitute successful data interpretation: biological context, connection to course concepts, experimental protocols, data inference, and integration of isolated experimental observations into a coherent model. We offer these aspects of scientific thinking as a "data analysis skills inventory," along with usable sample problems that illustrate each element. Additionally, we show that this kind of reasoning is rigorous in that it is difficult for most novice students, who are unable to intuitively implement strategies for improving these skills. Instructors armed with knowledge of the specific challenges presented by different types of problems can provide specific helpful feedback during formative practice. The use of this instructional model is most likely to require changes in traditional classroom instruction.
NASA Astrophysics Data System (ADS)
Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.
1986-12-01
A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.
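The "improved Picard algorithm" mentioned above is, at its core, a damped fixed-point iteration. The one-dimensional caricature below, with an assumed nonlinear function and relaxation factor, shows only that generic idea and is not the authors' three-dimensional finite-element scheme:

    # Under-relaxed (damped) Picard iteration for a fixed-point problem x = g(x),
    # the kind of strategy used to stabilize severely nonlinear iterations.
    import math

    def picard(g, x0, omega=0.5, tol=1e-10, max_iter=200):
        x = x0
        for _ in range(max_iter):
            x_new = (1 - omega) * x + omega * g(x)   # damped update
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("Picard iteration did not converge")

    print(picard(math.cos, x0=0.0))   # fixed point of cos(x), roughly 0.739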
Quantum walks: The first detected passage time problem
NASA Astrophysics Data System (ADS)
Friedman, H.; Kessler, D. A.; Barkai, E.
2017-03-01
Even after decades of research, the problem of first passage time statistics for quantum dynamics remains a challenging topic of fundamental and practical importance. Using a projective measurement approach, with a sampling time τ, we obtain the statistics of first detection events for quantum dynamics on a lattice, with the detector located at the origin. A quantum renewal equation for a first detection wave function, in terms of which the first detection probability can be calculated, is derived. This formula gives the relation between first detection statistics and the solution of the corresponding Schrödinger equation in the absence of measurement. We illustrate our results with tight-binding quantum walk models. We examine a closed system, i.e., a ring, and reveal the intricate influence of the sampling time τ on the statistics of detection, discussing the quantum Zeno effect, half dark states, revivals, and optimal detection. The initial condition modifies the statistics of a quantum walk on a finite ring in surprising ways. In some cases, the average detection time is independent of the sampling time while in others the average exhibits multiple divergences as the sampling time is modified. For an unbounded one-dimensional quantum walk, the probability of first detection decays like (time)^(-3) with superimposed oscillations, with exceptional behavior when the sampling period τ times the tunneling rate γ is a multiple of π/2. The amplitude of the power-law decay is suppressed as τ → 0 due to the Zeno effect. Our work, an extended version of our previously published paper, predicts rich physical behaviors compared with classical Brownian motion, for which the first passage probability density decays monotonically like (time)^(-3/2), as elucidated by Schrödinger in 1915.
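The stroboscopic measurement protocol described above can be sketched directly: evolve the wave function for a sampling time τ, record the detection probability at the origin, project that amplitude out, and repeat. The ring size, hopping rate, τ, and initial site below are assumptions chosen for illustration, not the parameter choices studied in the paper:

    # First-detection statistics for a tight-binding quantum walk on a ring
    # monitored at the origin every tau time units.
    import numpy as np
    from scipy.linalg import expm

    L, gamma, tau, n_steps = 6, 1.0, 0.25, 200
    H = np.zeros((L, L), dtype=complex)
    for j in range(L):                      # nearest-neighbour hopping on a ring
        H[j, (j + 1) % L] = H[(j + 1) % L, j] = -gamma
    U = expm(-1j * H * tau)

    psi = np.zeros(L, dtype=complex)
    psi[1] = 1.0                            # start one site away from the detector
    F = []                                  # first-detection probabilities F_n
    for _ in range(n_steps):
        psi = U @ psi
        F.append(abs(psi[0]) ** 2)          # probability of first detection at this attempt
        psi[0] = 0.0                        # projective measurement: not detected, remove amplitude

    total = sum(F)
    mean_time = sum((n + 1) * tau * F[n] for n in range(n_steps)) / total
    print(total, mean_time)                 # total detection probability and mean first-detection time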
ERIC Educational Resources Information Center
van den Putte, Bas; Hoogstraten, Johan
1997-01-01
Problems found in the application of structural equation modeling to the theory of reasoned action are explored, and an alternative model specification is proposed that improves the fit of the data while leaving intact the structural part of the model being tested. Problems and the proposed alternative are illustrated. (SLD)
Comparative Properties of Collaborative Optimization and Other Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
1999-01-01
We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.
Comparative Properties of Collaborative Optimization and other Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
1999-01-01
We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.
BI GIS Competition Brings DSS to AITP NCC
ERIC Educational Resources Information Center
Hayen, Roger L.
2011-01-01
A national student competition problem in business intelligence (BI) is considered to foster an understanding of this competition and of the underlying case study problem used. The focus here is two-fold: first, to illustrate this competition, and second, to provide a case problem that can be considered for use in various information systems…
ERIC Educational Resources Information Center
Reynolds, Thomas D.; And Others
This compilation of 138 problems illustrating applications of high school mathematics to various aspects of space science is intended as a resource from which the teacher may select questions to supplement his regular course. None of the problems require a knowledge of calculus or physics, and solutions are presented along with the problem…
ERIC Educational Resources Information Center
Connell, Michael; Abramovich, Sergei
2017-01-01
This paper illustrates how the notion of Technology Immune Technology Enabled (TITE) problems (Abramovich, 2014), in this case an exploration of variations in surface area we refer to as Stamping Functions, might be incorporated into a K-6 mathematics methods class operating within an Action on Objects framework (Connell, 2001). TITE problems have…
ERIC Educational Resources Information Center
Petersen, Isaac T.; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E.; Pettit, Gregory S.; Lansford, Jennifer E.; Dodge, Kenneth A.
2018-01-01
Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid…
Scale problems in reporting landscape pattern at the regional scale
R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham
1996-01-01
Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...
Solving intuitionistic fuzzy multi-objective nonlinear programming problem
NASA Astrophysics Data System (ADS)
Anuradha, D.; Sobana, V. E.
2017-11-01
This paper presents an intuitionistic fuzzy multi-objective nonlinear programming problem (IFMONLPP). All the coefficients of the multi-objective nonlinear programming problem (MONLPP) and the constraints are taken to be intuitionistic fuzzy numbers (IFN). The IFMONLPP is transformed into a crisp problem and solved by using the Kuhn-Tucker conditions. A numerical example is provided to illustrate the approach.
Node-Based Learning of Multiple Gaussian Graphical Models
Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In
2014-01-01
We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
Patrick, Christopher J; Venables, Noah C; Yancey, James R; Hicks, Brian M; Nelson, Lindsay D; Kramer, Mark D
2013-08-01
A crucial challenge in efforts to link psychological disorders to neural systems, with the aim of developing biologically informed conceptions of such disorders, is the problem of method variance (Campbell & Fiske, 1959). Since even measures of the same construct in differing domains correlate only moderately, it is unsurprising that large sample studies of diagnostic biomarkers yield only modest associations. To address this challenge, a construct-network approach is proposed in which psychometric operationalizations of key neurobehavioral constructs serve as anchors for identifying neural indicators of psychopathology-relevant dispositions, and as vehicles for bridging between domains of clinical problems and neurophysiology. An empirical illustration is provided for the construct of inhibition-disinhibition, which is of central relevance to problems entailing deficient impulse control. Findings demonstrate that: (1) a well-designed psychometric index of trait disinhibition effectively predicts externalizing problems of multiple types, (2) this psychometric measure of disinhibition shows reliable brain response correlates, and (3) psychometric and brain-response indicators can be combined to form a joint psychoneurometric factor that predicts effectively across clinical and physiological domains. As a methodology for bridging between clinical problems and neural systems, the construct-network approach provides a concrete means by which existing conceptions of psychological disorders can accommodate and be reshaped by neurobiological insights. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Modelling the contribution of changes in family life to time trends in adolescent conduct problems.
Collishaw, Stephan; Goodman, Robert; Pickles, Andrew; Maughan, Barbara
2007-12-01
The past half-century has seen significant changes in family life, including an increase in parental divorce, increases in the numbers of lone parent and stepfamilies, changes in socioeconomic well being, and a decrease in family size. Evidence also shows substantial time trends in adolescent mental health, including a marked increase in conduct problems over the last 25 years of the 20th Century in the UK. The aim of this study was to examine how these two sets of trends may be related. To illustrate the complexity of the issues involved, we focused on three well-established family risks for conduct problems: family type, income and family size. Three community samples of adolescents from England, Scotland and Wales were compared: 10,348 16-year olds assessed in 1974 as part of the National Child Development Study, 7234 16-year olds assessed in 1986 as part of the British Cohort Study, and 860 15-year olds assessed in the 1999 British Child and Adolescent Mental Health Survey. Parents completed comparable ratings of conduct problems in each survey and provided information on family type, income and size. Findings highlight important variations in both the prevalence of these family variables and their associations with conduct problems over time, underscoring the complex conceptual issues involved in testing causes of trends in mental health.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
NASA Astrophysics Data System (ADS)
Zhao, Liyun; Zhou, Jin; Wu, Quanjun
2016-01-01
This paper considers the sampled-data synchronisation problems of coupled harmonic oscillators with communication and input delays subject to controller failure. A synchronisation protocol is proposed for such oscillator systems over a directed network topology, and some general algebraic criteria on exponential convergence for the proposed protocol are established. The main features of the present investigation are: (1) communication and input delays are addressed simultaneously, and a directed network topology is considered for the first time, and (2) the effects of time delays on synchronisation performance are investigated theoretically and numerically. It is shown that in the absence of communication delays, coupled harmonic oscillators can achieve synchronised oscillatory motion, whereas if communication delays are nonzero at infinitely many sampled-data instants, the synchronisation (or consensus) state is zero. This conclusion can be used as an effective control strategy to stabilise coupled harmonic oscillators in practical applications. Furthermore, it is interesting to find that increasing either communication or input delays will enhance the synchronisation performance of coupled harmonic oscillators. Finally, numerical examples illustrate and visualise the theoretical results.
New psychoactive substance α-PVP in a traffic accident case.
Rojek, Sebastian; Kula, Karol; Maciów-Głąb, Martyna; Kłys, Małgorzata
The problems of new psychoactive substances (NPSs), especially in relation to drivers, constitute an open research area. In this case report, we present a traffic accident in which two of the five vehicle occupants died instantly, while the other three survived with minor injuries. α-Pyrrolidinovalerophenone (α-PVP), an NPS belonging to the category of cathinone derivatives, was detected in the blood samples of the driver and the passengers. We therefore established a detailed procedure for the analysis of α-PVP in blood samples by liquid chromatography-tandem mass spectrometry. After careful validation of this method, α-PVP concentrations in blood samples from the surviving driver and passengers, and from the two deceased, were measured; the concentrations varied from 20 to 650 ng/mL. Access to detailed information from the court files and from explanations provided by the driver and eyewitnesses yielded extremely valuable illustrative details on the symptoms and pharmacological effects of α-PVP on the human organism, thus enriching the body of knowledge on α-PVP abuse.
ERIC Educational Resources Information Center
Oetting, Janna B.; Cleveland, Lesli H.; Cope, Robert F., III
2008-01-01
Purpose: Using a sample of culturally/linguistically diverse children, we present data to illustrate the value of empirically derived combinations of tools and cutoffs for determining eligibility in child language impairment. Method: Data were from 95 4- and 6-year-olds (40 African American, 55 White; 18 with language impairment, 77 without) who…
New methods for image collection and analysis in scanning Auger microscopy
NASA Technical Reports Server (NTRS)
Browning, R.
1985-01-01
While scanning Auger micrographs are used extensively for illustrating the stoichiometry of complex surfaces and for indicating areas of interest for fine point Auger spectroscopy, there are many problems in the quantification and analysis of Auger images. These problems include multiple contrast mechanisms and the lack of meaningful relationships with other Auger data. Collection of multielemental Auger images allows some new approaches to image analysis and presentation. Information about the distribution and quantity of elemental combinations at a surface is retrievable, and particular combinations of elements, such as alloy phases, can be imaged. Results from the precipitate-hardened alloy Al-2124 illustrate multispectral Auger imaging.
Chronochemistry in neurodegeneration
Pastore, Annalisa; Adinolfi, Salvatore
2014-01-01
The problem of distinguishing causes from effects is not a trivial one, as illustrated by the science fiction writer Isaac Asimov in a novel dedicated to an imaginary compound with surprising “chronochemistry” properties. The problem is particularly important when trying to establish the etiology of diseases. Here, we discuss how the problem reflects on our understanding of disease using two specific examples: Alzheimer’s disease (AD) and Friedreich’s ataxia (FRDA). We show how the fibrillar aggregates observed in AD were first denied any interest, then assumed a central focus, and finally receded to being considered the dead-end point of the aggregation pathway. The current view is that the soluble aggregates formed along the aggregation pathway, rather than the mature amyloid fiber, are the cause of disease. Similarly, we illustrate how the identification of causes and effects has been important in the study of FRDA. This disease has alternatively been considered the consequence of oxidative stress, of iron precipitation, or of a reduction of iron–sulfur cluster proteins. We illustrate how new tools have recently been established that allow us to follow the development of the disease. We hope that this review may inspire similar studies in other scientific disciplines. PMID:24744696
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William E.; Siirola, John Daniel
We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
Numerical Problem Solving Using Mathcad in Undergraduate Reaction Engineering
ERIC Educational Resources Information Center
Parulekar, Satish J.
2006-01-01
Experience in using a user-friendly software, Mathcad, in the undergraduate chemical reaction engineering course is discussed. Example problems considered for illustration deal with simultaneous solution of linear algebraic equations (kinetic parameter estimation), nonlinear algebraic equations (equilibrium calculations for multiple reactions and…
Siblings of Oedipus: Brothers and Sisters of Incest Victims.
ERIC Educational Resources Information Center
de Young, Mary
1981-01-01
Investigates the roles and problems of siblings of incest victims, describes the dynamics of the incestuous family, and identifies some behavior problems of children whose siblings were incest victims. Data from two siblings' lives are presented to illustrate points. (Author/DB)
Problems for Clinical Diagnosis.
ERIC Educational Resources Information Center
Campbell, Susan B. Goodman
1979-01-01
The author examines implications of the P.L. 94-142 (the Education for All Handicapped Children Act) definition of learning disability, and cites two cases to illustrate two basic problems: that the definition is both overinclusive and underinclusive, and that it does not consider underlying causes. (CL)
ERIC Educational Resources Information Center
Kostadinov, Boyan
2013-01-01
This article attempts to introduce the reader to computational thinking and solving problems involving randomness. The main technique being employed is the Monte Carlo method, using the freely available software "R for Statistical Computing." The author illustrates the computer simulation approach by focusing on several problems of…
Solving lot-sizing problem with quantity discount and transportation cost
NASA Astrophysics Data System (ADS)
Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei
2013-04-01
Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve, and the incorporation of heuristic methods to tackle such complex problems has become a trend over the past decade. This article considers a lot-sizing problem whose objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases from a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining multi-period replenishment in touch panel manufacturing with quantity discounts and batch transportation. The contributions of this article are an MIP model that obtains an optimal solution when the problem is not too complicated and a GA model that finds a near-optimal solution efficiently when the problem is complicated.
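A minimal brute-force sketch of the cost structure named in the abstract above (ordering, holding, purchase with a quantity discount, and batch transportation, with no shortages). It is not the authors' MIP or GA model, and every number below is invented.

```python
# Enumerate ordering plans for a tiny horizon and pick the cheapest; this is only
# feasible for small instances and stands in for the MIP/GA models in the paper.
from itertools import combinations
from math import ceil

demand = [40, 70, 30, 90, 60]        # demand per period (hypothetical)
order_cost = 100.0                   # fixed cost per order placed
hold_cost = 0.5                      # holding cost per unit per period
batch_size, batch_cost = 50, 30.0    # transportation: cost per (partial) batch

def unit_price(q):                   # all-units quantity discount (hypothetical)
    return 9.0 if q >= 100 else 10.0

def plan_cost(order_periods):
    """Total cost when orders are placed at the given periods (0-indexed)."""
    total, points = 0.0, sorted(order_periods) + [len(demand)]
    for start, nxt in zip(points, points[1:]):
        q = sum(demand[start:nxt])                   # order covers demand until next order
        total += order_cost + q * unit_price(q)
        total += ceil(q / batch_size) * batch_cost   # batch transportation
        inv = q
        for t in range(start, nxt):                  # end-of-period inventory holding
            inv -= demand[t]
            total += hold_cost * inv
    return total

# Enumerate all plans that order in period 0, so no shortage can occur.
best = min(
    (frozenset({0}) | frozenset(extra)
     for r in range(len(demand)) for extra in combinations(range(1, len(demand)), r)),
    key=plan_cost,
)
print(sorted(best), plan_cost(best))
```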
Sarter, Nadine
2008-06-01
The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
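A minimal sketch of the curve at the heart of the calibration problem above: an ordinary least-squares fit of the five-parameter logistic (5PL) followed by numerical inversion to predict an unknown concentration. This is not the robust Bayesian random-effects model the paper develops, and the standards and responses are simulated.

```python
# Fit a 5PL calibration curve to simulated standards, then invert it for an unknown.
import numpy as np
from scipy.optimize import curve_fit, brentq

def five_pl(x, a, d, c, b, g):
    """5PL: response as a function of concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

rng = np.random.default_rng(1)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)   # known standards
true = (0.05, 3.0, 20.0, 1.2, 1.0)
resp = five_pl(conc, *true) + rng.normal(0, 0.05, conc.size)       # noisy readings

params, _ = curve_fit(five_pl, conc, resp, p0=(0.0, 3.0, 10.0, 1.0, 1.0), maxfev=20000)

# "Calibration" step: invert the fitted curve numerically to predict the
# concentration of an unknown sample from its observed response.
unknown_response = 1.5
predicted_conc = brentq(lambda x: five_pl(x, *params) - unknown_response, 1e-3, 1e3)
print(params, predicted_conc)
```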
Two-dimensional simple proportional feedback control of a chaotic reaction system
NASA Astrophysics Data System (ADS)
Mukherjee, Ankur; Searson, Dominic P.; Willis, Mark J.; Scott, Stephen K.
2008-04-01
The simple proportional feedback (SPF) control algorithm may, in principle, be used to attain periodic oscillations in dynamic systems exhibiting low-dimensional chaos. However, if implemented within a discrete control framework with sampling frequency limitations, controller performance may deteriorate. This phenomenon is illustrated using simulations of a chaotic autocatalytic reaction system. A two-dimensional (2D) SPF controller that explicitly takes into account some of the problems caused by limited sampling rates is then derived by introducing suitable modifications to the original SPF method. Using simulations, the performance of the 2D-SPF controller is compared to that of a conventional SPF control law when implemented as a sampled data controller. Two versions of the 2D-SPF controller are described: linear (L2D-SPF) and quadratic (Q2D-SPF). The performance of both the L2D-SPF and Q2D-SPF controllers is shown to be superior to the SPF when controller sampling frequencies are decreased. Furthermore, it is demonstrated that the Q2D-SPF controller provides better fixed point stabilization compared to both the L2D-SPF and the conventional SPF when concentration measurements are corrupted by noise.
Distributed computer taxonomy based on O/S structure
NASA Technical Reports Server (NTRS)
Foudriat, Edwin C.
1985-01-01
The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
NASA Astrophysics Data System (ADS)
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, reflecting the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.
Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2014-01-01
Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is multicriteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, based on the generalized prioritized aggregation operator of HFLNs. First, new operations and a comparison method for HFLNs are provided and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an example is given to illustrate the effectiveness and feasibility of the proposed method, which is then compared with an existing approach.
Benchmark dose analysis via nonparametric regression modeling
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
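A minimal sketch in the spirit of the abstract above: an isotonic (monotone) fit to quantal dose-response data with a bootstrap lower confidence limit on the benchmark dose (BMD). It is not the authors' procedure; the doses, counts, and benchmark response are invented.

```python
# Isotonic dose-response fit plus a parametric bootstrap BMDL, under an
# extra-risk definition of the benchmark dose.
import numpy as np
from sklearn.isotonic import IsotonicRegression

dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0])
n = np.array([50, 50, 50, 50, 50])            # animals per dose group
y = np.array([2, 4, 9, 18, 33])               # animals responding
bmr = 0.10                                    # benchmark response (extra risk)

def bmd_from_counts(y_counts):
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    iso.fit(dose, y_counts / n, sample_weight=n)
    grid = np.linspace(dose[0], dose[-1], 2001)
    risk = iso.predict(grid)
    p0 = risk[0]                              # background risk
    target = p0 + bmr * (1.0 - p0)            # extra-risk definition of the BMD
    above = np.nonzero(risk >= target)[0]
    return grid[above[0]] if above.size else np.nan

bmd = bmd_from_counts(y)

rng = np.random.default_rng(2)
boot = [bmd_from_counts(rng.binomial(n, y / n)) for _ in range(500)]
bmdl = np.nanpercentile(boot, 5)              # 95% lower confidence limit (BMDL)
print(bmd, bmdl)
```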
Asymptotic approximations to posterior distributions via conditional moment equations
Yee, J.L.; Johnson, W.O.; Samaniego, F.J.
2002-01-01
We consider asymptotic approximations to joint posterior distributions in situations where the full conditional distributions referred to in Gibbs sampling are asymptotically normal. Our development focuses on problems where data augmentation facilitates simpler calculations, but results hold more generally. Asymptotic mean vectors are obtained as simultaneous solutions to fixed point equations that arise naturally in the development. Asymptotic covariance matrices flow naturally from the work of Arnold & Press (1989) and involve the conditional asymptotic covariance matrices and first derivative matrices for conditional mean functions. When the fixed point equations admit an analytical solution, explicit formulae are subsequently obtained for the covariance structure of the joint limiting distribution, which may shed light on the use of the given statistical model. Two illustrations are given. © 2002 Biometrika Trust.
Bolam, Bruce; McLean, Carl; Pennington, Andrew; Gillies, Pamela
2006-03-01
This article presents an exploratory qualitative process evaluation study of 'Ambassador' participation in CityNet, an innovative information-communication technology-based (ICT) project that aims to build aspects of social capital and improve access to information and services among disadvantaged groups in Nottingham, UK. A purposive sample of 40 'Ambassador' interviewees was gathered in three waves of data collection. The two emergent analytic themes highlighted how improvements in confidence, self-esteem and social networks produced through participation were mitigated by structural problems in devolving power within the project. This illustrates how concepts of power are important for understanding the process of health promotion interventions using new media.
Soulakova, Julia N; Bright, Brianna C
2013-01-01
A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
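A minimal sketch of the generic comparison studied above: a lower two-sided confidence bound for the difference between experimental and referent proportions compared against the negative noninferiority margin. This is a plain Wald-type bound with an optional Yates-style continuity correction, not the specific Anbar or Falk-Koch variance expressions, and the trial numbers are hypothetical.

```python
# Lower two-sided confidence bound for p_E - p_R, compared with -margin.
from math import sqrt
from scipy.stats import norm

def noninferior(x_e, n_e, x_r, n_r, margin, alpha=0.05, cc=0.0):
    p_e, p_r = x_e / n_e, x_r / n_r
    se = sqrt(p_e * (1 - p_e) / n_e + p_r * (1 - p_r) / n_r)
    z = norm.ppf(1 - alpha / 2)                      # two-sided bound, as in the paper
    lower = (p_e - p_r) - z * se - cc * (1 / n_e + 1 / n_r)
    return lower, lower > -margin                    # claim noninferiority if bound exceeds -margin

# Hypothetical trial: 150 per arm, 82% vs 80% response, 10% noninferiority margin.
print(noninferior(123, 150, 120, 150, margin=0.10))
print(noninferior(123, 150, 120, 150, margin=0.10, cc=0.5))  # Yates-style correction term
```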
A Pilot Evaluation of a Tutorial to Teach Clients and Clinicians About Gambling Game Design.
Turner, Nigel E; Robinson, Janine; Harrigan, Kevin; Ferentzy, Peter; Jindani, Farah
2018-01-01
This paper describes the pilot evaluation of an Internet-based intervention, designed to teach counselors and problem gamblers about how electronic gambling machines (EGMs) work. This study evaluated the tutorial using assessment tools, such as rating scales and test of knowledge about EGMs and random chance. The study results are based on a number of samples, including problem gambling counselors ( n = 25) and problem gamblers ( n = 26). The interactive tutorial was positively rated by both clients and counselors. In addition, we found a significant improvement in scores on a content test about EGM games for both clients and counselors. An analysis of the specific items suggests that the effects of the tutorial were mainly on those items that were most directly related to the content of the tutorial and did not always generalize to other items. This tutorial is available for use with clients and for education counselors. The data also suggest that the tutorial is equally effective in group settings and in individual settings. These results are promising and illustrate that the tool can be used to teach counselors and clients about game design. Furthermore, research is needed to evaluate its impact on gambling behavior.
Nichols, James D.; Hines, James E.; Pollock, Kenneth H.; Hinz, Robert L.; Link, William A.
1994-01-01
The proportion of animals in a population that breeds is an important determinant of population growth rate. Usual estimates of this quantity from field sampling data assume that the probability of appearing in the capture or count statistic is the same for animals that do and do not breed. A similar assumption is required by most existing methods used to test ecologically interesting hypotheses about reproductive costs using field sampling data. However, in many field sampling situations breeding and nonbreeding animals are likely to exhibit different probabilities of being seen or caught. In this paper, we propose the use of multistate capture-recapture models for these estimation and testing problems. This methodology permits a formal test of the hypothesis of equal capture/sighting probabilities for breeding and nonbreeding individuals. Two estimators of breeding proportion (and associated standard errors) are presented, one for the case of equal capture probabilities and one for the case of unequal capture probabilities. The multistate modeling framework also yields formal tests of hypotheses about reproductive costs to future reproduction or survival or both fitness components. The general methodology is illustrated using capture-recapture data on female meadow voles, Microtus pennsylvanicus. Resulting estimates of the proportion of reproductively active females showed strong seasonal variation, as expected, with low breeding proportions in midwinter. We found no evidence of reproductive costs extracted in subsequent survival or reproduction. We believe that this methodological framework has wide application to problems in animal ecology concerning breeding proportions and phenotypic reproductive costs.
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
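A minimal sketch of the "regularization" route described above: a cross-validated lasso that selects a subset of items from a large pool to predict a criterion, i.e., a penalty chosen to minimize estimated prediction error rather than in-sample fit. The item pool and criterion below are simulated, not the personality cohort data used in the paper.

```python
# Cross-validated lasso as a criterion-keyed scale builder on simulated items.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n_people, n_items = 500, 200
items = rng.integers(1, 6, size=(n_people, n_items)).astype(float)  # Likert-type item pool
beta = np.zeros(n_items)
beta[:10] = 0.3                                       # only 10 items truly predictive
criterion = items @ beta + rng.normal(0, 1.0, n_people)

model = LassoCV(cv=5).fit(items, criterion)           # penalty chosen by cross-validation
selected = np.flatnonzero(model.coef_)                # items retained in the keyed scale
print(len(selected), selected[:15])
```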
Needed: Clean Water. Problems of Pollution.
ERIC Educational Resources Information Center
Environmental Protection Agency, Washington, DC.
This pamphlet utilizes illustrations and captions to indicate the demands currently made on our water resources and the problems associated with that demand. Current and future solutions are described with suggestions for personal conservation efforts to help provide enough clean water for everyone in the future. (CS)
A Program for Automatic Generation of Dimensionless Parameters.
ERIC Educational Resources Information Center
Hundal, M. S.
1982-01-01
Following a review of the theory of dimensional analysis, presents a method for generating all of the possible sets of nondimensional parameters for a given problem, a digital computer program to implement the method, and a mechanical design problem to illustrate its use. (Author/JN)
Multiple Imputation for Multivariate Missing-Data Problems: A Data Analyst's Perspective.
ERIC Educational Resources Information Center
Schafer, Joseph L.; Olsen, Maren K.
1998-01-01
The key ideas of multiple imputation for multivariate missing data problems are reviewed. Software programs available for this analysis are described, and their use is illustrated with data from the Adolescent Alcohol Prevention Trial (W. Hansen and J. Graham, 1991). (SLD)
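A minimal sketch of the multiple-imputation idea reviewed above, using scikit-learn's IterativeImputer run several times with posterior sampling to produce multiple completed data sets. This is a generic illustration, not the specific software packages or the trial data discussed in the article.

```python
# Create m completed data sets and pool an analysis across them (point estimate only).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0, 0, 0], [[1, .5, .3], [.5, 1, .4], [.3, .4, 1]], size=300)
X[rng.random(X.shape) < 0.15] = np.nan                 # ~15% values missing at random

m = 5                                                   # number of imputations
completed = [
    IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
    for i in range(m)
]

# Analyze each completed data set, then average the estimates across imputations.
means = np.array([Xc.mean(axis=0) for Xc in completed])
print(means.mean(axis=0))
```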
I Can Problem Solve (ICPS): A Cognitive Approach to Preventing Early High Risk Behaviors.
ERIC Educational Resources Information Center
Shure, Myrna B.; And Others
This outline presents a program designed to teach children "how" to think, not what to think--so as to help them solve typical interpersonal problems with peers and adults. Through games, stories, puppets, illustrations, and role plays, children learn a pre-problem solving vocabulary, feeling word concepts, and ways to arrive at solutions to…
ERIC Educational Resources Information Center
Puchner, Laurel; Markowitz, Linda
2015-01-01
In this article Puchner and Markowitz illustrate a major problem in education and in teacher education, the underlying dynamics of which are a national problem. The problem of negative beliefs about African American families in schools is not a new idea but actually stems from unfounded and untested assumptions about the way the world works and…
ERIC Educational Resources Information Center
Kribbs, Elizabeth E.; Rogowsky, Beth A.
2016-01-01
Mathematics word-problems continue to be an insurmountable challenge for many middle school students. Educators have used pictorial and schematic illustrations within the classroom to help students visualize these problems. However, the data shows that pictorial representations can be more harmful than helpful in that they only display objects or…
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
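A minimal sketch of the same three tasks outside of a spreadsheet, useful for checking Excel results: solving a linear system, inverting a matrix, and solving a small linear program. The numbers are arbitrary examples, not those from the article.

```python
# Linear system, matrix inverse, and a small LP with numpy/scipy.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)          # linear system A x = b
A_inv = np.linalg.inv(A)           # matrix inversion

# Linear program: maximize 3x + 2y subject to x + y <= 4, x <= 3, x, y >= 0.
# linprog minimizes, so negate the objective.
res = linprog(c=[-3.0, -2.0], A_ub=[[1.0, 1.0], [1.0, 0.0]], b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
print(x, A_inv, res.x, -res.fun)
```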
Fort, J C
1988-01-01
We present an application of the Kohonen algorithm to the traveling salesman problem: using only this algorithm, without an energy function or any parameter chosen ad hoc, we found good suboptimal tours. We give a neural model version of this algorithm that is closer to classical neural networks. This is illustrated with various numerical examples.
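A minimal sketch of a Kohonen-style (self-organizing ring) heuristic for the traveling salesman problem, in the spirit of the approach described above: a ring of nodes pulled toward randomly presented cities, with a shrinking neighborhood. The learning rate, neighborhood decay, and node count below are illustrative choices, not values from the paper.

```python
# Self-organizing ring heuristic for the TSP on random cities.
import numpy as np

rng = np.random.default_rng(5)
cities = rng.random((30, 2))                 # random cities in the unit square
n_nodes = 8 * len(cities)
nodes = rng.random((n_nodes, 2))             # ring of nodes, initially random

radius, lr = n_nodes / 10.0, 0.8
for it in range(20000):
    city = cities[rng.integers(len(cities))]
    winner = np.argmin(((nodes - city) ** 2).sum(axis=1))
    # circular distance of every node to the winner along the ring
    d = np.abs(np.arange(n_nodes) - winner)
    d = np.minimum(d, n_nodes - d)
    influence = np.exp(-(d ** 2) / (2 * max(radius, 1e-6) ** 2))
    nodes += lr * influence[:, None] * (city - nodes)
    radius *= 0.9997                         # shrink the neighborhood
    lr *= 0.99997                            # decay the learning rate

# Read the tour off the ring: visit cities in order of their winning node.
order = np.argsort([np.argmin(((nodes - c) ** 2).sum(axis=1)) for c in cities])
tour = cities[order]
length = np.linalg.norm(np.diff(np.vstack([tour, tour[:1]]), axis=0), axis=1).sum()
print(order, length)
```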
Direct SQP-methods for solving optimal control problems with delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goellmann, L.; Bueskens, C.; Maurer, H.
The maximum principle for optimal control problems with delays leads to a boundary value problem (BVP) which is retarded in the state and advanced in the costate function. Based on shooting techniques, solution methods for this type of BVP have been proposed. In recent years, direct optimization methods have been favored for solving control problems without delays. Direct methods approximate the control and the state over a fixed mesh and solve the resulting NLP problem with SQP methods. These methods dispense with the costate function and have been shown to be robust and efficient. In this paper, we propose a direct SQP method for retarded control problems. In contrast to conventional direct methods, only the control variable is approximated by, e.g., spline functions. The state is computed via a high-order Runge-Kutta type algorithm and does not enter the NLP problem explicitly through an equation. This approach reduces the number of optimization variables considerably and is implementable even on a PC. Our method is illustrated by the numerical solution of retarded control problems with constraints. In particular, we consider the control of a continuous stirred tank reactor which has previously been solved by dynamic programming. This example illustrates the robustness and efficiency of the proposed method. Open questions concerning sufficient conditions and convergence of discretized NLP problems are discussed.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints
NASA Astrophysics Data System (ADS)
Li, Jinquan; Feng, Shuang; Mi, Honghai
In this work, a kind of nonlinear programming problem with a non-differentiable objective function and with constraints expressed by a system of mixed fuzzy relation equations is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties, and we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate our algorithm.
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Euclidean, Spherical, and Hyperbolic Shadows
ERIC Educational Resources Information Center
Hoban, Ryan
2013-01-01
Many classical problems in elementary calculus use Euclidean geometry. This article takes such a problem and solves it in hyperbolic and in spherical geometry instead. The solution requires only the ability to compute distances and intersections of points in these geometries. The dramatically different results we obtain illustrate the effect…
The Adolescent Runaway: A National Problem.
ERIC Educational Resources Information Center
Ritter, Bruce
1979-01-01
The author discusses the problems of teenage runaways: abuse which forces many to leave home, violence and sexual exploitation, lack of help from the child welfare bureaucracy. He illustrates with descriptions of several youngsters at his Covenant House crisis center, Under Twenty-One, in New York City. (SJL)
Hierarchical statistical modeling of xylem vulnerability to cavitation.
Ogle, Kiona; Barber, Jarrett J; Willson, Cynthia; Thompson, Brenda
2009-01-01
Cavitation of xylem elements diminishes the water transport capacity of plants, and quantifying xylem vulnerability to cavitation is important to understanding plant function. Current approaches to analyzing hydraulic conductivity (K) data to infer vulnerability to cavitation suffer from problems such as the use of potentially unrealistic vulnerability curves, difficulty interpreting parameters in these curves, a statistical framework that ignores sampling design, and an overly simplistic view of uncertainty. This study illustrates how two common curves (exponential-sigmoid and Weibull) can be reparameterized in terms of meaningful parameters: maximum conductivity (k(sat)), water potential (-P) at which percentage loss of conductivity (PLC) =X% (P(X)), and the slope of the PLC curve at P(X) (S(X)), a 'sensitivity' index. We provide a hierarchical Bayesian method for fitting the reparameterized curves to K(H) data. We illustrate the method using data for roots and stems of two populations of Juniperus scopulorum and test for differences in k(sat), P(X), and S(X) between different groups. Two important results emerge from this study. First, the Weibull model is preferred because it produces biologically realistic estimates of PLC near P = 0 MPa. Second, stochastic embolisms contribute an important source of uncertainty that should be included in such analyses.
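A minimal, non-Bayesian sketch of the Weibull-form vulnerability curve discussed above: fit percentage loss of conductivity (PLC) versus xylem tension by ordinary least squares, then report the derived quantities the authors emphasize, the tension at 50% loss (P50) and the slope of the curve there. The data are simulated, and this ignores the hierarchical and sampling-design structure the paper models.

```python
# Fit PLC(P) = 100 * (1 - exp(-(P/b)^c)) and derive P50 and the slope at P50.
import numpy as np
from scipy.optimize import curve_fit

def plc_weibull(P, b, c):
    """PLC (%) at tension P (MPa, taken positive) under a Weibull vulnerability curve."""
    return 100.0 * (1.0 - np.exp(-((P / b) ** c)))

rng = np.random.default_rng(6)
tension = np.linspace(0.5, 6.0, 12)
plc_obs = plc_weibull(tension, 3.0, 2.5) + rng.normal(0, 4.0, tension.size)

(b_hat, c_hat), _ = curve_fit(plc_weibull, tension, plc_obs, p0=(2.0, 2.0))

p50 = b_hat * np.log(2.0) ** (1.0 / c_hat)            # tension at 50% loss of conductivity
eps = 1e-4                                            # numerical slope of the PLC curve at P50
s50 = (plc_weibull(p50 + eps, b_hat, c_hat) - plc_weibull(p50 - eps, b_hat, c_hat)) / (2 * eps)
print(b_hat, c_hat, p50, s50)
```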
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
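A minimal sketch of the underlying idea of banding a sample covariance matrix when the variables have a known ordering. This is the simple "hard" banding estimator, not the convex, tapered, data-adaptive estimator introduced in the paper, and bandwidth selection is omitted; the simulated covariance is an assumption for illustration.

```python
# Hard-band the sample covariance: zero out entries beyond a fixed bandwidth.
import numpy as np

rng = np.random.default_rng(7)
p, n, bandwidth = 40, 100, 3

# Simulate data whose true covariance decays off the diagonal (AR(1)-like).
truth = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), truth, size=n)

S = np.cov(X, rowvar=False)                                  # sample covariance
mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= bandwidth
S_banded = np.where(mask, S, 0.0)                            # entries beyond the band set to zero

# Compare estimation error with and without banding.
print(np.linalg.norm(S - truth, "fro"), np.linalg.norm(S_banded - truth, "fro"))
```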
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the powers of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
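A minimal sketch of the one-sample version of the recipe described above: a maximum-type statistic for H0: mu = 0 whose null distribution is approximated by a Gaussian-multiplier bootstrap. The multiplier bootstrap is a stand-in for the parametric bootstrap used in the paper, and the screening step and the refinements in the authors' HDtest package are omitted; the data are simulated.

```python
# Max-type one-sample mean test with a Gaussian-multiplier bootstrap.
import numpy as np

rng = np.random.default_rng(8)
n, p = 60, 200
X = rng.standard_normal((n, p))
X[:, :5] += 0.6                                   # a few truly nonzero means

xbar, sd = X.mean(axis=0), X.std(axis=0, ddof=1)
T_obs = np.max(np.abs(np.sqrt(n) * xbar / sd))    # max-type test statistic

B, T_boot = 500, np.empty(500)
for b in range(B):
    e = rng.standard_normal(n)                    # Gaussian multipliers
    T_boot[b] = np.max(np.abs((e @ (X - xbar)) / (np.sqrt(n) * sd)))

p_value = (1 + np.sum(T_boot >= T_obs)) / (B + 1)
print(T_obs, p_value)
```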
The A[subscript 1c] Blood Test: An Illustration of Principles from General and Organic Chemistry
ERIC Educational Resources Information Center
Kerber, Robert C.
2007-01-01
The glycated hemoglobin blood test, usually designated as the A[subscript 1c] test, is a key measure of the effectiveness of glucose control in diabetics. The chemistry of glucose in the bloodstream, which underlies the test and its impact, provides an illustration of the importance of chemical equilibrium and kinetics to a major health problem.…
Decision rules for allocation of finances to health systems strengthening
Morton, Alec; Thomas, Ranjeeta; Smith, Peter C.
2017-01-01
A key dilemma in global health is how to allocate funds between disease-specific “vertical projects” on the one hand and “horizontal programmes” which aim to strengthen the entire health system on the other. While economic evaluation provides a way of approaching the prioritisation of vertical projects, it provides less guidance on how to prioritise between horizontal and vertical spending. We approach this problem by formulating a mathematical program which captures the complementary benefits of funding both vertical projects and horizontal programmes. We show that our solution to this math program has an appealing intuitive structure. We illustrate our model by computationally solving two specialised versions of this problem, with illustrations based on the problem of allocating funding for infectious diseases in sub-Saharan Africa. We conclude by reflecting on how such a model may be developed in the future and used to guide empirical data collection and theory development. PMID:27394006
Frequent methodological errors in clinical research.
Silva Aycaguer, L C
2018-03-07
Several errors that are frequently present in clinical research are listed, discussed and illustrated. A distinction is made between what can be considered an "error" arising from ignorance or neglect and what stems from a lack of integrity on the part of researchers, although it is recognized and documented that it is not easy to establish when we are dealing with one and when with the other. The work does not intend to make an exhaustive inventory of such problems, but focuses on those that, while frequent, are usually less evident or less emphasized in the various lists of this type of problem that have been published. We have chosen to develop in detail the examples that illustrate the problems identified, instead of giving a list of errors accompanied by a superficial description of their characteristics. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
High-Fidelity Real-Time Simulation on Deployed Platforms
2010-08-26
We illustrate our approach with three examples: a two-dimensional Helmholtz acoustics "horn" problem; a three-dimensional transient linear heat conduction problem in a "Swiss Cheese" configuration Ω; and a three-dimensional unsteady incompressible Navier-Stokes low-Reynolds-number problem.
ERIC Educational Resources Information Center
Maples, James N.; Taylor, William V.
2013-01-01
In this instructional article, we describe a non-traditional course assignment in which we ask students in our social problems courses to write, illustrate, and present a children's book about a social problem as part of the process of learning. Over the course of the semester, students utilize guided handouts to create a children's book exploring…
ERIC Educational Resources Information Center
Foley, Greg
2014-01-01
A problem that illustrates two ways of computing the break-even radius of insulation is outlined. The problem is suitable for students who are taking an introductory module in heat transfer or transport phenomena and who have some previous knowledge of the numerical solution of nonlinear algebraic equations. The potential for computer algebra,…
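A minimal sketch of the two quantities involved in the problem above, for a small insulated pipe: the critical radius of insulation (k/h for a cylinder) and the break-even radius at which the insulated heat loss drops back to the bare-pipe value, found here by a numerical root solve. The property values are illustrative, not those of the article.

```python
# Critical radius and break-even radius of insulation for a cylinder.
from math import log, pi
from scipy.optimize import brentq

k = 0.20          # insulation thermal conductivity, W/(m K)
h = 10.0          # outside convection coefficient, W/(m^2 K)
r1 = 0.010        # bare pipe outer radius, m (must be below k/h for the effect to occur)

r_crit = k / h    # critical radius: heat loss is maximised here

# Per unit length and per unit temperature difference, the thermal resistance of
# the insulated pipe is R(r) = ln(r/r1)/(2*pi*k) + 1/(2*pi*h*r); the bare pipe has
# R_bare = 1/(2*pi*h*r1). The break-even radius solves R(r) = R_bare for r > r_crit.
def resistance_gap(r):
    return log(r / r1) / (2 * pi * k) + 1 / (2 * pi * h * r) - 1 / (2 * pi * h * r1)

r_breakeven = brentq(resistance_gap, r_crit + 1e-6, 1.0)
print(r_crit, r_breakeven)
```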
Application of artificial intelligence to impulsive orbital transfers
NASA Technical Reports Server (NTRS)
Burns, Rowland E.
1987-01-01
A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
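A minimal worked example of the orbital-mechanics problem used for illustration above: the two impulsive burns of a Hohmann transfer between circular coplanar orbits. The numbers correspond to an idealised LEO-to-GEO transfer around Earth and are assumptions for the sketch, not values from the report.

```python
# Delta-v and transfer time for a Hohmann transfer between circular orbits.
from math import sqrt, pi

mu = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r1 = 6_778_000.0           # initial circular orbit radius (~400 km altitude), m
r2 = 42_164_000.0          # final circular orbit radius (GEO), m

a_t = (r1 + r2) / 2.0                                   # transfer ellipse semi-major axis
dv1 = sqrt(mu / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1.0)  # burn to enter the transfer ellipse
dv2 = sqrt(mu / r2) * (1.0 - sqrt(2 * r1 / (r1 + r2)))  # burn to circularise at r2
t_transfer = pi * sqrt(a_t ** 3 / mu)                   # half the transfer-ellipse period

print(dv1, dv2, dv1 + dv2, t_transfer / 3600.0)         # m/s, m/s, m/s, hours
```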
NASA Astrophysics Data System (ADS)
Williams, Christopher J.; Moffitt, Christine M.
2003-03-01
An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
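A minimal sketch of the estimation problem described above: a discrete-grid Bayesian posterior for individual-fish prevalence from pooled assay results with imperfect sensitivity and specificity, a simple stand-in for the Gibbs-sampling analysis in the paper. The pool size, counts, and assay properties are invented.

```python
# Grid posterior for prevalence from pooled testing with imperfect assay.
import numpy as np
from scipy.stats import binom

pool_size = 5          # fish per pooled sample
n_pools = 60           # pools tested
x_pos = 11             # pools testing positive
sens, spec = 0.95, 0.98

prevalence = np.linspace(0.0, 1.0, 2001)                 # grid over individual prevalence
p_pool_infected = 1.0 - (1.0 - prevalence) ** pool_size  # P(pool contains >= 1 infected fish)
p_test_pos = sens * p_pool_infected + (1.0 - spec) * (1.0 - p_pool_infected)

posterior = binom.pmf(x_pos, n_pools, p_test_pos)        # flat prior on prevalence
posterior /= posterior.sum()

post_mean = np.sum(prevalence * posterior)
cdf = np.cumsum(posterior)
ci = (prevalence[np.searchsorted(cdf, 0.025)], prevalence[np.searchsorted(cdf, 0.975)])
print(post_mean, ci)
```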
Children of Cocaine: Facing the Issues.
ERIC Educational Resources Information Center
Fact Find, 1990
1990-01-01
Statistical data illustrate the incidence of babies who have been prenatally exposed to cocaine. The damaging effects of maternal cocaine use on the fetus, infant, and young child are described, including: (1) prenatal strokes, malformed kidneys and limbs, and deformed hearts and lungs; (2) physical problems, social and emotional problems, and…
Nutrition and Its Relationship to Autism.
ERIC Educational Resources Information Center
Adams, Lynn; Conn, Susan
1997-01-01
Discusses the relationship between food allergies and sensitivities and autism. Information is provided on two dietary problems (candidiasis and gluten/casein intolerance) and case histories of two three-year-old children with autism are provided to illustrate each of the problems. Diet and vitamin therapy interventions are also described.…
A Design To Improve Children's Competencies in Solving Mathematical Word Problems.
ERIC Educational Resources Information Center
Zimmerman, Helene
A discrepancy exists between children's ability to compute and their ability to solve mathematical word problems. The literature suggests a variety of methods that have been attempted to improve this skill with varying success. The utilization of manipulatives, visualization, illustration, and emphasis on improving listening skills all were…
Teaching Real-World Applications of Business Statistics Using Communication to Scaffold Learning
ERIC Educational Resources Information Center
Green, Gareth P.; Jones, Stacey; Bean, John C.
2015-01-01
Our assessment research suggests that quantitative business courses that rely primarily on algorithmic problem solving may not produce the deep learning required for addressing real-world business problems. This article illustrates a strategy, supported by recent learning theory, for promoting deep learning by moving students gradually from…
On the numerical treatment of Coulomb forces in scattering problems
NASA Astrophysics Data System (ADS)
Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.; Gasaneo, G.; Frapiccini, A. L.
2012-11-01
We investigate the limiting procedures to obtain Coulomb interactions from short-range potentials. The application of standard techniques used for the two-body case (exponential and sharp cutoff) to the three-body break-up problem is illustrated numerically by considering the Temkin-Poet (TP) model of e-H processes.
ERIC Educational Resources Information Center
Foster, Robert J.
Intended mainly as a source book for instructors in area training programs, this handbook contains summary accounts of events illustrating problems frequently met by Americans working overseas, especially those providing technical assistance in developing nations. Examples are drawn from case studies, interviews, anthropology texts, and other…
A Non-Traditional Natural Science Course for Off-Campus Locations.
ERIC Educational Resources Information Center
Payez, Joseph
Science faculty at small community colleges often face the problem of teaching courses at off-campus locations without laboratory facilities or equipment. An introductory physical science course offered at Southampton Correctional Center in Capron, Virginia, illustrates one approach to this problem. First, the instructor met with students prior to…
Digital Maps, Matrices and Computer Algebra
ERIC Educational Resources Information Center
Knight, D. G.
2005-01-01
The way in which computer algebra systems, such as Maple, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in…
Teacher Certification: The Problem in the Pacific Northwest.
ERIC Educational Resources Information Center
Leonard, Leo D.
1985-01-01
Teacher certification procedures in the Pacific Northwest are used to illustrate the kinds of problems facing the nation in terms of teacher certification and program accreditation. Proposals for change include: cooperation between public schools and universities; five year programs; and use of research to study the teacher education process. (DF)
Linguistic Problems in the Work of the Translator.
ERIC Educational Resources Information Center
Szymczak, M.
Noting that no clear and adequate basis for a theory of translation exists at this time, this article examines problems common to three fundamental elements of translation. Illustrative examples, taken from Slavic languages, relate to discussion of grammatical, semantic-lexical, and stylistic aspects of translation. Various contributions of…
Digesting Student-Authored Story Problems
ERIC Educational Resources Information Center
Alexander, Cathleen M.; Ambrose, Rebecca C.
2010-01-01
When students are asked to write original story problems about fractional amounts, it can illustrate their misunderstandings about fractions. Think about the situations students would describe to model 1/2 + 2/3. Three elements, in particular, challenge students: (1) Which of three models (region, or area; measure; or set) is best suited for a…
Sociodrama: Group Creative Problem Solving in Action.
ERIC Educational Resources Information Center
Riley, John F.
1990-01-01
Sociodrama is presented as a structured, yet flexible, method of encouraging the use of creative thinking to examine a difficult problem. An example illustrates the steps involved in putting sociodrama into action. Production techniques useful in sociodrama include the soliloquy, double, role reversal, magic shop, unity of opposites, and audience…
Conceptual Transformation and Cognitive Processes in Origami Paper Folding
ERIC Educational Resources Information Center
Tenbrink, Thora; Taylor, Holly A.
2015-01-01
Research on problem solving typically does not address tasks that involve following detailed and/or illustrated step-by-step instructions. Such tasks are not seen as cognitively challenging problems to be solved. In this paper, we challenge this assumption by analyzing verbal protocols collected during an Origami folding task. Participants…
Working In-Vivo with Client Sense of Unlovability
ERIC Educational Resources Information Center
Tsai, Mavis; Reed, Richard
2012-01-01
Clients sometimes react negatively when their in-session problem behavior is simply blocked. This article illustrates how a FAP (Functional Analytic Psychotherapy) therapist can work effectively in session with a client's problem feeling of unlovability by: 1) understanding its antecedents and functions, 2) using therapeutic love to reinforce…
Resolving Relationship Problems in Communication Disorders Treatment: A Systems Approach.
ERIC Educational Resources Information Center
Stone, Judith R.
1992-01-01
Systems concepts from general systems theory and family therapy literature are presented as analytical tools to help professionals understand and change interactions with their clients having communication disorders. Two case examples that illustrate relationship problems are presented, and approaches taken to their resolution are described.…
Seven Special Kids: Employment Problems of Handicapped Youth.
ERIC Educational Resources Information Center
Smith, R. C.
A study of the employment problems facing physically and mentally handicapped youth is reported. To illustrate the main points, results of extensive interviews with seven handicapped youth are juxtaposed with statistics and findings. The study looks at the continuum of services offered to handicapped individuals, including understanding the…
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
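A minimal single-shooting sketch (in Python; not the paper's jump-discontinuous multiple-shooting algorithm) illustrates the core idea of converting a two-point boundary value problem into an algebraic equation: the unknown initial slope s is adjusted until the right-hand boundary condition is satisfied. The model problem x'' = -x with x(0) = 0, x(pi/2) = 1 is assumed here purely for illustration.

```python
# Minimal single-shooting sketch for a smooth two-point BVP (not the paper's
# jump-discontinuous multiple-shooting algorithm): the boundary condition is
# turned into an algebraic equation in the unknown initial slope s.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):
    # y = [x, x']; model problem x'' = -x with x(0) = 0, x(pi/2) = 1
    return [y[1], -y[0]]

def shoot(s):
    # Integrate forward from the guessed initial slope s and return the
    # mismatch in the right-hand boundary condition.
    sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, s], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0

s_star = brentq(shoot, 0.1, 5.0)   # exact solution x = sin(t) gives s = 1
print(f"recovered initial slope: {s_star:.6f}")
```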
ERIC Educational Resources Information Center
Isitan, Sonnur
2016-01-01
The purpose of this study was to examine the distribution of topics featured in illustrated storybooks that address preschool children. The sample of the current study included a total of 1,050 illustrated storybooks published in Turkish between 1980 and 2013. Books for pre-school children that incorporated the components of setting, attempt, and…
Statistical analysis tables for truncated or censored samples
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Cooley, C. G.
1971-01-01
Compilation describes characteristics of truncated and censored samples and presents six illustrations of the practical use of tables for computing mean and variance estimates for the normal distribution from selected samples.
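As a modern stand-in for the compilation's tables, the sketch below (in Python; the censoring point, parameter values, and sample size are assumptions made for illustration) computes maximum-likelihood estimates of the mean and standard deviation from a singly right-censored normal sample.

```python
# Sketch: numerical MLE for a singly right-censored normal sample with a known
# censoring point c (an illustration only; the compilation's tables are not used).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
c = 1.0                                 # censoring point
full = rng.normal(loc=0.5, scale=1.2, size=200)
observed = full[full <= c]              # exact observations
n_cens = np.sum(full > c)               # number of censored observations

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(observed, mu, sigma).sum()      # contribution of observed values
    ll += n_cens * norm.logsf(c, mu, sigma)          # censored part: P(X > c) per censored value
    return -ll

fit = minimize(neg_loglik, x0=[np.mean(observed), 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```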
Illustrated Examples of the Effects of Risk Preferences and Expectations on Bargaining Outcomes.
ERIC Educational Resources Information Center
Dickinson, David L.
2003-01-01
Describes bargaining examples that use expected utility theory. Provides example results that are intuitive, shown graphically and algebraically, and offers upper-level student samples that illustrate the usefulness of the expected utility theory. (JEH)
Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S.
2016-01-01
Motivation: Modern high-throughput biotechnologies such as microarray are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only a limited number of samples are assayed, giving rise to the classical ‘large p, small n’ problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Results: Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework, in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods, including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the ‘large p, small n’ problem. Availability and implementation: Our method is implemented in the R package IPBT, which is freely available from https://github.com/benliemory/IPBT. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26519502
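The sketch below (Python, not the IPBT implementation) illustrates the general idea only: fit an informative inverse-gamma prior to a large collection of historical per-gene variances and shrink the noisy variances from a small new study toward it. The prior-fitting step is a crude method-of-moments calculation and all data are simulated.

```python
# Sketch of building an informative prior from historical data (illustrative,
# not the IPBT method): historical per-gene variances define an inverse-gamma
# prior; new-study variances are replaced by their posterior means.
import numpy as np

rng = np.random.default_rng(1)
G, n_hist, n_new = 2000, 50, 4          # genes, historical samples, new samples
true_var = rng.gamma(shape=2.0, scale=0.5, size=G)

hist = rng.normal(0, np.sqrt(true_var)[:, None], size=(G, n_hist))
new = rng.normal(0, np.sqrt(true_var)[:, None], size=(G, n_new))

s2_hist = hist.var(axis=1, ddof=1)      # well-estimated historical variances
s2_new = new.var(axis=1, ddof=1)        # noisy new-study variances (d = n_new - 1 df)

# Crude method-of-moments fit of an inverse-gamma prior to the historical variances.
m, v = s2_hist.mean(), s2_hist.var()
alpha = m**2 / v + 2.0
beta = m * (alpha - 1.0)

# Posterior-mean shrinkage of each new-study variance toward the prior.
d = n_new - 1
s2_shrunk = (2.0 * beta + d * s2_new) / (2.0 * alpha + d - 2.0)

print("mean abs error, raw   :", np.abs(s2_new - true_var).mean().round(3))
print("mean abs error, shrunk:", np.abs(s2_shrunk - true_var).mean().round(3))
```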
Information processing of motion in facial expression and the geometry of dynamical systems
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.
2005-01-01
An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a sample of population could provide effective algorithms for extraction of such features. In cases where frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
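A tiny numerical illustration of upscaling from sample-scale to block-scale conductivity (Python; the layer values are invented, and the rule shown is the textbook layered-medium result rather than anything specific to this paper): the effective conductivity of a layered block is the arithmetic mean of the layer conductivities for flow parallel to the layers and the harmonic mean for flow across them.

```python
# Layered-medium upscaling sketch (illustrative values, not from the paper):
# arithmetic mean for flow parallel to the layers, harmonic mean for flow
# perpendicular to them.
import numpy as np

k_layers = np.array([1e-6, 5e-5, 2e-4, 8e-6])       # sample-scale conductivities (m/s)
thickness = np.array([0.5, 1.0, 0.75, 0.25])         # layer thicknesses (m)
w = thickness / thickness.sum()

k_parallel = np.sum(w * k_layers)                     # flow along the layers
k_perpendicular = 1.0 / np.sum(w / k_layers)          # flow across the layers
print(f"parallel: {k_parallel:.2e} m/s, perpendicular: {k_perpendicular:.2e} m/s")
```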
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial to the physics of image formation and its detection in the presence of noises. The rest of the chapter focuses on the resolution of the inverse problem
Promoting the Multidimensional Character of Scientific Reasoning †
Bradshaw, William S.; Nelson, Jennifer; Adams, Byron J.; Bell, John D.
2017-01-01
This study reports part of a long-term program to help students improve scientific reasoning using higher-order cognitive tasks set in the discipline of cell biology. This skill was assessed using problems requiring the construction of valid conclusions drawn from authentic research data. We report here efforts to confirm the hypothesis that data interpretation is a complex, multifaceted exercise. Confirmation was obtained using a statistical treatment showing that various such problems rank students differently—each contains a unique set of cognitive challenges. Additional analyses of performance results have allowed us to demonstrate that individuals differ in their capacity to navigate five independent generic elements that constitute successful data interpretation: biological context, connection to course concepts, experimental protocols, data inference, and integration of isolated experimental observations into a coherent model. We offer these aspects of scientific thinking as a “data analysis skills inventory,” along with usable sample problems that illustrate each element. Additionally, we show that this kind of reasoning is rigorous in that it is difficult for most novice students, who are unable to intuitively implement strategies for improving these skills. Instructors armed with knowledge of the specific challenges presented by different types of problems can provide specific helpful feedback during formative practice. The use of this instructional model is most likely to require changes in traditional classroom instruction. PMID:28512524
Assessing problem-solving skills in construction education with the virtual construction simulator
NASA Astrophysics Data System (ADS)
Castronovo, Fadi
The ability to solve complex problems is an essential skill that a construction and project manager must possess when entering the architectural, engineering, and construction industry. Such ability requires a mixture of problem-solving skills, ranging from lower to higher order thinking skills, composed of cognitive and metacognitive processes. These skills include the ability to develop and evaluate construction plans and manage the execution of such plans. However, in a typical construction program, introducing students to such complex problems can be a challenge, and most commonly the learner is presented with only part of a complex problem. To address this challenge, the traditional methodology of delivering design, engineering, and construction instruction has been going through a technological revolution, due to the rise of computer-based technology. For example, in construction classrooms and other disciplines, simulations and educational games are being utilized to support the development of problem-solving skills. Previous engineering education research has illustrated the high potential that simulations and educational games have for engaging lower and higher order thinking skills. Such research illustrated their capacity to support the development of problem-solving skills. This research presents evidence supporting the theory that educational simulation games can help with the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems. The educational simulation game employed in this study is the Virtual Construction Simulator (VCS). The VCS is a game developed to provide students with an engaging learning activity that simulates the planning and managing phases of a construction project. Assessment of the third iteration of the VCS(3) game has shown pedagogical value in promoting students' motivation and a basic understanding of construction concepts. To further evaluate the benefits for problem-solving skills, a new version of the VCS(4) was developed, with new building modules and a new assessment framework. The design and development of the VCS4 leveraged research in educational psychology, multimedia learning, human-computer interaction, and Building Information Modeling. In this dissertation, the researcher aimed to evaluate the pedagogical value of the VCS4 in fostering problem-solving skills. To answer the research questions, a crossover repeated measures quasi-experiment was designed to assess the educational gains that the VCS can provide to construction education. A group of 34 students attending a fourth-year construction course at a university in the United States was chosen to participate in the experiment. The three learning modules of the VCS were used, which challenged the students to plan and manage the construction process of a wooden pavilion, the steel erection of a dormitory, and the concrete placement of the same dormitory. Based on the results, the researcher was able to provide evidence supporting the hypothesis that the chosen sample of construction students was able to gain and retain the problem-solving skills necessary to solve complex construction simulation problems, regardless of the sequence in which the modules were played. In conclusion, the presented results provide evidence supporting the theory that educational simulation games can help with the learning and retention of transferable problem-solving skills, which are necessary to solve complex construction problems.
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
Fuzzy Hungarian Method for Solving Intuitionistic Fuzzy Travelling Salesman Problem
NASA Astrophysics Data System (ADS)
Prabakaran, K.; Ganesan, K.
2018-04-01
The travelling salesman problem is to identify the shortest route by which the salesman can visit all the places and return to the starting place with minimum cost. We develop a fuzzy version of the Hungarian algorithm for the solution of the intuitionistic fuzzy travelling salesman problem using triangular intuitionistic fuzzy numbers, without converting them to a classical travelling salesman problem. The proposed method is easy to understand and to implement for finding solutions of intuitionistic fuzzy travelling salesman problems occurring in real-life situations. A numerical example is provided to illustrate the proposed method.
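The sketch below (Python) is not the authors' fuzzy Hungarian algorithm: it only illustrates the general flow by defuzzifying triangular fuzzy travel costs with a simple centroid score (a stand-in for an intuitionistic score function) and applying the classical Hungarian method to the resulting crisp matrix. The cost values are invented, and the assignment solution is only a relaxation of the TSP, since subtours are not excluded.

```python
# Illustrative flow only: rank triangular fuzzy costs with a centroid score,
# then solve the crisp assignment relaxation with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Each cost is a triangular fuzzy number (a, b, c); INF blocks the diagonal.
INF = (1e6, 1e6, 1e6)
fuzzy_cost = [
    [INF,        (2, 4, 6),  (5, 7, 9),  (1, 3, 5)],
    [(2, 4, 6),  INF,        (3, 5, 7),  (6, 8, 10)],
    [(5, 7, 9),  (3, 5, 7),  INF,        (2, 3, 4)],
    [(1, 3, 5),  (6, 8, 10), (2, 3, 4),  INF],
]

def score(tfn):
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0        # centroid-style ranking score

crisp = np.array([[score(x) for x in row] for row in fuzzy_cost])
rows, cols = linear_sum_assignment(crisp)     # Hungarian method on crisp costs
print(list(zip(rows, cols)), "total score:", crisp[rows, cols].sum())
```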
Modelling the effect of immigration on drinking behaviour.
Xiang, Hong; Zhu, Cheng-Cheng; Huo, Hai-Feng
2017-12-01
A drinking model with immigration is constructed. When immigration includes problem drinkers, the model admits only a problem-drinking equilibrium. Without problem-drinking immigration, the model has two equilibria: a problem-drinking-free equilibrium and a problem-drinking equilibrium. By employing the method of Lyapunov functions, the stability of all equilibria is obtained. Numerical simulations are also provided to illustrate our analytical results. Our results show that the immigration of problem drinkers increases the difficulty of temperance work in the region.
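The paper's equations are not reproduced here; the hypothetical two-class model below (Python, with invented parameter values and dynamics) only illustrates the qualitative claim: when a fraction q of immigrants are already problem drinkers, the inflow term keeps the problem-drinking class positive, so no drinking-free equilibrium exists.

```python
# Hypothetical drinking model with immigration (illustrative only, not the
# paper's model). S = non-problem drinkers, P = problem drinkers, q = fraction
# of immigrants who arrive as problem drinkers.
import numpy as np
from scipy.integrate import solve_ivp

Lam, beta, mu, gamma = 10.0, 0.001, 0.1, 0.05   # immigration, transmission, exit, recovery

def model(t, y, q):
    S, P = y
    dS = (1 - q) * Lam - beta * S * P - mu * S + gamma * P
    dP = q * Lam + beta * S * P - (mu + gamma) * P
    return [dS, dP]

for q in (0.0, 0.1):
    sol = solve_ivp(model, (0, 400), [90.0, 1.0], args=(q,), rtol=1e-8)
    print(f"q = {q:.1f}: long-run problem drinkers ~ {sol.y[1, -1]:.1f}")
```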
Yin, Jingjing; Nakas, Christos T; Tian, Lili; Reiser, Benjamin
2018-03-01
This article explores both existing and new methods for the construction of confidence intervals for differences of indices of diagnostic accuracy of competing pairs of biomarkers in three-class classification problems and fills the methodological gaps for both parametric and non-parametric approaches in the receiver operating characteristic surface framework. The most widely used such indices are the volume under the receiver operating characteristic surface and the generalized Youden index. We describe implementation of all methods and offer insight regarding the appropriateness of their use through a large simulation study with different distributional and sample size scenarios. Methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative study, where assessment of cognitive function naturally results in a three-class classification setting.
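One of the indices discussed above, the volume under the ROC surface (VUS), has a simple empirical estimator: the proportion of triples, one marker value from each class, that are correctly ordered. The Python sketch below uses simulated marker values and does not implement the article's confidence-interval constructions.

```python
# Minimal empirical VUS estimate for a three-class biomarker (ties ignored;
# the article's parametric and non-parametric CI methods are not implemented).
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, 40)    # class 1 (lowest expected marker values)
x2 = rng.normal(1.0, 1.0, 50)    # class 2
x3 = rng.normal(2.0, 1.0, 45)    # class 3 (highest expected marker values)

# VUS = P(X1 < X2 < X3), estimated by brute-force enumeration of triples.
ordered = (x1[:, None, None] < x2[None, :, None]) & \
          (x2[None, :, None] < x3[None, None, :])
vus = ordered.mean()
print(f"empirical VUS = {vus:.3f}  (1/6 corresponds to a useless marker)")
```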
Design Techniques for Uniform-DFT, Linear Phase Filter Banks
NASA Technical Reports Server (NTRS)
Sun, Honglin; DeLeon, Phillip
1999-01-01
Uniform-DFT filter banks are an important class of filter banks and their theory is well known. One notable characteristic is their very efficient implementation when using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have a linear phase, are also an important class of filter banks and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks and achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and in addition prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
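A direct-form sketch of a uniform-DFT analysis bank is given below (Python; the prototype filter and signal are placeholders, and neither the least-squares design nor the oversampled perfect-reconstruction construction of the paper is implemented). Every channel filter is a complex modulation of one prototype lowpass, which is what makes the bank "uniform-DFT"; a polyphase/FFT implementation would compute the same outputs more efficiently.

```python
# Direct (non-polyphase) uniform-DFT analysis bank: channel k uses the
# prototype lowpass modulated by exp(j*2*pi*k*n/M), followed by decimation by M.
import numpy as np
from scipy.signal import firwin

M = 8                                      # number of channels
proto = firwin(numtaps=64, cutoff=1.0 / M) # prototype lowpass (normalized cutoff)
n = np.arange(len(proto))

x = np.random.default_rng(3).standard_normal(1024)   # test signal

channels = []
for k in range(M):
    h_k = proto * np.exp(2j * np.pi * k * n / M)      # DFT-modulated channel filter
    y_k = np.convolve(x, h_k)[::M]                    # filter, then decimate by M
    channels.append(y_k)

print("channel output lengths:", {k: len(c) for k, c in enumerate(channels)})
```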
NASA Astrophysics Data System (ADS)
Szpunar, Joanna; McSheehy, Shona; Połeć, Kasia; Vacchina, Véronique; Mounicou, Sandra; Rodriguez, Isaac; Łobiński, Ryszard
2000-07-01
Recent advances in the coupling of gas chromatography (GC) and high performance liquid chromatography (HPLC) with inductively coupled plasma mass spectrometry (ICP MS) and their role in trace element speciation analysis of environmental materials are presented. The discussion is illustrated with three research examples concerning the following topics: (i) development and coupling of multicapillary microcolumn GC with ICP MS for speciation of organotin in sediment and biological tissue samples; (ii) speciation of arsenic in marine algae by size-exclusion-anion-exchange HPLC-ICP MS; and (iii) speciation of cadmium in plant cell cultures by size-exclusion HPLC-ICP MS. Particular attention is paid to the problem of signal identification in ICP MS chromatograms; the potential of electrospray MS/MS for this purpose is highlighted.
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
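The sketch below (Python) illustrates the flavour of the approach with a residual bootstrap around a simple k-nearest-neighbour smoother; the regression model, resampling scheme, and data are assumptions for illustration, not the paper's exact procedure or its anomaly-detection pipeline.

```python
# Residual-bootstrap sketch of a pointwise prediction interval around a kNN
# regression (illustrative only).
import numpy as np

rng = np.random.default_rng(4)
n, k, B = 300, 15, 500
x = np.sort(rng.uniform(0, 10, n))
y = np.sin(x) + 0.3 * rng.standard_normal(n)

def knn_predict(x_train, y_train, x_query, k):
    d = np.abs(x_train[None, :] - np.atleast_1d(x_query)[:, None])
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

fit = knn_predict(x, y, x, k)
resid = y - fit                        # in-sample residuals

x0 = np.array([5.0])                   # query point
boot_preds = np.empty(B)
for b in range(B):
    # Resample residuals, rebuild a pseudo-response, refit, and add fresh noise.
    y_star = fit + rng.choice(resid, size=n, replace=True)
    pred_star = knn_predict(x, y_star, x0, k)[0]
    boot_preds[b] = pred_star + rng.choice(resid)

lo, hi = np.percentile(boot_preds, [2.5, 97.5])
print(f"95% bootstrap prediction interval at x=5: [{lo:.2f}, {hi:.2f}]")
```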
Marine natural products: a new wave of drugs?
Montaser, Rana; Luesch, Hendrik
2011-01-01
The largely unexplored marine world that presumably harbors the most biodiversity may be the vastest resource to discover novel ‘validated’ structures with novel modes of action that cover biologically relevant chemical space. Several challenges, including the supply problem and target identification, need to be met for successful drug development of these often complex molecules; however, approaches are available to overcome the hurdles. Advances in technologies such as sampling strategies, nanoscale NMR for structure determination, total chemical synthesis, fermentation and biotechnology are all crucial to the success of marine natural products as drug leads. We illustrate the high degree of innovation in the field of marine natural products, which in our view will lead to a new wave of drugs that flow into the market and pharmacies in the future. PMID:21882941
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2012-12-01
In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.
A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.
NASA Astrophysics Data System (ADS)
Lark, R. M.; Lapworth, D. J.
2014-05-01
It is well-known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one which meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations, but also logistics and cost. This requires effective communication between statisticians, soil scientists and data users/sponsors such as managers, regulators or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids, one instantiation is offset from the second by half the grid spacing along the rows and along the columns. If a sample grid is coarse relative to the important scales of variation in the target property then the consistency of predictions from two instantiations is expected to be small, and can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation, it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and has the advantage of having an upper bound (1.0) which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.
NASA Astrophysics Data System (ADS)
Antunes, Pedro R. S.; Ferreira, Rui A. C.
2017-07-01
In this work we study boundary value problems associated to a nonlinear fractional ordinary differential equation involving left and right Caputo derivatives. We discuss the regularity of the solutions of such problems and, in particular, give precise necessary conditions so that the solutions are C^1([0, 1]). Taking into account our analytical results, we address the numerical solution of those problems by the augmented -RBF method. Several examples illustrate the good performance of the numerical method.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Schematic of Sample Analysis at Mars SAM Instrument
2011-01-18
This schematic illustration of the Sample Analysis at Mars (SAM) instrument on NASA's Mars Science Laboratory shows major components of the microwave-oven-size instrument, which will examine samples of Martian rocks, soil and atmosphere.
Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints
NASA Astrophysics Data System (ADS)
Kmet', Tibor; Kmet'ová, Mária
2009-09-01
A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.
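The sketch below (Python) illustrates only the transcription step mentioned above — turning an optimal control problem into a nonlinear program — on a toy problem assumed here for illustration (minimize the integral of u^2 with x' = u, x(0) = 0, x(1) = 1 and |u| <= 2); it does not implement the adaptive critic neural network. The analytical optimum is the constant control u = 1.

```python
# Direct transcription of a toy optimal control problem into an NLP
# (illustrative; not the paper's adaptive-critic synthesis).
import numpy as np
from scipy.optimize import minimize

N = 50
dt = 1.0 / N

def objective(u):
    return np.sum(u**2) * dt                      # discretized integral of u^2

def terminal_state(u):
    return np.sum(u) * dt - 1.0                   # Euler state: x(1) = sum(u)*dt must equal 1

res = minimize(
    objective,
    x0=np.zeros(N),
    method="SLSQP",
    bounds=[(-2.0, 2.0)] * N,                     # control constraint
    constraints=[{"type": "eq", "fun": terminal_state}],
)
print("max |u - 1| over the horizon:", np.abs(res.x - 1.0).max().round(4))
```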
Atkins, Lou; Francis, Jill; Islam, Rafat; O'Connor, Denise; Patey, Andrea; Ivers, Noah; Foy, Robbie; Duncan, Eilidh M; Colquhoun, Heather; Grimshaw, Jeremy M; Lawton, Rebecca; Michie, Susan
2017-06-21
Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in the advancing use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by authors to be representative of specific applications of the TDF are included in this guide. We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data, e.g. from interviews, questionnaires and observation and methods for designing interventions based on TDF-based problem analysis. We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice and a method for progressing from theory-based investigation to intervention.
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero, there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the WPC or BPC can increase the required number of clusters. By illustrating how the parameters required for sample size calculations arise from the CRXO design and by providing guidance on both how to choose values for the parameters and perform the sample size calculations, the implementation of the sample size formulae for CRXO trials may improve.
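The sketch below (Python) shows how such a calculation can be set up using one commonly cited design effect for the two-period, cross-sectional CRXO design relative to individual randomisation, DE = 1 + (m - 1)*WPC - m*BPC. This formula and the WPC/BPC values are assumptions made for illustration; the tutorial's own formulae and the ANZICS-APD parameter estimates are not reproduced here.

```python
# Illustrative CRXO sample-size sketch using an assumed design effect
# DE = 1 + (m - 1)*WPC - m*BPC (two-period, cross-sectional design).
import math
from scipy.stats import norm

def n_individual(delta, sd, alpha=0.05, power=0.9):
    # Total subjects for a two-arm comparison of means under individual randomisation.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 4 * (z * sd / delta) ** 2

def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.9):
    de = 1 + (m - 1) * wpc - m * bpc          # assumed CRXO design effect
    n_total = n_individual(delta, sd, alpha, power) * de
    return math.ceil(n_total / (2 * m))       # each cluster contributes m subjects per period

for wpc, bpc in [(0.05, 0.045), (0.05, 0.02), (0.05, 0.0)]:
    k = crxo_clusters(delta=0.25, sd=1.0, m=30, wpc=wpc, bpc=bpc)
    print(f"WPC={wpc:.3f}, BPC={bpc:.3f} -> clusters required: {k}")
```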
ERIC Educational Resources Information Center
Swanson, H. Lee
1982-01-01
An information processing approach to the assessment of learning disabled students' intellectual performance is presented. The model is based on the assumption that intelligent behavior is comprised of a variety of problem- solving strategies. An account of child problem solving is explained and illustrated with a "thinking aloud" protocol.…
Response Mode Effects on Computer Based Problem Solving. Report Series 1979.
ERIC Educational Resources Information Center
Brown, Bobby R.; Sustik, Joan M.
This response mode study attempts to determine whether different response modes are helpful or not in facilitating the thought process in a given problem solving situation. The Luchins Water Jar Test (WJT) used in this study illustrates the phenomenon "Einstellung" (mechanization of response) because it does not require any specialized content…
Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory
ERIC Educational Resources Information Center
Andraos, John
2015-01-01
A series of pedagogical problem set exercises are posed that illustrate the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…
Generating Alternative Engineering Designs by Integrating Desktop VR with Genetic Algorithms
ERIC Educational Resources Information Center
Chandramouli, Magesh; Bertoline, Gary; Connolly, Patrick
2009-01-01
This study proposes an innovative solution to the problem of multiobjective engineering design optimization by integrating desktop VR with genetic computing. Although, this study considers the case of construction design as an example to illustrate the framework, this method can very much be extended to other engineering design problems as well.…
USA by Numbers: A Statistical Portrait of the United States.
ERIC Educational Resources Information Center
Weber, Susan, Ed.
This book presents demographic data about a variety of U.S. public policies, social problems, and environmental issues. The issues and problems that the statistics illustrate (such as overflowing garbage dumps, homelessness, child poverty, and smog and water pollution) are connected with, and the consequences of, the expanding U.S. population. The…
Female Gambling, Trauma, and the Not Good Enough Self: An Interpretative Phenomenological Analysis
ERIC Educational Resources Information Center
Nixon, Gary; Evans, Kyler; Kalischuk, Ruth Grant; Solowoniuk, Jason; McCallum, Karim; Hagen, Brad
2013-01-01
A gap exists within current literature regarding understanding the role that trauma may play in the initiation, development, and progression of female problem and pathological gambling. The purpose of this study is to further illustrate the relationship between trauma and the development of problem and pathological gambling by investigating the lived…
Interactive Drawing Therapy and Chinese Migrants with Gambling Problems
ERIC Educational Resources Information Center
Zhang, Wenli; Everts, Hans
2012-01-01
Ethnic Chinese migrants in a country like New Zealand face a range of well-documented challenges. A proportion of such migrants find that recreational gambling turns into a pernicious gambling problem. This issue is addressed through illustrated case studies of Interactive Drawing Therapy, a drawing-based modality of therapy that facilitates…
Psychological Problems in Mental Deficiency.
ERIC Educational Resources Information Center
Sarason, Seymour B.; Doris, John
A statement of goals and the rationale for organization precede a historical discussion of mental deficiency and society. The problem of labels like IQ and brain injured and the consequences of the diagnostic process are illustrated by case histories; case studies are also used to examine the criteria used to decide who is retarded and to discuss…
The Role of Problem Specification Workshops in Extension: An IPM Example.
ERIC Educational Resources Information Center
Foster, John; And Others
1995-01-01
Of three extension models--top-down technology transfer, farmers-first approach, and participatory research--the last extends elements of the other two into a more comprehensive analysis of a problem and specification of solution strategies. An Australian integrated pest management (IPM) example illustrates how structured workshops are a useful…
Analyzing the Responses of 7-8 Year Olds When Solving Partitioning Problems
ERIC Educational Resources Information Center
Badillo, Edelmira; Font, Vicenç; Edo, Mequè
2015-01-01
We analyze the mathematical solutions of 7- to 8-year-old pupils while individually solving an arithmetic problem. The analysis was based on the "configuration of objects," an instrument derived from the onto-semiotic approach to mathematical knowledge. Results are illustrated through a number of cases. From the analysis of mathematical…
La Verde, R; Manai, A
1983-08-25
The IATA regulations on the scheduled-flight transportation of sick passengers are presented and the problems involved are illustrated. Among other recommendations, it is suggested that collaboration with the patients' doctors in filling out the MEDIF form is essential for the sick passenger's comfort and safety.
ERIC Educational Resources Information Center
Allinjawi, Arwa A.; Al-Nuaim, Hana A.; Krause, Paul
2014-01-01
Students often face difficulties while learning object-oriented programming (OOP) concepts. Many papers have presented various assessment methods for diagnosing learning problems to improve the teaching of programming in computer science (CS) higher education. The research presented in this article illustrates that although max-min composition is…
The Problem of Pseudoscience in Science Education and Implications of Constructivist Pedagogy
ERIC Educational Resources Information Center
Mugaloglu, Ebru Z.
2014-01-01
The intrusion of pseudoscience into science classrooms is a problem in science education today. This paper discusses the implications of constructivist pedagogy, which relies on the notions of viability and inter-subjectivity, in a context favourable to the acceptance of pseudoscience. Examples from written statements illustrate how prospective…
ERIC Educational Resources Information Center
Marsh, Herbert W.; Dowson, Martin; Pietsch, James; Walker, Richard
2004-01-01
Multicollinearity is a well-known general problem, but it also seriously threatens valid interpretations in structural equation models. Illustrating this problem, J. Pietsch, R. Walker, and E. Chapman (2003) found paths leading to achievement were apparently much larger for self-efficacy (.55) than self-concept (-.05), suggesting--erroneously, as…
Beliefs of Women Faculty About Discrimination.
ERIC Educational Resources Information Center
Ingram, Anne
This document presents the results of a survey of women faculty at the University of Maryland, College Park, in which they were to (1) indicate the most critical problems facing women at the university; and (2) provide facts that illustrate a specific pattern of discrimination or specific problems observed or experienced at the university. The…
Canada's Indians. Minority Rights Group Report No. 21.
ERIC Educational Resources Information Center
Wilson, James
An attempt to describe some of the long-term social and historical causes of Canada Native problems, this document illustrates the way in which numerous problems have combined to create the present situation and outlines some of the Canada Natives' current aspirations for the future. The introduction addresses the initial and resultant impact of…
Improving Problem-Solving Performance of Students with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Yakubova, Gulnoza; Taber-Doughty, Teresa
2017-01-01
The effectiveness of a multicomponent intervention to improve the problem-solving performance of students with autism spectrum disorders (ASD) during vocational tasks was examined. A multiple-probe across-students design was used to illustrate the effectiveness of point-of-view video modeling paired with practice sessions and a self-operated cue…
Interactive Problem-Solving Geography: An Introduction in Chinese Classrooms to Locational Analysis
ERIC Educational Resources Information Center
Wai, Nu Nu; Giles, John H.
2006-01-01
Reform in geography education, as reflected in "Geography for Life: National Geography Standards" (1994) for the U.S.A., favors a constructivist approach to learning. This study examines the acceptance of this approach among students in two upper secondary schools in China. A lesson was developed to illustrate interactive problem solving…
Children's Rights: Countering the Opposition.
ERIC Educational Resources Information Center
Starr, R. H., Jr.; And Others
The problems of enacting and implementing child advocacy laws at State and Federal levels are presented along with two cases which illustrate these problems and point to the advocacy role that psychologists can perform. The first case deals with the use of corporal punishment in family day care homes in Michigan. In 1974, rules against corporal…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalf, R.L.
The proliferation of xenobiotic chemicals in the global environment poses living problems for each of us aboard "spaceship earth." Seven case studies are presented that illustrate the magnitude of the problem that can result from waiting to identify toxic hazards until there have been decades of "human guinea pig" exposure. 25 refs., 5 tabs.
Marketing Hardwoods to Furniture Producers
Steven A. Sinclair; Robert J. Bush; Philip A. Araman
1989-01-01
This paper discusses some of the many problems in developing marketing programs for small wood products manufacturers. It examines the problems of using price as a dominant means for getting and attracting customers. The marketing of hardwood lumber to furniture producers is then used as an example. Data from 36 furniture lumber buyers is presented to illustrate...
Operant Variability: Some Random Thoughts
ERIC Educational Resources Information Center
Marr, M. Jackson
2012-01-01
Barba's (2012) paper is a serious and thoughtful analysis of a vexing problem in behavior analysis: Just what should count as an operant class and how do people know? The slippery issue of a "generalized operant" or functional response class illustrates one aspect of this problem, and "variation" or "novelty" as an operant appears to fall into…
ERIC Educational Resources Information Center
de Villiers, Michael
2017-01-01
This paper discusses an interesting, classic problem that provides a nice classroom investigation for dynamic geometry, and which can easily be explained (proved) with transformation geometry. The deductive explanation (proof) provides insight into why it is true, leading to an immediate generalization, thus illustrating the discovery function of…
ERIC Educational Resources Information Center
Bolt, Mike
2010-01-01
Many optimization problems can be solved without resorting to calculus. This article develops a new variational method for optimization that relies on inequalities. The method is illustrated by four examples, the last of which provides a completely algebraic solution to the problem of minimizing the time it takes a dog to retrieve a thrown ball,…
When Religion Becomes Deviance: Introducing Religion in Deviance and Social Problems Courses.
ERIC Educational Resources Information Center
Perrin, Robin D.
2001-01-01
Focuses on teaching new religious movements (NRMs), or cults, within deviance or social problems courses. Provides information about the conceptions and theories of deviance. Includes three illustrations of how to use deviant religions in a deviance course and offers insights into teaching religion as deviance. Includes references. (CMK)
The Lily-White University Presses.
ERIC Educational Resources Information Center
Shin, Annys
1996-01-01
Argues that the university presses are immune from racial change and discusses the problem of using location as an argument for not being able to lure blacks into university publishing. Howard University Press is used to illustrate the problem of budget cutting and the ability to boost black recruitment efforts or establish a united black press…
Water Pollution (Causes, Mechanisms, Solution).
ERIC Educational Resources Information Center
Strandberg, Carl
Written for the general public, this book illustrates the causes, status, problem areas, and prediction and control of water pollution. Water pollution is one of the most pressing issues of our time and the author communicates the complexities of this problem to the reader in common language. The purpose of the introductory chapter is to show what…
The Role of Learning in Social Development: Illustrations from Neglected Children
ERIC Educational Resources Information Center
Wismer Fries, Alison B.; Pollak, Seth D.
2017-01-01
Children who experience early caregiving neglect are very likely to have problems developing and maintaining relationships and regulating their social behavior. One of the earliest manifestations of this problem is reflected in indiscriminate behavior, a phenomenon where young children do not show normative wariness of strangers or use familiar…
Common Issues in World Regions: A Video Series.
ERIC Educational Resources Information Center
Becker, James
1992-01-01
Describes a video series that offers information on the impact of current world problems on family life. Explains that the programs illustrate the five geographic themes by comparing the experiences of young people in North America and Western Europe. Suggests that the series helps teenagers see how the same problems affect families in different…
ERIC Educational Resources Information Center
Lippert, Renate
The application of recent advances in the understanding of problem solving to the classroom is reviewed. Current research findings are described, and the instructional validity of these findings is illustrated by a research study of an instructional strategy called novice knowledge engineering. How various instructional strategies serve as…
NASA Technical Reports Server (NTRS)
Halldane, J. F.
1972-01-01
Technology is considered as a culture for changing a physical world and technology assessment questions the inherent cultural capability to modify power and material in support of living organisms. A comprehensive goal-parameter-synthesis-criterion specification is presented as a basis for a rational assessment of technology. The thesis queries the purpose of the assessed problems, the factors considered, the relationships between factors, and the values assigned those factors to accomplish the appropriate purpose. Stationary and sequential evaluation of enviro-organismic systems are delegated to the responsible personalities involved in design; from promoter/designer through contractor to occupant. Discussion includes design goals derived from organismic factors, definitions of human responses which establish viable criteria and relevant correlation models, linking stimulus parameters, and parallel problem-discipline centered design organization. A consistent concept of impedance, as a degradation in the performance of a specified parameter, is introduced to overcome the arbitrary inoperative connotations of terms like noise, discomfort, and glare. Applications of the evaluative specification are illustrated through design problems related to auditory impedance and sound distribution.
Rapid, Selective Heavy Metal Removal from Water by a Metal-Organic Framework/Polydopamine Composite.
Sun, Daniel T; Peng, Li; Reeder, Washington S; Moosavi, Seyed Mohamad; Tiana, Davide; Britt, David K; Oveisi, Emad; Queen, Wendy L
2018-03-28
Drinking water contamination with heavy metals, particularly lead, is a persistent problem worldwide with grave public health consequences. Existing purification methods often cannot address this problem quickly and economically. Here we report a cheap, water stable metal-organic framework/polymer composite, Fe-BTC/PDA, that exhibits rapid, selective removal of large quantities of heavy metals, such as Pb2+ and Hg2+, from real world water samples. In this work, Fe-BTC is treated with dopamine, which undergoes a spontaneous polymerization to polydopamine (PDA) within its pores via the Fe3+ open metal sites. The PDA, pinned on the internal MOF surface, gains extrinsic porosity, resulting in a composite that binds up to 1634 mg of Hg2+ and 394 mg of Pb2+ per gram of composite and removes more than 99.8% of these ions from a 1 ppm solution, yielding drinkable levels in seconds. Further, the composite properties are well-maintained in river and seawater samples spiked with only trace amounts of lead, illustrating unprecedented selectivity. Remarkably, no significant uptake of competing metal ions is observed even when interferents, such as Na+, are present at concentrations up to 14 000 times that of Pb2+. The material is further shown to be resistant to fouling when tested in high concentrations of common organic interferents, like humic acid, and is fully regenerable over many cycles.
Models for inference in dynamic metacommunity systems
Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias
2010-01-01
A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.
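The sketch below (Python) shows the simplest building block of this modeling framework — a single-species, single-season occupancy likelihood in which true occurrence (psi) is separated from imperfect detection (p); the multispecies, dynamic metacommunity model of the paper, with extinction and colonization probabilities, is not implemented, and all data are simulated.

```python
# Minimal single-season occupancy MLE (building block only; not the paper's
# multispecies dynamic model). Detection error p is estimated jointly with
# the occupancy probability psi from repeat-visit detection histories.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
S, J = 200, 4                                   # sites, repeat visits
psi_true, p_true = 0.6, 0.4
z = rng.binomial(1, psi_true, S)                # latent occupancy state
y = rng.binomial(1, p_true * z[:, None], (S, J))  # detections only where occupied
det = y.sum(axis=1)

def neg_loglik(theta):
    psi, p = expit(theta)                       # logit-scale parameters
    occ_lik = psi * p**det * (1 - p) ** (J - det)
    lik = occ_lik + (1 - psi) * (det == 0)      # never-detected sites may be unoccupied
    return -np.sum(np.log(lik))

fit = minimize(neg_loglik, x0=[0.0, 0.0])
psi_hat, p_hat = expit(fit.x)
print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")
```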
Evaluation of natural language processing systems: Issues and approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guida, G.; Mauri, G.
This paper encompasses two main topics: a broad and general analysis of the issue of performance evaluation of NLP systems and a report on a specific approach developed by the authors and tested on a sample test case. More precisely, it first presents a brief survey of the major works in the area of NLP systems evaluation. Then, after introducing the notion of the life cycle of an NLP system, it focuses on the concept of performance evaluation and analyzes the scope and the major problems of the investigation. The tools generally used within computer science to assess the quality of a software system are briefly reviewed, and their applicability to the task of evaluation of NLP systems is discussed. Particular attention is devoted to the concepts of efficiency, correctness, reliability, and adequacy, and how all of them basically fail in capturing the peculiar features of performance evaluation of an NLP system is discussed. Two main approaches to performance evaluation are later introduced, namely, black-box and model-based, and their most important characteristics are presented. Finally, a specific model for performance evaluation proposed by the authors is illustrated, and the results of an experiment with a sample application are reported. The paper concludes with a discussion of research perspectives, open problems, and the importance of performance evaluation to industrial applications.
Model Comparison of Nonlinear Structural Equation Models with Fixed Covariates.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2003-01-01
Proposed a new nonlinear structural equation model with fixed covariates to deal with some complicated substantive theory and developed a Bayesian path sampling procedure for model comparison. Illustrated the approach with an illustrative example using data from an international study. (SLD)
Evaluating the efficiency of environmental monitoring programs
Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina
2014-01-01
Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
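As a rough illustration of the bootstrapping step described above, the sketch below resamples hypothetical plot-level biomass values to show how confidence-interval width shrinks with the number of plots; the data, plot counts, and target quantity are invented, not the Hubbard Brook inventory values.

```python
# Minimal sketch, assuming made-up per-plot biomass data (not the paper's inventory).
import numpy as np

rng = np.random.default_rng(0)
plot_biomass = rng.lognormal(mean=5.0, sigma=0.4, size=400)  # hypothetical per-plot biomass (Mg/ha)

def bootstrap_ci_width(sample, n_plots, n_boot=2000, alpha=0.05):
    """Width of the bootstrap percentile CI for the mean at a given sampling intensity."""
    means = np.empty(n_boot)
    for b in range(n_boot):
        means[b] = rng.choice(sample, size=n_plots, replace=True).mean()
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return hi - lo

for n in (25, 50, 100, 200, 400):
    print(n, round(bootstrap_ci_width(plot_biomass, n), 2))
```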
Statistical Significance for Hierarchical Clustering
Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.
2017-01-01
Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
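A Monte Carlo test of the kind described can be sketched as follows. This is a generic stand-in (single-Gaussian null, Ward linkage, a simple two-cluster index), not the authors' sequential procedure or their family-wise error control.

```python
# Hedged sketch of a Monte Carlo significance test for the top split of a
# hierarchical clustering; the null model is a single multivariate Gaussian.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def two_cluster_index(X, labels):
    """Within-cluster sum of squares over total sum of squares (smaller = tighter split)."""
    total = ((X - X.mean(axis=0)) ** 2).sum()
    within = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                 for k in np.unique(labels))
    return within / total

def top_split_pvalue(X, n_sim=500, seed=1):
    rng = np.random.default_rng(seed)
    labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
    observed = two_cluster_index(X, labels)
    mean, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    null = np.empty(n_sim)
    for s in range(n_sim):
        Xs = rng.multivariate_normal(mean, cov, size=X.shape[0])
        ls = fcluster(linkage(Xs, method="ward"), t=2, criterion="maxclust")
        null[s] = two_cluster_index(Xs, ls)
    return (np.sum(null <= observed) + 1) / (n_sim + 1)

# Toy usage: two well-separated Gaussian clusters should give a small p-value.
X = np.vstack([np.random.default_rng(2).normal(0, 1, (30, 5)),
               np.random.default_rng(3).normal(3, 1, (30, 5))])
print(top_split_pvalue(X, n_sim=200))
```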
CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues
NASA Technical Reports Server (NTRS)
Palmieri, J. V.; Rathjen, K. A.
1978-01-01
The method of solution is a hybrid analytical-numerical technique which utilizes eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even with the best of conductors and the finest of mesh sizes, which can provide a factor of five reduction in machine time compared to conventional explicit finite difference methods when structures with small time constants are analyzed over long time periods. This code will find utility in analyzing hypersonic missile and aircraft structures, which fall naturally into this class. The code is a completely general one in that problems involving any geometry, boundary conditions, and materials can be analyzed. This is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for the CAVE3 code. Input and output formats are presented and explained. Sample problems are included which illustrate the usage of the code as well as establish the validity and accuracy of the method.
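The eigenvalue/eigenvector idea behind this class of methods can be illustrated on a linear thermal network dT/dt = A*T + b, where an exact modal step permits arbitrarily large time increments. The network below is a toy example with made-up coefficients, not a CAVE3 input or the program's actual algorithm.

```python
# Hedged sketch of an eigen-decomposition time step for a linear thermal network.
import numpy as np

def eigen_step(A, b, T0, dt):
    """Advance T by dt exactly for dT/dt = A*T + b (assumes A is invertible)."""
    w, V = np.linalg.eig(A)              # eigenvalues w, eigenvectors V
    T_ss = -np.linalg.solve(A, b)        # steady-state temperatures
    c = np.linalg.solve(V, T0 - T_ss)    # modal amplitudes of the initial transient
    return (V @ (np.exp(w * dt) * c)).real + T_ss

# Toy 3-node conduction network (illustrative conductances folded into A, inputs into b).
A = np.array([[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -1.5]])
b = np.array([1.0, 0.0, 0.5])
T = np.zeros(3)
for _ in range(5):
    T = eigen_step(A, b, T, dt=10.0)     # large step, still stable
print(T)
```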
NASA Technical Reports Server (NTRS)
Head, J. W.; Belton, M.; Greeley, R.; Pieters, C.; Mcewen, A.; Neukum, G.; Mccord, T.
1993-01-01
The Lunar Scout Missions (payload: x-ray fluorescence spectrometer, high-resolution stereocamera, neutron spectrometer, gamma-ray spectrometer, imaging spectrometer, gravity experiment) will provide a global data set for the chemistry, mineralogy, geology, topography, and gravity of the Moon. These data will in turn provide an important baseline for the further scientific exploration of the Moon by all-purpose landers and micro-rovers, and sample return missions from sites shown to be of primary interest from the global orbital data. These data would clearly provide the basis for intelligent selection of sites for the establishment of lunar bases for long-term scientific and resource exploration and engineering studies. The two recent Galileo encounters with the Moon (December 1990 and December 1992) illustrate how modern technology can be applied to significant lunar problems. We emphasize the regional results of the Galileo SSI to show the promise of geologic unit definition and characterization as an example of what can be done with the global coverage to be obtained by the Lunar Scout Missions.
Quantum Weak Values and Logic: An Uneasy Couple
NASA Astrophysics Data System (ADS)
Svensson, Bengt E. Y.
2017-03-01
Quantum mechanical weak values of projection operators have been used to answer which-way questions, e. g. to trace which arms in a multiple Mach-Zehnder setup a particle may have traversed from a given initial to a prescribed final state. I show that this procedure might lead to logical inconsistencies in the sense that different methods used to answer composite questions, like "Has the particle traversed the way X or the way Y?", may result in different answers depending on which methods are used to find the answer. I illustrate the problem by considering some examples: the "quantum pigeonhole" framework of Aharonov et al., the three-box problem, and Hardy's paradox. To prepare the ground for my main conclusion on the incompatibility in certain cases of weak values and logic, I study the corresponding situation for strong/projective measurements. In this case, no logical inconsistencies occur provided one is always careful in specifying exactly to which ensemble or sample space one refers. My results cast doubts on the utility of quantum weak values in treating cases like the examples mentioned.
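For reference, the weak value of an observable A for pre-selected state |psi> and post-selected state |phi> is A_w = <phi|A|psi>/<phi|psi>. The snippet below evaluates it for path projectors in the standard three-box configuration mentioned above; the small numerical setup is an illustration, not code from the paper.

```python
# Hedged sketch: weak values of projectors in the standard three-box example.
import numpy as np

def weak_value(op, pre, post):
    """A_w = <post|A|pre> / <post|pre> for pre-selected |pre> and post-selected |post>."""
    return (post.conj() @ op @ pre) / (post.conj() @ pre)

pre = np.array([1, 1, 1], dtype=complex) / np.sqrt(3)    # (|A> + |B> + |C>)/sqrt(3)
post = np.array([1, 1, -1], dtype=complex) / np.sqrt(3)  # (|A> + |B> - |C>)/sqrt(3)
P_A = np.diag([1, 0, 0]).astype(complex)                 # projector onto box A
P_C = np.diag([0, 0, 1]).astype(complex)                 # projector onto box C
print(weak_value(P_A, pre, post))  # 1: particle "certainly" in box A
print(weak_value(P_C, pre, post))  # -1: an anomalous, negative weak value
```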
NASA Astrophysics Data System (ADS)
Donner, Reik
2013-04-01
Time series analysis offers a rich toolbox for deciphering information from high-resolution geological and geomorphological archives and linking the thus obtained results to distinct climate and environmental processes. Specifically, on various time-scales from inter-annual to multi-millennial, underlying driving forces exhibit more or less periodic oscillations, the detection of which in proxy records often allows linking them to specific mechanisms by which the corresponding drivers may have affected the archive under study. A persistent problem in geomorphology is that available records do not present a clear signal of the variability of environmental conditions, but exhibit considerable uncertainties of both the measured proxy variables and the associated age model. Particularly, time-scale uncertainty as well as the heterogeneity of sampling in the time domain are sources of severe conceptual problems that may lead to false conclusions about the presence or absence of oscillatory patterns and their mutual phasing in different archives. In my presentation, I will discuss how one can cope with non-uniformly sampled proxy records to detect and quantify oscillatory patterns in one or more data sets. For this purpose, correlation analysis is reformulated using kernel estimates which are found superior to classical estimators based on interpolation or Fourier transform techniques. In order to characterize non-stationary or noisy periodicities and their relative phasing between different records, an extension of continuous wavelet transform is utilized. The performance of both methods is illustrated for different case studies. An extension to explicitly considering time-scale uncertainties by means of Bayesian techniques is briefly outlined.
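The kernel-estimate idea can be sketched as a Gaussian-kernel-weighted cross-correlation over all sample pairs of two irregularly sampled series; this is a generic formulation for illustration, not the presenter's exact estimator, and the series below are synthetic.

```python
# Hedged sketch of a Gaussian-kernel cross-correlation for irregularly sampled series.
import numpy as np

def kernel_xcorr(tx, x, ty, y, lag, h):
    """Kernel-weighted correlation of x(tx) and y(ty) at a given lag, bandwidth h."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None] - lag          # all pairwise time differences minus lag
    w = np.exp(-0.5 * (dt / h) ** 2)              # Gaussian kernel weights
    return np.sum(w * np.outer(x, y)) / np.sum(w)

rng = np.random.default_rng(3)
tx = np.sort(rng.uniform(0, 100, 150))            # irregular sampling times
ty = np.sort(rng.uniform(0, 100, 120))
x = np.sin(2 * np.pi * tx / 20) + 0.3 * rng.normal(size=tx.size)
y = np.sin(2 * np.pi * (ty - 5) / 20) + 0.3 * rng.normal(size=ty.size)  # y lags x by 5
print(kernel_xcorr(tx, x, ty, y, lag=5.0, h=1.0))  # near its maximum at the true lag
```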
State space approach to mixed boundary value problems.
NASA Technical Reports Server (NTRS)
Chen, C. F.; Chen, M. M.
1973-01-01
A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.
Blanco-Claraco, José Luis; López-Martínez, Javier; Torres-Moreno, José Luis; Giménez-Fernández, Antonio
2015-01-01
Most experimental fields of science and engineering require the use of data acquisition systems (DAQ), devices in charge of sampling and converting electrical signals into digital data and, typically, performing all of the required signal preconditioning. Since commercial DAQ systems are normally focused on specific types of sensors and actuators, systems engineers may need to employ mutually-incompatible hardware from different manufacturers in applications demanding heterogeneous inputs and outputs, such as small-signal analog inputs, differential quadrature rotatory encoders or variable current outputs. A common undesirable side effect of heterogeneous DAQ hardware is the lack of an accurate synchronization between samples captured by each device. To solve such a problem with low-cost hardware, we present a novel modular DAQ architecture comprising a base board and a set of interchangeable modules. Our main design goal is the ability to sample all sources at predictable, fixed sampling frequencies, with a reduced synchronization mismatch (<1 μs) between heterogeneous signal sources. We present experiments in the field of mechanical engineering, illustrating vibration spectrum analyses from piezoelectric accelerometers and, as a novelty in these kinds of experiments, the spectrum of quadrature encoder signals. Part of the design and software will be publicly released online. PMID:26516865
Robust reliable sampled-data control for switched systems with application to flight control
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.
2016-11-01
This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm bounded uncertainties. The main aim of this paper is to obtain the reliable robust sampled-data control design which involves random time delay with an appropriate gain control matrix for achieving the robust exponential stabilisation for uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying which obeys certain mutually uncorrelated Bernoulli distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average-dwell time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in derivation of stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of reliable robust sampled-data control in terms of solution to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random, as is the case with RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
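For orientation, the plain RANSAC skeleton that the harmony-search-guided sampling replaces looks like the sketch below, shown for a simple 2-D line model rather than a full homography; point set, tolerance, and iteration count are illustrative.

```python
# Hedged sketch of the basic RANSAC loop (minimal sample -> model -> consensus set).
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)   # minimal random sample
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue
        a = (y2 - y1) / (x2 - x1)                               # candidate model y = a*x + b
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int(np.sum(residuals < inlier_tol))
        if inliers > best_inliers:                              # keep the best consensus set
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

pts = np.column_stack([np.linspace(0, 1, 100), 2 * np.linspace(0, 1, 100) + 1])
pts[::10] += np.random.default_rng(1).normal(0, 0.5, size=(10, 2))  # gross outliers
print(ransac_line(pts))   # recovers roughly (a, b) = (2, 1) despite the outliers
```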
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
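A minimal sketch of the regularization-plus-cross-validation step described above, using L1-penalized logistic regression on a made-up item pool and binary outcome (not the authors' personality data or mortality endpoint):

```python
# Hedged sketch: cross-validated L1 logistic regression as a criterion-keying tool.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n_people, n_items = 1000, 200
raw = rng.integers(1, 6, size=(n_people, n_items)).astype(float)     # Likert-type items
outcome = (raw[:, :10].sum(axis=1) * 0.4 + rng.normal(0, 2, n_people)) > 12  # 10 items matter
items = (raw - raw.mean(axis=0)) / raw.std(axis=0)                    # standardize predictors

# Cross-validation picks the penalty that minimizes expected prediction error,
# rather than maximizing within-sample fit.
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="saga", max_iter=5000)
model.fit(items, outcome)
selected = np.flatnonzero(model.coef_[0] != 0)
print(f"{selected.size} items retained for the criterion-keyed scale")
```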
Illustration of Launching Samples Home from Mars
NASA Technical Reports Server (NTRS)
2005-01-01
One crucial step in a Mars sample return mission would be to launch the collected sample away from the surface of Mars. This artist's concept depicts a Mars ascent vehicle for starting a sample of Mars rocks on their trip to Earth.
NASA Astrophysics Data System (ADS)
Mahaney, William C.; Hancock, Ronald G. V.; Somelar, Peeter; Milan, Alison
2016-10-01
Various chemical extractions of Fe and Al from bulk soil samples, including Na-pyrophosphate (Fep, Alp), acid ammonium oxalate (Feo, Alo), and Na-dithionite (Fed, Ald), have been used over the last half century to distinguish soil ages over varying time frames from 10² to 10⁶ years and even as far into antiquity as the Oligocene (30 × 10⁶ years). Problems with mineral/chemical uniformity of sediments, free drainage of open system profiles, and variable climate over long time frames have produced problems and uncertainties as to just what each extraction removes from the bulk material analyzed. Some problems have been resolved by the work of Parfitt and Childs (1988); but some persist, especially with respect to the solubility of some extractant forms and the actual composition of others, particularly Alp, Alo, and Ald. A recent test of soils and paleosols in a fluvial chronosequence in southern Ontario illustrates the soil-paleosol evolutionary time trend over a period of ~11 ky, essentially post-Iroquois time in the Ontario basin (Jackson et al., 2000). This work highlights the importance of isolated, free draining weathering systems, mineral uniformity, and new relationships between secondary forms of Fed and Ald, the latter previously considered of little importance in age relationship quests.
Engineering Risk Assessment of Space Thruster Challenge Problem
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Mattenberger, Christopher J.; Go, Susie
2014-01-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center utilizes dynamic models with linked physics-of-failure analyses to produce quantitative risk assessments of space exploration missions. This paper applies the ERA approach to the baseline and extended versions of the PSAM Space Thruster Challenge Problem, which investigates mission risk for a deep space ion propulsion system with time-varying thruster requirements and operations schedules. The dynamic mission is modeled using a combination of discrete and continuous-time reliability elements within the commercially available GoldSim software. Loss-of-mission (LOM) probability results are generated via Monte Carlo sampling performed by the integrated model. Model convergence studies are presented to illustrate the sensitivity of integrated LOM results to the number of Monte Carlo trials. A deterministic risk model was also built for the three baseline and extended missions using the Ames Reliability Tool (ART), and results are compared to the simulation results to evaluate the relative importance of mission dynamics. The ART model did a reasonable job of matching the simulation models for the baseline case, while a hybrid approach using offline dynamic models was required for the extended missions. This study highlighted that state-of-the-art techniques can adequately adapt to a range of dynamic problems.
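The Monte Carlo sampling step can be illustrated with a toy loss-of-mission estimate for a redundant thruster set; the failure rate, mission length, and redundancy level below are invented, not the PSAM challenge problem values, and the standard error shows the kind of convergence check discussed.

```python
# Hedged sketch: Monte Carlo loss-of-mission estimate for a toy thruster set.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
mission_hours = 20_000.0
thruster_mttf = 15_000.0        # assumed mean time to failure per thruster (hours)
n_thrusters, n_required = 4, 3  # mission fails if fewer than 3 thrusters survive

failure_times = rng.exponential(thruster_mttf, size=(n_trials, n_thrusters))
surviving = (failure_times > mission_hours).sum(axis=1)
lom = np.mean(surviving < n_required)
se = np.sqrt(lom * (1 - lom) / n_trials)    # Monte Carlo standard error (convergence check)
print(f"LOM probability ~ {lom:.4f} +/- {se:.4f}")
```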
3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.
2017-04-01
Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have been also proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. Then, the problem is iteratively solved via local optimization techniques from an initial model computed using some simplifications or being estimated using prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution that is adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. Therefore, PSO seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling while optimizing approach. That way important geological questions can be answered probabilistically in order to perform risk assessment in the decisions that are made.
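A bare-bones particle swarm optimizer of the kind referred to is sketched below on a generic misfit function; in the inversion itself the misfit would be the gravity residual over prism depths, and the final swarm provides the samples used for uncertainty appraisal. All parameter values here are illustrative.

```python
# Hedged sketch of a basic PSO loop; not the authors' tuned family of optimizers.
import numpy as np

def pso(misfit, bounds, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))          # particle positions (e.g., depths)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest   # the final swarm also samples the region of good models

# Toy misfit: recover a "true" depth vector from a quadratic residual.
true_depth = np.array([1.2, 2.5, 0.8, 1.9])
misfit = lambda d: np.sum((d - true_depth) ** 2)
best, swarm = pso(misfit, (np.zeros(4), np.full(4, 5.0)))
print(best)
```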
Teaching basic science to optimize transfer.
Norman, Geoff
2009-09-01
Basic science teachers share the concern that much of what they teach is soon forgotten. Although some evidence suggests that relatively little basic science is forgotten, it may not appear so, as students commonly have difficulty using these concepts to solve or explain clinical problems. This phenomenon, using a concept learned in one context to solve a problem in a different context, is known to cognitive psychologists as transfer. The psychology literature shows that transfer is difficult; typically, even though students may know a concept, fewer than 30% will be able to use it to solve new problems. However, a number of strategies to improve transfer can be adopted at the time of initial teaching of the concept, in the use of exemplars to illustrate the concept, and in practice with additional problems. In this article, we critically review the psychology literature to identify factors that enhance or impede transfer and practical strategies to improve it. There are a number of strategies available to teachers to facilitate transfer. These include active problem-solving at the time of initial learning, embedding the concept in a problem context, using everyday analogies, and, critically, practice with multiple dissimilar problems. Further, mixed practice, where problems illustrating different concepts are mixed together, and distributed practice, spread out over time, can result in significant and large gains. Transfer is difficult, but specific teaching strategies can enhance this skill by factors of two or three.
NASA Astrophysics Data System (ADS)
Mukherjee, Sathi; Basu, Kajla
2010-10-01
In this paper we develop a methodology to solve the multiple attribute assignment problems where the attributes are considered to be Intuitionistic Fuzzy Sets (IFS). We apply the concept of similarity measures of IFS to solve the Intuitionistic Fuzzy Multi-Attribute Assignment Problem (IFMAAP). The weights of the attributes are determined from expert opinion. An illustrative example is solved to verify the developed approach and to demonstrate its practicality.
Solving fully fuzzy transportation problem using pentagonal fuzzy numbers
NASA Astrophysics Data System (ADS)
Maheswari, P. Uma; Ganesan, K.
2018-04-01
In this paper, we propose a simple approach for the solution of fuzzy transportation problem under fuzzy environment in which the transportation costs, supplies at sources and demands at destinations are represented by pentagonal fuzzy numbers. The fuzzy transportation problem is solved without converting to its equivalent crisp form using a robust ranking technique and a new fuzzy arithmetic on pentagonal fuzzy numbers. To illustrate the proposed approach a numerical example is provided.
Simultaneous and semi-alternating projection algorithms for solving split equality problems.
Dong, Qiao-Li; Jiang, Dan
2018-01-01
In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
NASA Astrophysics Data System (ADS)
Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong
2018-05-01
In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all of the noise from the experiment is present.
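A gradient-type iteration of the general kind described can be sketched as a proximal/projected gradient step for a least-squares data term with a nonnegativity-plus-sparsity penalty; this is a generic stand-in, not the authors' penalty functional or their semismooth Newton scheme, and the problem data are synthetic.

```python
# Hedged sketch: projected/thresholded gradient method for nonnegative, sparse recovery.
import numpy as np

def nonneg_sparse_gradient_method(A, y, lam, step, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*sum(x) subject to x >= 0."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # gradient of the data-fidelity term
        x = np.maximum(x - step * (grad + lam), 0.0)   # sparsity-promoting projected step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, 2.0, 0.5]                 # nonnegative, sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=80)
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1/L step size from the spectral norm
x_hat = nonneg_sparse_gradient_method(A, y, lam=0.1, step=step)
print(np.flatnonzero(x_hat > 1e-3))                    # recovers the true support
```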
Rostami, Sima; Torbaghan, Shams Shariat; Dabiri, Shahriar; Babaei, Zahra; Mohammadi, Mohammad Ali; Sharbatkhori, Mitra; Harandi, Majid Fasihi
2015-01-01
Cystic echinococcosis (CE), caused by the larval stage of Echinococcus granulosus, presents an important medical and veterinary problem globally, including that in Iran. Different genotypes of E. granulosus have been reported from human isolates worldwide. This study identifies the genotype of the parasite responsible for human hydatidosis in three provinces of Iran using formalin-fixed paraffin-embedded tissue samples. In this study, 200 formalin-fixed paraffin-embedded tissue samples from human CE cases were collected from Alborz, Tehran, and Kerman provinces. Polymerase chain reaction amplification and sequencing of the partial mitochondrial cytochrome c oxidase subunit 1 gene were performed for genetic characterization of the samples. Phylogenetic analysis of the isolates from this study and reference sequences of different genotypes was done using a maximum likelihood method. In total, 54.4%, 0.8%, 1%, and 40.8% of the samples were identified as the G1, G2, G3, and G6 genotypes, respectively. The findings of the current study confirm the G1 genotype (sheep strain) to be the most prevalent genotype involved in human CE cases in Iran and indicates the high prevalence of the G6 genotype with a high infectivity for humans. Furthermore, this study illustrates the first documented human CE case in Iran infected with the G2 genotype. PMID:25535316
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Summary Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method works well for practical situations and is more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
Validating a biometric authentication system: sample size requirements.
Dass, Sarat C; Zhu, Yongfang; Jain, Anil K
2006-12-01
Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images, etc.) are increasingly gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects, and for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done in 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the required number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analysis that address these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system that is applied on samples collected from a small population.
Social Influences in Sequential Decision Making
Schöbel, Markus; Rieskamp, Jörg; Huber, Rafael
2016-01-01
People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority's decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions. PMID:26784448
Permutation tests for goodness-of-fit testing of mathematical models to experimental data.
Fişek, M Hamit; Barlas, Zeynep
2013-03-01
This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
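The "Average Absolute Deviation" test can be sketched as a resampling procedure in which data are simulated from the model's predicted proportions and the observed AAD is compared with the resulting null distribution; the predictions, sample sizes, and observations below are invented for illustration and are not the study's data.

```python
# Hedged sketch of an average-absolute-deviation resampling test of model fit.
import numpy as np

rng = np.random.default_rng(0)
predicted = np.array([0.65, 0.72, 0.80, 0.88])       # model-predicted proportions per condition
n_subjects = np.array([30, 30, 30, 30])
observed = np.array([0.60, 0.77, 0.83, 0.93])        # observed proportions

aad_obs = np.mean(np.abs(observed - predicted))

n_sim = 10_000
sims = rng.binomial(n_subjects, predicted, size=(n_sim, predicted.size)) / n_subjects
aad_null = np.mean(np.abs(sims - predicted), axis=1) # AAD distribution under the model
p_value = np.mean(aad_null >= aad_obs)               # large AAD = poor fit
print(aad_obs, p_value)
```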
Strategies to address management challenges in larger intensive care units.
Matlakala, M C; Bezuidenhout, M C; Botha, A D H
2015-10-01
To illustrate the need for and suggest strategies that will enhance sustainable management of a large intensive care unit (ICU). The challenges faced by intensive care nursing in South Africa are well documented. However, there appear to be no strategies available to assist nurses to manage large ICUs or for ICU managers to deal with problems as they arise. Data sources to illustrate the need for strategies were challenges described by ICU managers in the management of large ICUs. A purposive sample of managers was included in individual interviews during compilation of evidence regarding the challenges experienced in the management of large ICUs. The challenges were presented at the Critical Care Society of Southern Africa Congress held on 28 August to 2 September 2012 in Sun City North-West province, South Africa. Five strategies are suggested for the challenges identified: divide the units into sections; develop a highly skilled and effective nursing workforce to ensure delivery of quality nursing care; create a culture to retain an effective ICU nursing team; manage assets; and determine the needs of ICU nurses. ICUs need measures to drive the desired strategies into actions to continuously improve the management of the unit. Future research should be aimed at investigating the effectiveness of the strategies identified. This research highlights issues relating to large ICUs and the strategies will assist ICU managers to deal with problems related to large unit sizes, shortage of trained ICU nurses, use of agency nurses, shortage of equipment and supplies and stressors in the ICU. The article will make a contribution to the body of nursing literature on management of ICUs. © 2014 John Wiley & Sons Ltd.
The American Indian High School Dropout: The Magnitude of the Problem.
ERIC Educational Resources Information Center
Selinger, Alphonse D.
The magnitude of the dropout problem among Indians was illustrated by a study which followed students registered in grade 8 as of November 1962 through June 1967. Statistics were gathered by area, state, type of school, tribal group, and majority-minority position of Indian students in the 6-state area of Oregon, Washington, Idaho, Montana, South…
The Solution of Large Time-Dependent Problems Using Reduced Coordinates.
1987-06-01
numerical integration schemes for dynamic problems, the algorithm known as Newmark's Method. The behavior of the Newmark scheme, as well as the basic...The horizontal displacements at the mid-height and the bottom of the building are shown in figure 4.13. The solution history illustrated is for a
Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms
ERIC Educational Resources Information Center
Anderson, John R.
2012-01-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…
Female circumcision: obstetric issues.
Baker, C A; Gilson, G J; Vill, M D; Curet, L B
1993-12-01
Female circumcision is a problem unfamiliar to most Western obstetrician-gynecologists. We present a case illustrative of the unique management problems posed by these patients during labor. A method of releasing the anterior vulvar scar tissue to allow vaginal delivery is described. Sensitivity and a nonjudgmental approach as to what is culturally appropriate care for these women are of paramount importance.
Hydraulics in civil engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, A.; Morfett, J.
1986-01-01
This undergraduate text combines fundamental theoretical concepts with design applications to provide coverage of hydraulics in civil engineering. The authors have incorporated the results of research in many areas and have taken advantage of the availability of microcomputers in the presentation and solution of problems. In addition, the text embodies a set of worked examples to illustrate the theoretical concepts, and typical problems.
Water Pollution, A Scientists' Institute for Public Information Workbook.
ERIC Educational Resources Information Center
Berg, George G.
Analyzed are the reasons why present mechanisms for the control of water purity are inadequate. The control of waterborne epidemics is discussed to illustrate a problem which has been solved, then degradation of the environment is presented as an unsolved problem. Case histories are given of pollution and attempts at control in rivers, lakes,…
Parsing Protocols Using Problem Solving Grammars. AI Memo 385.
ERIC Educational Resources Information Center
Miller, Mark L.; Goldstein, Ira P.
A theory of the planning and debugging of computer programs is formalized as a context free grammar, which is used to reveal the constituent structure of problem solving episodes by parsing protocols in which programs are written, tested, and debugged. This is illustrated by the detailed analysis of an actual session with a beginning student…
ERIC Educational Resources Information Center
Metz, Dale Evan; And Others
1980-01-01
The paper presents four research projects in process in the Communication Sciences Laboratory at the National Technical Institute for the Deaf. These projects illustrate four broad areas of research on the relationships between higher order information processing systems and the communication skills and problems exhibited by deaf people. (Author)
On convergence of solutions to variational-hemivariational inequalities
NASA Astrophysics Data System (ADS)
Zeng, Biao; Liu, Zhenhai; Migórski, Stanisław
2018-06-01
In this paper we investigate the convergence behavior of the solutions to the time-dependent variational-hemivariational inequalities with respect to the data. First, we give an existence and uniqueness result for the problem, and then, deliver a continuous dependence result when all the data are subjected to perturbations. A semipermeability problem is given to illustrate our main results.
Backus-Gilbert inversion of travel time data
NASA Technical Reports Server (NTRS)
Johnson, L. E.
1972-01-01
Application of the Backus-Gilbert theory for geophysical inverse problems to the seismic body wave travel-time problem is described. In particular, it is shown how to generate earth models that fit travel-time data to within one standard error and having generated such models how to describe their degree of uniqueness. An example is given to illustrate the process.
Rotordynamic Instability Problems in High-Performance Turbomachinery
NASA Technical Reports Server (NTRS)
1984-01-01
Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery were discussed. Resolution of problems in the experimental validation of the forces that influence rotordynamics was emphasized. The programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data to design new machines with enhanced stability characteristics or to upgrade existing machines are presented.
Section 504 and Student Health Problems: The Pivotal Position of the School Nurse
ERIC Educational Resources Information Center
Zirkel, Perry A.; Granthom, Margarita Fernan; Lovato, Leanna
2012-01-01
News reports illustrate controversies between parents and schools in response to student health problems. Today's school nurse is in a pivotal position for the avoidance and resolution of disputes not only by increasing awareness of student health conditions but also by having a working knowledge of legal developments under Section 504 and its…
ERIC Educational Resources Information Center
Chazan, Daniel; Sela, Hagit; Herbst, Patricio
2012-01-01
We illustrate a method, which is modeled on "breaching experiments," for studying tacit norms that govern classroom interaction around particular mathematical content. Specifically, this study explores norms that govern teachers' expectations for the doing of word problems in school algebra. Teacher study groups discussed representations of…
ERIC Educational Resources Information Center
Niaz, Mansoor
2001-01-01
Illustrates how a novel problem of chemical equilibrium based on a closely related sequence of items can facilitate students' conceptual understanding. Students were presented with a chemical reaction in equilibrium to which a reactant was added as an external effect. Three studies were conducted to assess alternative conceptions. (Author/SAH)
ERIC Educational Resources Information Center
Shearer, Christopher A.
This booklet provides examples of how students have been helped through the provision of school-based health care. The stories, submitted by principals, school nurses, nurse practitioners, doctors, health center directors, and students, illustrate the pressing health problems faced by students today. The problems addressed in these personal…
Time-Shared Control Systems: Promises and Problems
ERIC Educational Resources Information Center
King, John F.
1975-01-01
As an illustration of an attempt at dealing with the problem of time-sharing small computers for laboratory control resulting from conflicts between real-time responsiveness needs and the matter of priorities and administration of the system as a whole, a description is provided of a time-shared system that is used to control and service multiple…
Numerical solutions of a control problem governed by functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Thrift, P. R.; Burns, J. A.; Cliff, E. M.
1978-01-01
A numerical procedure is proposed for solving optimal control problems governed by linear retarded functional differential equations. The procedure is based on the idea of 'averaging approximations', due to Banks and Burns (1975). For illustration, numerical results generated on an IBM 370/158 computer, which demonstrate the rapid convergence of the method are presented.
How To Evaluate Lessons from the Past with Illustrations from the Case of Pearl Harbor.
ERIC Educational Resources Information Center
Durfee, Mary
Policy makers use past experience and history to think about current and potential problems and to explain policies and problems to others. Decision makers may be overly influenced by significant personally-experienced events that loom so large in their eyes that details and related relevant information may pale in comparison. Deficiencies in…
A Cross-Curricular, Problem-Based Project to Promote Understanding of Poverty in Urban Communities
ERIC Educational Resources Information Center
Gardner, Daniel S.; Tuchman, Ellen; Hawkins, Robert
2010-01-01
This article describes the use of problem-based learning to teach students about the scope and consequences of urban poverty through an innovative cross-curricular project. We illustrate the process, goals, and tasks of the Community Assessment Project, which incorporates community-level assessment, collection and analysis of public data, and…
A note on the modelling of circular smallholder migration.
Bigsten, A
1988-01-01
"It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Structural testing for static failure, flutter and other scary things
NASA Technical Reports Server (NTRS)
Ricketts, R. H.
1983-01-01
Ground test and flight test methods are described that may be used to highlight potential structural problems that occur on aircraft. Primary interest is focused on light-weight general aviation airplanes. The structural problems described include static strength failure, aileron reversal, static divergence, and flutter. An example of each of the problems is discussed to illustrate how the data acquired during the tests may be used to predict the occurrence of the structural problem. While some rules of thumb for the prediction of structural problems are given, the report is not intended to be used explicitly as a structural analysis handbook.
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
The principle of superposition and its application in ground-water hydraulics
Reilly, T.E.; Franke, O.L.; Bennett, G.D.
1984-01-01
The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important application in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)
Color doppler in clinical cardiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duncan, W.J.
1987-01-01
A presentation of color doppler, which enables physicians to pinpoint problems and develop effective treatment. State-of-the-art illustrations and layout, with color images and explanatory text are included.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Wen; Fung, Richard Y. K.
2014-06-01
This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.
DataView: a computational visualisation system for multidisciplinary design and analysis
NASA Astrophysics Data System (ADS)
Wang, Chengen
2016-01-01
Rapidly processing raw data and effectively extracting underlining information from huge volumes of multivariate data become essential to all decision-making processes in sectors like finance, government, medical care, climate analysis, industries, science, etc. Remarkably, visualisation is recognised as a fundamental technology that props up human comprehension, cognition and utilisation of burgeoning amounts of heterogeneous data. This paper presents a computational visualisation system, named DataView, which has been developed for graphically displaying and capturing outcomes of multiphysics problem-solvers widely used in engineering fields. The DataView is functionally composed of techniques for table/diagram representation, and graphical illustration of scalar, vector and tensor fields. The field visualisation techniques are implemented on the basis of a range of linear and non-linear meshes, which flexibly adapts to disparate data representation schemas adopted by a variety of disciplinary problem-solvers. The visualisation system has been successfully applied to a number of engineering problems, of which some illustrations are presented to demonstrate effectiveness of the visualisation techniques.
Multiobjective optimization approach: thermal food processing.
Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R
2009-01-01
The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
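A minimal sketch of the aggregating-function idea follows: two objectives are combined into a single weighted objective and minimized by plain random search, with the weight swept to trace an approximate Pareto front. The test functions and the simple (non-adaptive) search are assumptions for illustration, not the authors' adaptive algorithm or the thermal processing model.

```python
import random

# Scalarize two objectives with a weight and minimize by plain random search.
def f1(x): return x * x
def f2(x): return (x - 2.0) ** 2

def random_search(weight, iters=10000, lo=-5.0, hi=5.0):
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        x = random.uniform(lo, hi)
        val = weight * f1(x) + (1.0 - weight) * f2(x)   # aggregating function
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Sweeping the weight traces an approximation of the Pareto front.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = random_search(w)
    print(f"w={w:.2f}  x={x:.3f}  f1={f1(x):.3f}  f2={f2(x):.3f}")
```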
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
NASA Astrophysics Data System (ADS)
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
The Sharma-Parthasarathy stochastic two-body problem
NASA Astrophysics Data System (ADS)
Cresson, J.; Pierret, F.; Puig, B.
2015-03-01
We study the Sharma-Parthasarathy stochastic two-body problem introduced by Sharma and Parthasarathy in ["Dynamics of a stochastically perturbed two-body problem," Proc. R. Soc. A 463, 979-1003 (2007)]. In particular, we focus on the preservation of some fundamental features of the classical two-body problem, such as the Hamiltonian structure and first integrals, in the stochastic case. Numerical simulations are performed which illustrate the dynamical behaviour of the osculating elements such as the semi-major axis, the eccentricity, and the pericenter. We also derive a stochastic version of Gauss's equations in the planar case.
Distributed Control with Collective Intelligence
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Wheeler, Kevin R.; Tumer, Kagan
1998-01-01
We consider systems of interacting reinforcement learning (RL) algorithms that do not work at cross purposes, in that their collective behavior maximizes a global utility function. We call such systems COllective INtelligences (COINs). We present the theory of designing COINs and then present experiments validating that theory in the context of two distributed control problems: we show that COINs perform near-optimally in a difficult variant of Arthur's bar problem [Arthur] (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance in the master-slave problem.
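The COIN utility-shaping machinery is not reproduced here, but Arthur's bar problem itself is easy to sketch with independent epsilon-greedy learners. All parameters below (number of agents, capacity, rewards) are hypothetical and serve only to show the setting in which COINs are evaluated.

```python
import random

# Toy sketch of Arthur's bar problem with independent epsilon-greedy learners.
n_agents, capacity, rounds = 60, 40, 2000
q = [[0.0, 0.0] for _ in range(n_agents)]     # action values: 0 = stay home, 1 = go
alpha, eps = 0.1, 0.1

for _ in range(rounds):
    actions = [random.randrange(2) if random.random() < eps
               else int(q[i][1] > q[i][0]) for i in range(n_agents)]
    attendance = sum(actions)
    for i, a in enumerate(actions):
        # an agent who goes enjoys the bar only if it is not overcrowded
        reward = (1.0 if attendance <= capacity else -1.0) if a == 1 else 0.0
        q[i][a] += alpha * (reward - q[i][a])

greedy_attendance = sum(int(qi[1] > qi[0]) for qi in q)
print("final greedy attendance:", greedy_attendance, "capacity:", capacity)
```

Selfish learners of this kind tend to oscillate around the capacity; the COIN approach modifies each agent's utility so that individually greedy learning aligns with the global objective.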
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and on crack detection of gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
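A minimal sketch of the block-wise SSIM ingredient is shown below, assuming a per-block (global) SSIM formula and toy images; the parallel execution, HDF5 tiling, and spectral analysis of the reported pipeline are not included.

```python
import numpy as np

# Block-wise SSIM between a reference image and a "deformed" image (toy data).
def ssim(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def blockwise_ssim(ref, img, block=64):
    h, w = ref.shape
    scores = []
    for i in range(0, h - block + 1, block):      # blocks fit in memory and are
        for j in range(0, w - block + 1, block):  # independent, so they could be
            scores.append(ssim(ref[i:i+block, j:j+block],   # processed in parallel
                               img[i:i+block, j:j+block]))
    return np.array(scores)

ref = np.random.rand(256, 256) * 255
img = ref.copy()
img[64:192, 64:192] += 100.0    # synthetic local change standing in for deformation
scores = blockwise_ssim(ref, img)
print("blocks flagged as degraded (low SSIM):", int((scores < 0.9).sum()), "of", scores.size)
```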
Control of Finite-State, Finite Memory Stochastic Systems
NASA Technical Reports Server (NTRS)
Sandell, Nils R.
1974-01-01
A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite-state, finite-memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem is investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control-theoretic techniques to information processing problems.
Semantic Annotation of Complex Text Structures in Problem Reports
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Throop, David R.; Fleming, Land D.
2011-01-01
Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.
A Sampling of Success Stories of Federal Investment in University Research.
ERIC Educational Resources Information Center
2003
This volume illustrates how Canadian federal investment is advancing key measures of success in building research capacity at Ontario universities. Program descriptions illustrate federal investment in university-based research at 18 institutions of higher education in Ontario. A look at these programs shows that government-funded research is…
Roh, Kum-Hwan; Kim, Ji Yeoun; Shin, Yong Hyun
2017-01-01
In this paper, we investigate the optimal consumption and portfolio selection problem with negative wealth constraints for an economic agent who has a quadratic utility function of consumption and receives a constant labor income. Due to the property of the quadratic utility function, we separate our problem into two cases and derive the closed-form solutions for each case. We also illustrate some numerical implications of the optimal consumption and portfolio selection.
Urological considerations in space medicine.
NASA Technical Reports Server (NTRS)
Cockett, A. T. K.; Adey, W. R.; Roberts, A. P.
1972-01-01
Urological problems encountered during the preparation phases of Biosatellite III, the flight of Bonny the space monkey, are detailed, along with the solution to each problem. The catheter system employed, the antibiotic coverage used, and the bacteria encountered in the urine of the five animals are described. Urinary calcium levels in three ground-based animals are illustrated, and testicular alterations encountered in all animals are mentioned. It is concluded that space flights lasting beyond nine days may present serious problems of a urological nature.
NASA Astrophysics Data System (ADS)
Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng
2018-06-01
We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved using the Weierstrass existence theorem, a fixed point theorem, and the Filippov implicit function lemma. A numerical approximation algorithm is then introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by the fractional differential equation is illustrated, and the results verify the validity of the algorithm.
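For reference, the Caputo fractional derivative of order alpha in (0, 1) that drives these problems is conventionally defined as

\[
{}^{C}\!D^{\alpha}_{0^{+}}x(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{x'(s)}{(t-s)^{\alpha}}\, ds, \qquad 0 < \alpha < 1 .
\]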
Impact of future fuel properties on aircraft engines and fuel systems
NASA Technical Reports Server (NTRS)
Rudey, R. A.; Grobman, J. S.
1978-01-01
This paper describes and discusses the propulsion-system problems that will most likely be encountered if the specifications of hydrocarbon-based jet fuels must undergo significant changes in the future and, correspondingly, the advances in technology that will be required to minimize the adverse impact of these problems. Several investigations conducted are summarized. Illustrations are used to describe the relative effects of selected fuel properties on the behavior of propulsion-system components and fuel systems. The selected fuel properties are those that are most likely to be relaxed in future fuel specifications. Illustrations are also used to describe technological advances that may be needed in the future. Finally, the technological areas needing the most attention are described, and programs that are under way to address these needs are briefly discussed.
The Role of Medication in Supporting Emotional Wellbeing in Young People with Long-Term Needs
Gray, Nicola J.; Wood, Damian M.
2017-01-01
Young people frequently use and access prescribed medications for a range of health problems. Medications aimed at treating both common health problems and long-term physical and mental health needs in adolescence can have a significant effect on a young person’s emotional well-being. We use a series of case studies to illustrate the challenges for healthcare professionals supporting young people with medication use. The studies illustrate the efficacy and limitations of medication in improving emotional well-being by alleviating illness and distress, and how this efficacy must be balanced against both the adverse effects and the burden of treatment. There are specific challenges for medication management during adolescence, including issues of adherence/concordance, facilitating autonomy and participation in decision making, and promoting independence. PMID:29099742
Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S
2016-03-01
Modern high-throughput biotechnologies such as microarray are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only a limited number of samples are assayed, hence the classical 'large p, small n' problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework, in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods, including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the 'large p, small n' problem. Our method is implemented in the R package IPBT, which is freely available from https://github.com/benliemory/IPBT. CONTACT: yuzhu@purdue.edu; zhaohui.qin@emory.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
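A hedged sketch of the general idea, not the IPBT implementation, is given below: a large historical collection supplies an informative prior on per-gene variance, and variance estimates from a small new study are shrunk toward it before testing. The prior weight and simulated data are assumptions for illustration.

```python
import numpy as np

# Use historical data to build an informative prior on per-gene variance, then
# shrink the noisy variance estimates from a small new experiment toward it.
rng = np.random.default_rng(0)
historical = rng.normal(size=(1000, 200))   # 1000 genes x 200 historical samples
new_data = rng.normal(size=(1000, 4))       # same genes, only 4 new samples

prior_var = historical.var(axis=1, ddof=1)  # informative prior centre per gene
sample_var = new_data.var(axis=1, ddof=1)   # noisy estimate from the small study

n_new, prior_weight = new_data.shape[1], 10     # hypothetical prior strength
shrunk_var = (prior_weight * prior_var + (n_new - 1) * sample_var) / (
    prior_weight + n_new - 1)

# Moderated test statistics would use shrunk_var instead of sample_var.
print("mean raw variance:", sample_var.mean(), " mean shrunk variance:", shrunk_var.mean())
```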
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
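In generic notation (not the authors' specific working models), the empirical likelihood estimator that combines a set of unbiased estimating functions g, with more equations than parameters, can be written as

\[
\hat{\theta} \;=\; \arg\max_{\theta}\; \max_{p_1,\dots,p_n} \prod_{i=1}^{n} p_i
\quad \text{subject to} \quad p_i \ge 0, \;\; \sum_{i=1}^{n} p_i = 1, \;\; \sum_{i=1}^{n} p_i\, g(X_i;\theta) = 0 ,
\]

where the over-identification (more equations than unknowns) is what allows the incomplete cases to contribute information and efficiency to be gained.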
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We find that, in differentially private FSM, the amount of required noise is proportional to the number of candidate sequences; if the number of unpromising candidate sequences can be effectively reduced, the utility-privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
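The candidate-pruning machinery of PFS2 is not reproduced here, but its basic privacy ingredient, reporting noisy support counts through the Laplace mechanism and comparing them with a threshold, can be sketched as follows. The candidate supports, epsilon, threshold, and unit sensitivity are illustrative assumptions.

```python
import numpy as np

# Laplace mechanism for support counts (sensitivity assumed to be 1 for illustration).
rng = np.random.default_rng(1)

def noisy_support(true_support, epsilon, sensitivity=1.0):
    # Laplace mechanism: noise scale = sensitivity / epsilon
    return true_support + rng.laplace(scale=sensitivity / epsilon)

candidates = {"AB": 120, "AC": 95, "BC": 30}   # hypothetical candidate supports
epsilon, threshold = 1.0, 100
frequent = {seq for seq, sup in candidates.items()
            if noisy_support(sup, epsilon) >= threshold}
print("privately reported frequent sequences:", frequent)
```

Because the noise scale grows with the number of candidates examined, pruning unpromising candidates before adding noise (the role of the sample databases in PFS2) directly improves accuracy at a fixed privacy budget.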
Refining lunar impact chronology through high spatial resolution 40Ar/39Ar dating of impact melts
Mercer, Cameron M.; Young, Kelsey E.; Weirich, John R.; Hodges, Kip V.; Jolliff, Bradley L.; Wartho, Jo-Anne; van Soest, Matthijs C.
2015-01-01
Quantitative constraints on the ages of melt-forming impact events on the Moon are based primarily on isotope geochronology of returned samples. However, interpreting the results of such studies can often be difficult because the provenance region of any sample returned from the lunar surface may have experienced multiple impact events over the course of billions of years of bombardment. We illustrate this problem with new laser microprobe 40Ar/39Ar data for two Apollo 17 impact melt breccias. Whereas one sample yields a straightforward result, indicating a single melt-forming event at ca. 3.83 Ga, data from the other sample document multiple impact melt–forming events between ca. 3.81 Ga and at least as young as ca. 3.27 Ga. Notably, published zircon U/Pb data indicate the existence of even older melt products in the same sample. The revelation of multiple impact events through 40Ar/39Ar geochronology is likely not to have been possible using standard incremental heating methods alone, demonstrating the complementarity of the laser microprobe technique. Evidence for 3.83 Ga to 3.81 Ga melt components in these samples reinforces emerging interpretations that Apollo 17 impact breccia samples include a significant component of ejecta from the Imbrium basin impact. Collectively, our results underscore the need to quantitatively resolve the ages of different melt generations from multiple samples to improve our current understanding of the lunar impact record, and to establish the absolute ages of important impact structures encountered during future exploration missions in the inner Solar System. PMID:26601128
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semisupervised learning and transfer learning methods. PMID:27479964
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
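A minimal sketch of the latent spatial random effect idea follows, assuming one locus with two alleles and an exponential spatial covariance; it is a simulation of the data model, not the authors' dimension-reduced estimation procedure.

```python
import numpy as np

# Simulate a latent spatial effect and spatially correlated allele observations.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(50, 2))             # sampling locations
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
cov = np.exp(-d / 2.0)                                # exponential spatial covariance
eta = rng.multivariate_normal(np.zeros(50), cov)      # latent spatial random effect
p = 1.0 / (1.0 + np.exp(-eta))                        # allele frequency surface
alleles = rng.binomial(1, p)                          # observed allele at each site
print("allele frequencies at first 5 sites:", p[:5].round(2))
print("observed alleles at first 5 sites:", alleles[:5])
```

Nearby locations share similar latent values, so their allele frequencies are correlated; spatial prediction and sampling design then amount to choosing locations that are most informative about this latent surface.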
NASA Technical Reports Server (NTRS)
Fleischer, G. E.; Likins, P. W.
1975-01-01
Three computer subroutines are reported that solve the vector-dyadic differential equations of rotational motion for systems that may be idealized as a collection of hinge-connected rigid bodies assembled in a tree topology, with an optional flexible appendage attached to each body. Deformations of the appendages are mathematically represented by modal coordinates and are assumed small. Within these constraints, the subroutines provide equation solutions for (1) the most general case of unrestricted hinge rotations, with appendage base bodies nominally rotating at a constant speed, (2) the case of unrestricted hinge rotations between rigid bodies, with the restriction that those rigid bodies carrying appendages are nominally nonspinning, and (3) the case of small hinge rotations and nominally nonrotating appendages. Sample problems and their solutions are presented to illustrate the utility of the computer programs.
Modelling of thick composites using a layerwise laminate theory
NASA Technical Reports Server (NTRS)
Robbins, D. H., Jr.; Reddy, J. N.
1993-01-01
The layerwise laminate theory of Reddy (1987) is used to develop a layerwise, two-dimensional, displacement-based, finite element model of laminated composite plates that assumes a piecewise continuous distribution of the transverse strains through the laminate thickness. The resulting layerwise finite element model is capable of computing interlaminar stresses and other localized effects with the same level of accuracy as a conventional 3D finite element model. Although the total number of degrees of freedom is comparable in both models, the layerwise model maintains a 2D-type data structure that provides several advantages over a conventional 3D finite element model, e.g. simplified input data, ease of mesh alteration, and faster element stiffness matrix formulation. Two sample problems are provided to illustrate the accuracy of the present model in computing interlaminar stresses for laminates in bending and extension.
Structural nested mean models for assessing time-varying effect moderation.
Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A
2010-03-01
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed two-stage regression estimator; the second is Robins' G-estimator. Results of a small simulation study are presented that begin to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between the two estimators. The methodology is illustrated using longitudinal data from a depression study.
NASA Technical Reports Server (NTRS)
Roller, N. E. G.
1977-01-01
The concept of using remote sensing to inventory wetlands and the related topics of proper inventory design and data collection are discussed. The material presented shows that aerial photography is the form of remote sensing from which the greatest amount of wetlands information can be derived. For extensive, general-purpose wetlands inventories, however, the use of LANDSAT data may be more cost-effective. Airborne multispectral scanners and radar are, in the main, too expensive to use unless the information that these sensors alone can gather remotely is absolutely required. Multistage sampling employing space and high-altitude remote sensing data in the initial stages appears to be an efficient survey strategy for gathering non-point-specific wetlands inventory data over large areas. The operational role of remote sensing in supplying inventory data for application to several typical wetlands management problems is illustrated by summary descriptions of past ERIM projects.
NASA Astrophysics Data System (ADS)
Gelß, Patrick; Matera, Sebastian; Schütte, Christof
2016-06-01
In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
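For a master equation small enough to store explicitly, the direct solution the authors advocate (as opposed to kinetic Monte Carlo sampling) amounts to integrating dP/dt = AP. The two-state adsorption/desorption toy generator below is an assumption for illustration; no tensor-train compression is involved at this size.

```python
import numpy as np

# Direct solution of a tiny Markovian master equation dP/dt = A P by explicit Euler.
k_ads, k_des = 2.0, 1.0
A = np.array([[-k_ads,  k_des],     # state 0: empty site
              [ k_ads, -k_des]])    # state 1: occupied site

P = np.array([1.0, 0.0])            # start with an empty surface
dt = 0.001
for _ in range(10000):
    P = P + dt * (A @ P)

print("stationary occupation probability:", P[1])   # approx k_ads / (k_ads + k_des)
```

The curse of dimensionality arises because the number of states grows exponentially with the number of lattice sites; the tensor train format compresses P and A so that this direct integration remains feasible for realistic system sizes.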
MosaicSolver: a tool for determining recombinants of viral genomes from pileup data
Wood, Graham R.; Ryabov, Eugene V.; Fannon, Jessica M.; Moore, Jonathan D.; Evans, David J.; Burroughs, Nigel
2014-01-01
Viral recombination is a key evolutionary mechanism, aiding escape from host immunity, contributing to changes in tropism and possibly assisting transmission across species barriers. The ability to determine whether recombination has occurred and to locate associated specific recombination junctions is thus of major importance in understanding emerging diseases and pathogenesis. This paper describes a method for determining recombinant mosaics (and their proportions) originating from two parent genomes, using high-throughput sequence data. The method involves posing the problem geometrically and using appropriately constrained quadratic programming. To illustrate the method, recombinants of the honeybee deformed wing virus and the Varroa destructor virus-1 are inferred from both siRNAs and from reads sampling the viral genome population (cDNA library); our results are confirmed experimentally. Matlab software (MosaicSolver) is available. PMID:25120266
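MosaicSolver itself is not reproduced here, but the flavour of the constrained quadratic programming step can be sketched with a toy one-dimensional version: choose mixture weights on the simplex that minimize the squared error between predicted and observed pileup frequencies. The parent patterns and observed frequencies below are invented for illustration.

```python
import numpy as np

# Toy constrained quadratic objective: estimate the proportion w of parent A
# (with 1 - w of parent B) that best explains observed pileup frequencies.
parents = np.array([[1.0, 0.0, 1.0, 1.0],     # parent A allele pattern
                    [0.0, 1.0, 1.0, 0.0]])    # parent B allele pattern
observed = np.array([0.7, 0.3, 1.0, 0.7])     # hypothetical observed frequencies

best_w, best_err = None, np.inf
for w in np.linspace(0, 1, 1001):             # brute-force the 1-D simplex
    pred = w * parents[0] + (1 - w) * parents[1]
    err = np.sum((pred - observed) ** 2)      # quadratic objective
    if err < best_err:
        best_w, best_err = w, err

print("estimated proportion of parent A:", best_w)
```

The real problem has many mosaic configurations rather than a single weight, which is why a proper quadratic programming solver with simplex constraints is needed instead of a grid search.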
Sieve estimation in a Markov illness-death process under dual censoring.
Boruvka, Audrey; Cook, Richard J
2016-04-01
Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Numerical methods in acoustics
NASA Astrophysics Data System (ADS)
Candel, S. M.
This paper presents a survey of some computational techniques applicable to acoustic wave problems. Recent advances in wave extrapolation methods, spectral methods and boundary integral methods are discussed and illustrated by specific calculations.
Cognitive Psychology and Mathematical Thinking.
ERIC Educational Resources Information Center
Greer, Brian
1981-01-01
This review illustrates aspects of cognitive psychology relevant to the understanding of how people think mathematically. Developments in memory research, artificial intelligence, visually mediated processes, and problem-solving research are discussed. (MP)