Sample records for functional form assumptions

  1. Regularity Results for a Class of Functionals with Non-Standard Growth

    NASA Astrophysics Data System (ADS)

    Acerbi, Emilio; Mingione, Giuseppe

    We consider integral functionals under non-standard growth assumptions that we call p(x) type. Under sharp assumptions on the continuous function p(x) > 1 we prove regularity of minimizers. Energies exhibiting this growth appear in several models from mathematical physics.
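
    For orientation, the model case usually associated with p(x)-type growth in this literature (stated here as standard background, not quoted from this record, whose own formulas are missing) is the power-growth functional:

```latex
\mathcal{F}(u) \;=\; \int_{\Omega} \lvert Du \rvert^{p(x)} \, dx,
\qquad p : \Omega \to (1,\infty) \ \text{continuous}.
```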

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
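
    As background for the closed-form likelihood mentioned above (a standard random-utility result, not text from this record): if the error terms are i.i.d. standard Gumbel, the multinomial logit choice probabilities take the familiar closed form

```latex
P_n(i) \;=\; \frac{\exp(V_{ni})}{\sum_{j \in C_n} \exp(V_{nj})},
```

    where V_nj is the systematic utility of alternative j for decision maker n and C_n is the choice set; the test described in the abstract asks whether the Gumbel assumption behind this form actually holds.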

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. On equations of motion of a nonlinear hydroelastic structure

    NASA Astrophysics Data System (ADS)

    Plotnikov, P. I.; Kuznetsov, I. V.

    2008-07-01

    Formal derivation of equations of a nonlinear hydroelastic structure, which is a volume of an ideal incompressible fluid covered by a shell, is proposed. The study is based on two assumptions. The first assumption implies that the energy stored in the shell is completely determined by the mean curvature and by the elementary area. In a three-dimensional case, the energy stored in the shell is chosen in the form of the Willmore functional. In a two-dimensional case, a more generic form of the functional can be considered. The second assumption implies that the equations of motion have a Hamiltonian structure and can be obtained from the Lagrangian variational principle. In a two-dimensional case, a condition for the hydroelastic structure is derived, which relates the external pressure and the curvature of the elastic shell.

  5. Assumption-aware tools and agency; an interrogation of the primary artifacts of the program evaluation and design profession in working with complex evaluands and complex contexts.

    PubMed

    Morrow, Nathan; Nkwake, Apollo M

    2016-12-01

    Like artisans in a professional guild, we evaluators create tools to suit our ever evolving practice. The tools we use as evaluators are the primary artifacts of our profession, reflect our practice and embody an amalgamation of paradigms and assumptions. With the increasing shifts in evaluation purposes from judging program worth to understanding how programs work, the evaluator's role is changing to that of facilitating stakeholders in a learning process. This involves clarifying purposes and choices, as well as unearthing critical assumptions. In such a role, evaluators become major tool-users and begin to innovate with small refinements or produce completely new tools to fit a specific challenge or context. We interrogate the form and function of 12 tools used by evaluators when working with complex evaluands and complex contexts. The form is described in terms of traditional qualitative techniques and particular characteristics of the elements, use and presentation of each tool. Then the function of each tool is analyzed with respect to articulating assumptions and affecting the agency of evaluators and stakeholders in complex contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Characterization of non-Gaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.

  7. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.
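
    A minimal sketch of the weighted-superposition idea described above, assuming a known library matrix D, Gaussian noise, and an exponential (sparsity-promoting) prior on non-negative weights; these particular choices are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Recorded spectrum y modeled as a superposition D @ w of known library sounds
# (columns of D) with unknown non-negative weights w. The prior on w is expressed
# here as an exponential (sparsity) penalty; MAP estimation by projected gradient.
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(64, 8)))          # 8 library sounds x 64 frequency bins
w_true = np.array([0.0, 1.2, 0.0, 0.7, 0.0, 0.0, 0.3, 0.0])
y = D @ w_true + 0.01 * rng.normal(size=64)   # noisy observation

lam = 0.05                                    # prior rate = sparsity strength
step = 1.0 / np.linalg.norm(D.T @ D, 2)       # safe gradient step size
w = np.zeros(D.shape[1])
for _ in range(2000):                         # projected gradient on the MAP objective
    grad = D.T @ (D @ w - y) + lam
    w = np.maximum(w - step * grad, 0.0)      # enforce non-negativity (prior support)

print(np.round(w, 2))                         # should roughly recover w_true
```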

  8. Reaction μ⁻ + ⁶Li → ³H + ³H + ν_μ and the axial current form factor in the timelike region

    NASA Astrophysics Data System (ADS)

    Mintz, S. L.

    1983-09-01

    The differential muon-capture rate dΓ/dE_T is obtained for the reaction μ⁻ + ⁶Li → ³H + ³H + ν_μ over the allowed range of E_T, the tritium energy, for two assumptions concerning the behavior of F_A, the axial current form factor, in the timelike region: analytic continuation from the spacelike region, and mirror behavior, F_A(q², timelike) = F_A(q², spacelike). The values of dΓ/dE_T under these two assumptions are found to vary substantially in the timelike region as a function of the mass M_A in the dipole fit to F_A. Values of dΓ/dE_T are given for M_A² = 2m_π², 4.95m_π², and 8m_π². NUCLEAR REACTIONS Muon capture ⁶Li(μ⁻, ν_μ)³H³H; Γ and dΓ/dE_T calculated for two assumptions concerning the axial current form factor behavior in the timelike region.
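
    For reference, the dipole fit referred to above is conventionally written (sign/metric conventions vary; this is standard background, not quoted from the record) as

```latex
F_A(q^2) \;=\; \frac{F_A(0)}{\left(1 - q^2/M_A^2\right)^{2}},
```

    so that continuing it into the timelike region makes the computed rate sensitive to the assumed value of M_A.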

  9. A class of simple bouncing and late-time accelerating cosmologies in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Kuiroukidis, A.

    We consider the field equations for a flat FRW cosmological model in an a priori generic f(R) gravity model and cast them into a completely normalized and dimensionless system of ODEs for the scale factor and the function f(R), with respect to the scalar curvature R. It is shown that, under reasonable assumptions, namely a power-law functional form for the f(R) gravity model, one can produce simple analytical and numerical solutions describing bouncing cosmological models that are in addition late-time accelerating. The power-law form for the f(R) gravity model is typically considered in the literature as the most concrete, reasonable, practical and viable assumption [see S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 90 (2014) 124083, arXiv:1410.8183 [gr-qc]].

  10. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    DTIC Science & Technology

    2016-06-22

    ...this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi... exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between...

  11. Cognitive Linguistics and the Second Language Classroom

    ERIC Educational Resources Information Center

    Holme, Randal

    2012-01-01

    Cognitive Linguistics (CL) makes the functional assumption that form is motivated by meaning. CL also analyses form-meaning pairings as products of how cognition structures perception. CL thus helps teachers to fit language to the nature of the cognition that learns whilst devising modes of instruction that are better attuned to the nature of the…

  12. Score Equating and Nominally Parallel Language Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond

    Score equating requires that the forms to be equated are functionally parallel. That is, the two test forms should rank order examinees in a similar fashion. In language proficiency testing situations, this assumption is often put into doubt because of the numerous tests that have been proposed as measures of language proficiency and the…

  13. On pseudo-spectral time discretizations in summation-by-parts form

    NASA Astrophysics Data System (ADS)

    Ruggiu, Andrea A.; Nordström, Jan

    2018-05-01

    Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well-functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.

  14. The specification of a hospital cost function. A comment on the recent literature.

    PubMed

    Breyer, F

    1987-06-01

    In the empirical estimation of hospital cost functions, two radically different types of specifications have been chosen to date: ad hoc forms and flexible functional forms based on neoclassical production theory. This paper discusses the respective strengths and weaknesses of both approaches and emphasizes the apparently unreconcilable conflict between the goals of maintaining functional flexibility and keeping the number of variables manageable if at the same time patient heterogeneity is to be adequately reflected in the case mix variables. A new specification is proposed which strikes a compromise between these goals, and the underlying assumptions are discussed critically.

  15. Estimating Scale Economies and the Optimal Size of School Districts: A Flexible Form Approach

    ERIC Educational Resources Information Center

    Schiltz, Fritz; De Witte, Kristof

    2017-01-01

    This paper investigates estimation methods to model the relationship between school district size, costs per student and the organisation of school districts. We show that the assumptions on the functional form strongly affect the estimated scale economies and offer two possible solutions to allow for more flexibility in the estimation method.…

  16. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
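
    A minimal ABC rejection sketch illustrating the forward-model-plus-distance idea described above; the toy Gaussian model, flat prior, and tolerance are assumptions for illustration only, not the paper's HOD/population Monte Carlo pipeline:

```python
import numpy as np

# Likelihood-free inference: accept prior draws whose simulated summary statistic
# lies within epsilon of the observed summary (here, a simple sample mean).
rng = np.random.default_rng(1)
observed = rng.normal(2.0, 1.0, size=200)         # synthetic "data"
obs_summary = observed.mean()

def forward_model(theta, n=200):
    """Generative model: n draws from N(theta, 1), summarized by the mean."""
    return rng.normal(theta, 1.0, size=n).mean()

prior_draws = rng.uniform(-5.0, 5.0, size=20000)  # flat prior on theta
epsilon = 0.05                                    # distance tolerance
accepted = [t for t in prior_draws if abs(forward_model(t) - obs_summary) < epsilon]
print(f"{len(accepted)} accepted; ABC posterior mean ~ {np.mean(accepted):.2f}")
```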

  17. Adding Design Elements to Improve Time Series Designs: No Child Left behind as an Example of Causal Pattern-Matching

    ERIC Educational Resources Information Center

    Wong, Manyee; Cook, Thomas D.; Steiner, Peter M.

    2015-01-01

    Some form of a short interrupted time series (ITS) is often used to evaluate state and national programs. An ITS design with a single treatment group assumes that the pretest functional form can be validly estimated and extrapolated into the postintervention period where it provides a valid counterfactual. This assumption is problematic. Ambiguous…

  18. Glass dissolution as a function of pH and its implications for understanding mechanisms and future experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis

    For years, we have been using a certain form of the glass dissolution rate equation. In this article, I examine the assumptions that have been made and suggest that the rate equation may be more complex than originally thought. Suggestions of experiments that are needed to correct or validate the existing form of the rate equation are made.

  19. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  20. On the formulation of the aerodynamic characteristics in aircraft dynamics

    NASA Technical Reports Server (NTRS)

    Tobak, M.; Schiff, L. B.

    1976-01-01

    The theory of functionals is used to reformulate the notions of aerodynamic indicial functions and superposition. Integral forms for the aerodynamic response to arbitrary motions are derived that are free of dependence on a linearity assumption. Simplifications of the integral forms lead to practicable nonlinear generalizations of the linear superpositions and stability derivative formulations. Applied to arbitrary nonplanar motions, the generalization yields a form for the aerodynamic response that can be compounded of the contributions from a limited number of well-defined characteristic motions, in principle reproducible in the wind tunnel. Further generalizations that would enable the consideration of random fluctuations and multivalued aerodynamic responses are indicated.

  1. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
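
    For readers unfamiliar with the two best-fitting families named above, their standard parameterizations (as commonly written in this literature; not reproduced from the article) are

```latex
\text{Prelec-2:}\quad w(p) = \exp\!\bigl(-\delta\,(-\ln p)^{\gamma}\bigr),
\qquad
\text{Linear in Log Odds:}\quad w(p) = \frac{\delta\, p^{\gamma}}{\delta\, p^{\gamma} + (1-p)^{\gamma}} .
```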

  2. Predicting Diameter Distributions of Longleaf Pine Plantations: A Comparison Between Artificial Neural Networks and Other Accepted Methodologies

    Treesearch

    Daniel J. Leduc; Thomas G. Matney; Keith L. Belli; V. Clark Baldwin

    2001-01-01

    Artificial neural networks (NN) are becoming a popular estimation tool. Because they require no assumptions about the form of a fitting function, they can free the modeler from reliance on parametric approximating functions that may or may not satisfactorily fit the observed data. To date there have been few applications in forestry science, but as better NN software...

  3. The Use of Propensity Scores in Mediation Analysis

    ERIC Educational Resources Information Center

    Jo, Booil; Stuart, Elizabeth A.; MacKinnon, David P.; Vinokur, Amiram D.

    2011-01-01

    Mediation analysis uses measures of hypothesized mediating variables to test theory for how a treatment achieves effects on outcomes and to improve subsequent treatments by identifying the most efficient treatment components. Most current mediation analysis methods rely on untested distributional and functional form assumptions for valid…

  4. 78 FR 38976 - Proposed Agency Information Collection Activities; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... Reserve's functions; including whether the information has practical utility; b. The accuracy of the... the methodology and assumptions used; c. Ways to enhance the quality, utility, and clarity of the... Report Report title: Report of Selected Money Market Rates. Agency form number: FR 2420. OMB control...

  5. Welch Science Process Inventory, Form D. Revised.

    ERIC Educational Resources Information Center

    Welch, Wayne W.

    This inventory, developed for use with the Harvard Project Physics curriculum, consists of 135 two-choice (agree-disagree) items. Items cover perceptions of the role of scientists, the nature and functions of theories, underlying assumptions made by scientists, and other aspects of the scientific process. The test is suitable for high school…

  6. Robust Decision Making in a Nonlinear World

    ERIC Educational Resources Information Center

    Dougherty, Michael R.; Thomas, Rick P.

    2012-01-01

    The authors propose a general modeling framework called the general monotone model (GeMM), which allows one to model psychological phenomena that manifest as nonlinear relations in behavior data without the need for making (overly) precise assumptions about functional form. Using both simulated and real data, the authors illustrate that GeMM…

  7. Behavioral Variability of Choices versus Structural Inconsistency of Preferences

    ERIC Educational Resources Information Center

    Regenwetter, Michel; Davis-Stober, Clintin P.

    2012-01-01

    Theories of rational choice often make the structural consistency assumption that every decision maker's binary strict preference among choice alternatives forms a "strict weak order". Likewise, the very concept of a "utility function" over lotteries in normative, prescriptive, and descriptive theory is mathematically equivalent to strict weak…

  8. A Socioanalytic Model of Maturity

    ERIC Educational Resources Information Center

    Hogan, Robert; Roberts, Brent W.

    2004-01-01

    This article describes a point of view on maturity that departs from earlier treatments in two ways. First, it rejects the popular assumption from humanistic psychology that maturity is a function of self-actualization and stipulates that maturity is related to certain performance capacities--namely, the ability to form lasting relationships and to achieve…

  9. Uniqueness and characterization theorems for generalized entropies

    NASA Astrophysics Data System (ADS)

    Enciso, Alberto; Tempesta, Piergiulio

    2017-12-01

    The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.

  10. Sorption of small molecules in polymeric media

    NASA Astrophysics Data System (ADS)

    Camboni, Federico; Sokolov, Igor M.

    2016-12-01

    We discuss the sorption of penetrant molecules from the gas phase by a polymeric medium within a model which is very close in spirit to the dual sorption mode model: the penetrant molecules are partly dissolved within the polymeric matrix and partly fill the preexisting voids. The only difference with the initial dual sorption mode situation is the assumption that the two populations of molecules are in equilibrium with each other. Applying basic thermodynamic principles, we obtain the dependence of the penetrant concentration on the pressure in the gas phase and find that this is expressed via the Lambert W-function, a different functional form than the one proposed by the dual sorption mode model. The Lambert-like isotherms appear universally at low and moderate pressures and originate from the assumption that the internal energy in a polymer-penetrant-void ternary mixture is (in the lowest order) a bilinear form in the concentrations of the three components. Fitting the existing data shows that in the domain of parameters where the dual sorption mode model is typically applied, the Lambert function, which describes the same behavior as the one proposed by the gas-polymer matrix model, fits the data equally well.

  11. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  12. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  13. Bianchi type I in f(T) gravitational theories

    NASA Astrophysics Data System (ADS)

    Wanas, M. I.; Nashed, G. G. L.; Ibrahim, O. A.

    2016-05-01

    A tetrad field that is homogeneous and anisotropic, and which contains two unknown functions A(t) and B(t) of cosmic time, is applied to the field equations of f(T) gravity, where T is the torsion scalar, T = T^{μνρ} S_{μνρ}. We calculate the equation of continuity and rewrite it as a product of two brackets, the first a function of f(T) and the second a function of the two unknowns A(t) and B(t). We use two different relations between the two unknown functions A(t) and B(t) in the second bracket to solve it. Both of these relations give constant scalar torsion, and the solutions coincide with the de Sitter one. So, another assumption related to the contents of the matter fields is postulated. This assumption enables us to derive a solution with a non-constant value of the scalar torsion and a form of f(T) which represents ΛCDM. Project supported by the Egyptian Ministry of Scientific Research (Project No. 24-2-12).

  14. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

  15. Generativity in College Students: Comparing and Explaining the Impact of Mentoring

    ERIC Educational Resources Information Center

    Hastings, Lindsay J.; Griesen, James V.; Hoover, Richard E.; Creswell, John W.; Dlugosh, Larry L.

    2015-01-01

    Preparing college students to be active contributors to the next generation is an important function of higher education. This assumption about generativity forms a cornerstone in this mixed methods study that examined generativity levels among 273 college students at a 4-year public university. MANCOVA results indicated that college students who…

  16. Galaxy and Mass Assembly (GAMA): the star formation rate dependence of the stellar initial mass function

    NASA Astrophysics Data System (ADS)

    Gunawardhana, M. L. P.; Hopkins, A. M.; Sharp, R. G.; Brough, S.; Taylor, E.; Bland-Hawthorn, J.; Maraston, C.; Tuffs, R. J.; Popescu, C. C.; Wijesinghe, D.; Jones, D. H.; Croom, S.; Sadler, E.; Wilkins, S.; Driver, S. P.; Liske, J.; Norberg, P.; Baldry, I. K.; Bamford, S. P.; Loveday, J.; Peacock, J. A.; Robotham, A. S. G.; Zucker, D. B.; Parker, Q. A.; Conselice, C. J.; Cameron, E.; Frenk, C. S.; Hill, D. T.; Kelvin, L. S.; Kuijken, K.; Madore, B. F.; Nichol, B.; Parkinson, H. R.; Pimbblet, K. A.; Prescott, M.; Sutherland, W. J.; Thomas, D.; van Kampen, E.

    2011-08-01

    The stellar initial mass function (IMF) describes the distribution in stellar masses produced from a burst of star formation. For more than 50 yr, the implicit assumption underpinning most areas of research involving the IMF has been that it is universal, regardless of time and environment. We measure the high-mass IMF slope for a sample of low-to-moderate redshift galaxies from the Galaxy and Mass Assembly survey. The large range in luminosities and galaxy masses of the sample permits the exploration of underlying IMF dependencies. A strong IMF-star formation rate dependency is discovered, which shows that highly star-forming galaxies form proportionally more massive stars (they have IMFs with flatter power-law slopes) than galaxies with low star formation rates. This has a significant impact on a wide variety of galaxy evolution studies, all of which rely on assumptions about the slope of the IMF. Our result is supported by, and provides an explanation for, the results of numerous recent explorations suggesting a variation of or evolution in the IMF.

  17. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
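
    A minimal sketch of the empirical likelihood-ratio idea under conditional independence described above; the single synthetic predictor (slope angle), the binning, and the probabilities are illustrative assumptions, not the study's data:

```python
import numpy as np

# Per-variable empirical likelihood ratio: ratio of class-conditional histograms
# evaluated at a cell's predictor value. Under conditional independence, the
# ratios for several predictors (slope, aspect, lithology, ...) would be multiplied.
rng = np.random.default_rng(2)
slope = rng.uniform(0.0, 40.0, size=1000)                        # slope angle, degrees
landslide = rng.random(1000) < 1.0 / (1.0 + np.exp(-(slope - 25.0) / 4.0))

bins = np.linspace(0.0, 40.0, 9)
f_pos, _ = np.histogram(slope[landslide], bins=bins, density=True)
f_neg, _ = np.histogram(slope[~landslide], bins=bins, density=True)

def likelihood_ratio(x):
    i = np.clip(np.digitize(x, bins) - 1, 0, len(f_pos) - 1)
    return f_pos[i] / np.maximum(f_neg[i], 1e-12)

print(np.round(likelihood_ratio(np.array([10.0, 35.0])), 2))     # low vs. high relative hazard
```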

  18. Cluster analysis of word frequency dynamics

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.

    2015-01-01

    This paper describes the analysis and modelling of word usage frequency time series. In a previous study, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from birth to death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variations.

  19. Speaker-Versus Listener-Oriented Disfluency: A Re-Examination of Arguments and Assumptions from Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Engelhardt, Paul E.; Alfridijanta, Oliver; McMullon, Mhairi E. G.; Corley, Martin

    2017-01-01

    We re-evaluate conclusions about disfluency production in high-functioning forms of autism spectrum disorder (HFA). Previous studies examined individuals with HFA to address a theoretical question regarding speaker- and listener-oriented disfluencies. Individuals with HFA tend to be self-centric and have poor pragmatic language skills, and should…

  20. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10⁻¹⁷ m² on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.
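
    A minimal sketch of the linear-inversion step described above, assuming a precomputed matrix of Green's functions; the sizes, synthetic data, and the choice of non-negative least squares are illustrative, not the study's actual setup:

```python
import numpy as np
from scipy.optimize import nnls

# Displacements d are modeled as G @ s, where column j of G holds the
# (FEM-generated) surface displacements for unit dislocation on fault patch j.
rng = np.random.default_rng(3)
G = rng.normal(size=(30, 12))                    # 30 GPS components, 12 fault patches
s_true = np.maximum(rng.normal(1.0, 0.8, size=12), 0.0)
d = G @ s_true + 0.05 * rng.normal(size=30)      # synthetic observations with noise

s_est, misfit = nnls(G, d)                       # non-negative least-squares slip estimate
print(np.round(s_est, 2), round(misfit, 3))
```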

  1. Retaining the equilibrium point hypothesis as an abstract description of the neuromuscular system.

    PubMed

    Tresilian, J R

    1999-01-01

    The lambda version of the equilibrium point (EP) hypothesis for motor control is examined in light of recent criticisms of its various instantiations. Four important assumptions that have formed the basis for recent criticism are analyzed: First, the assumption that intact muscles possess invariant force-length characteristics (ICs). Second, that these ICs are of the same form in agonist-antagonist pairs. Third, that muscle control is monoparametric and that the control parameter, lambda, can be given a neurophysiological interpretation. Fourth, that reflex loop time delays and the known, asymmetric, nonlinear mechanical properties of muscles can be ignored. Mechanical and neurophysiological investigations of the neuromuscular system suggest that none of these assumptions is likely to be correct. This has been taken to mean that the EP hypothesis is oversimplified and a new approach is needed. It is argued that such an approach can be provided without rejecting the EP hypothesis, but rather by regarding it as an input-output description of muscle and associated segmental circuits. The operation of the segmental circuitry can be interpreted as having the function, at least in part, of compensating for a variety of nonlinearities and asymmetries such that the overall system implements the lambda-EP model equations.

  2. On Fluctuations of Eigenvalues of Random Band Matrices

    NASA Astrophysics Data System (ADS)

    Shcherbina, M.

    2015-10-01

    We consider the fluctuations of linear eigenvalue statistics of random band matrices whose entries have the form with i.i.d. possessing the th moment, where the function u has a finite support, so that M has only nonzero diagonals. The parameter b (called the bandwidth) is assumed to grow with n in a way such that . Without any additional assumptions on the growth of b we prove a CLT for linear eigenvalue statistics for a rather wide class of test functions. Thus we improve and generalize the results of the previous papers (Jana et al., arXiv:1412.2445; Li et al., Random Matrices 2:04, 2013), where the CLT was proven under the assumption . Moreover, we develop a method which allows us to prove automatically the CLT for linear eigenvalue statistics of smooth test functions for almost all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse matrices, diluted random matrices, matrices with heavy tails, etc.

  3. Dispensing Pollen via Catapult: Explosive Pollen Release in Mountain Laurel (Kalmia latifolia).

    PubMed

    Switzer, Callin M; Combes, Stacey A; Hopkins, Robin

    2018-06-01

    The astonishing amount of floral diversity has inspired countless assumptions about the function of diverse forms and their adaptive significance, yet many of these hypothesized functions are untested. We investigated an often-repeated adaptive hypothesis about how an extreme floral form functions. In this study, we conducted four investigations to understand the adaptive function of explosive pollination in Kalmia latifolia, the mountain laurel. We first performed a kinematic analysis of anther movement. Second, we constructed a heat map of pollen trajectories in three-dimensional space. Third, we conducted field observations of pollinators and their behaviors while visiting K. latifolia. Finally, we conducted a pollination experiment to investigate the importance of pollinators for fertilization success. Our results suggest that insect visitation dramatically improves fertilization success and that bees are the primary pollinators that trigger explosive pollen release.

  4. Effects of spatial grouping on the functional response of predators

    USGS Publications Warehouse

    Cosner, C.; DeAngelis, D.L.; Ault, J.S.; Olson, D.B.

    1999-01-01

    A unified mechanistic approach is given for the derivation of various forms of functional response in predator-prey models. The derivation is based on the principle of mass action, but with the crucial refinement that the nature of the spatial distribution of predators and/or opportunities for predation is taken into account in an implicit way. If the predators are assumed to have a homogeneous spatial distribution, then the derived functional response is prey-dependent. If the predators are assumed to form a dense colony or school in a single (possibly moving) location, or if the region where predators can encounter prey is assumed to be of limited size, then the functional response depends on both predator and prey densities in a manner that reflects feeding interference between predators. Depending on the specific assumptions, the resulting functional response may be of Beddington-DeAngelis type, of Hassell-Varley type, or ratio-dependent.
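
    For concreteness, the functional-response families named above are commonly written as follows (standard forms from the predator-prey literature, with N prey density, P predator density, a attack rate, h handling time, and c, m interference parameters; not quoted from the article):

```latex
\begin{aligned}
\text{Holling II (prey-dependent):}\quad & f(N) = \frac{aN}{1 + ahN},\\
\text{Beddington--DeAngelis:}\quad & f(N,P) = \frac{aN}{1 + ahN + cP},\\
\text{Hassell--Varley:}\quad & f(N,P) = \frac{aN}{P^{m} + ahN},\\
\text{ratio-dependent:}\quad & f(N,P) = \frac{a(N/P)}{1 + ah(N/P)} .
\end{aligned}
```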

  5. Some Classes of Imperfect Information Finite State-Space Stochastic Games with Finite-Dimensional Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEneaney, William M.

    2004-08-15

    Stochastic games under imperfect information are typically computationally intractable even in the discrete-time/discrete-state case considered here. We consider a problem where one player has perfect information. A function of a conditional probability distribution is proposed as an information state. In the problem form here, the payoff is only a function of the terminal state of the system, and the initial information state is either linear or a sum of max-plus delta functions. When the initial information state belongs to these classes, its propagation is finite-dimensional. The state feedback value function is also finite-dimensional, and obtained via dynamic programming, but has a nonstandard form due to the necessity of an expanded state variable. Under a saddle point assumption, Certainty Equivalence is obtained and the proposed function is indeed an information state.

  6. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    NASA Astrophysics Data System (ADS)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed a SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: no secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side channel, invasive, or modeling attacks.

  7. Data Transmission Signal Design and Analysis

    NASA Technical Reports Server (NTRS)

    Moore, J. D.

    1972-01-01

    The error performances of several digital signaling methods are determined as a function of a specified signal-to-noise ratio. Results are obtained for Gaussian noise and impulse noise. Performance of a receiver for differentially encoded biphase signaling is obtained by extending the results of differential phase shift keying. The analysis presented obtains a closed-form answer through the use of some simplifying assumptions. The results give an insight into the analysis problem, however, the actual error performance may show a degradation because of the assumptions made in the analysis. Bipolar signaling decision-threshold selection is investigated. The optimum threshold depends on the signal-to-noise ratio and requires the use of an adaptive receiver.

  8. Receptor theory and biological constraints on value.

    PubMed

    Berns, Gregory S; Capra, C Monica; Noussair, Charles

    2007-05-01

    Modern economic theories of value derive from expected utility theory. Behavioral evidence points strongly toward departures from linear value weighting, which has given rise to alternative formulations that include prospect theory and rank-dependent utility theory. Many of the nonlinear forms for value assumed by these theories can be derived from the assumption that value is signaled by neurotransmitters in the brain, which obey simple laws of molecular movement. From the laws of mass action and receptor occupancy, we show how behaviorally observed forms of nonlinear value functions can arise.
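
    The mass-action/receptor-occupancy relation the authors invoke is the standard one (general background, not the paper's specific derivation): for ligand concentration [L] and dissociation constant K_d, the fraction of receptors bound is

```latex
\theta \;=\; \frac{[L]}{[L] + K_d},
```

    a concave, saturating function of [L], which is the kind of nonlinear weighting exhibited by the value functions of prospect theory and rank-dependent utility theory.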

  9. A generating function approach to HIV transmission with dynamic contact rates

    DOE PAGES

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    2014-04-24

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease eliminations from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases both R0 and the probability of an epidemic also decrease.
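
    Standard branching-process background for the link between R0 and epidemic probability discussed above (the generic relationship, not the paper's specific derivation): if g(s) is the probability generating function of the number of transmissions from one newly infected person, then

```latex
R_0 = g'(1), \qquad q = g(q)\ \ (\text{smallest root in } [0,1]), \qquad
P(\text{epidemic from one introduction}) = 1 - q,
```

    which makes explicit that the epidemic probability depends on the whole offspring distribution, not only on its mean R0.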

  10. A generating function approach to HIV transmission with dynamic contact rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Severson, Ethan O.; Meadors, Grant D.; Volz, Erik M.

    The basic reproduction number, R0, is often defined as the average number of infections generated by a newly infected individual in a fully susceptible population. The interpretation, meaning, and derivation of R0 are controversial. However, in the context of mean field models, R0 demarcates the epidemic threshold below which the infected population approaches zero in the limit of time. In this manner, R0 has been proposed as a method for understanding the relative impact of public health interventions with respect to disease eliminations from a theoretical perspective. The use of R0 is made more complex by both the strong dependency of R0 on the model form and the stochastic nature of transmission. A common assumption in models of HIV transmission that have closed form expressions for R0 is that a single individual's behavior is constant over time. For this research, we derive expressions for both R0 and the probability of an epidemic in a finite population under the assumption that people periodically change their sexual behavior over time. We illustrate the use of generating functions as a general framework to model the effects of potentially complex assumptions on the number of transmissions generated by a newly infected person in a susceptible population. In conclusion, we find that the relationship between the probability of an epidemic and R0 is not straightforward, but that as the rate of change in sexual behavior increases both R0 and the probability of an epidemic also decrease.

  11. A very strong difference property for semisimple compact connected lie groups

    NASA Astrophysics Data System (ADS)

    Shtern, A. I.

    2011-06-01

    Let G be a topological group. For a function f: G → ℝ and h ∈ G, the difference function Δ_h f is defined by the rule Δ_h f(x) = f(xh) - f(x) (x ∈ G). A function H: G → ℝ is said to be additive if it satisfies the Cauchy functional equation H(x + y) = H(x) + H(y) for every x, y ∈ G. A class F of real-valued functions defined on G is said to have the difference property if, for every function f: G → ℝ satisfying Δ_h f ∈ F for each h ∈ G, there is an additive function H such that f - H ∈ F. Erdős' conjecture claiming that the class of continuous functions on ℝ has the difference property was proved by N. G. de Bruijn; later on, F. W. Carroll and F. S. Koehl obtained a similar result for compact Abelian groups and, under the additional assumption that the other one-sided difference function ∇_h f, defined by ∇_h f(x) = f(hx) - f(x) (x ∈ G, h ∈ G), is measurable for any h ∈ G, also for noncommutative compact metric groups. In the present paper, we consider a narrower class of groups, namely, the family of semisimple compact connected Lie groups. It turns out that these groups admit a significantly stronger difference property. Namely, if a function f: G → ℝ on a semisimple compact connected Lie group has continuous difference functions Δ_h f for any h ∈ G (without the additional assumption concerning the measurability of the functions of the form ∇_h f), then f is automatically continuous, and no nontrivial additive function of the form H is needed. Some applications are indicated, including difference theorems for homogeneous spaces of compact connected Lie groups.

  12. Weierstrass traveling wave solutions for dissipative Benjamin, Bona, and Mahony (BBM) equation

    NASA Astrophysics Data System (ADS)

    Mancas, Stefan C.; Spradlin, Greg; Khanal, Harihar

    2013-08-01

    In this paper, the effect of a small dissipation due to viscosity is included to find exact solutions to the modified Benjamin, Bona, and Mahony (BBM) equation. Using Lyapunov functions and dynamical systems theory, we prove that when viscosity is added to the BBM equation, in certain regions there still exist bounded traveling wave solutions in the form of solitary waves, periodic waves, and elliptic functions. By using the canonical form of the Abel equation, the polynomial Appell invariant makes the equation integrable in terms of Weierstrass ℘ functions. We use a general formalism based on Ince's transformation to write the general solution of the dissipative BBM in terms of ℘ functions, from which all the other known solutions can be obtained via simplifying assumptions. Using ODE (ordinary differential equation) analysis we show that the traveling wave speed is a bifurcation parameter that makes the transition between different classes of waves.

  13. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  14. The Application of Nonstandard Analysis to the Study of Inviscid Shock Wave Jump Conditions

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Baty, R. S.

    1998-01-01

    The use of conservation laws in nonconservative form for deriving shock jump conditions by Schwartz distribution theory leads to ambiguous products of generalized functions. Nonstandard analysis is used to define a class of Heaviside functions whose jump from zero to one occurs on an infinitesimal interval. These Heaviside functions differ by their microstructure near x = 0, i.e., by the nature of the rise within the infinitesimal interval. It is shown that the conservation laws in nonconservative form can relate the different Heaviside functions used to define jumps in different flow parameters. There are no mathematical or logical ambiguities in the derivation of the jump conditions. An important result is that the microstructure of the Heaviside function of the jump in entropy has a positive peak greater than one within the infinitesimal interval where the jump occurs. This phenomenon is known from more sophisticated studies of the structure of shock waves using the viscous fluid assumption. However, the present analysis is simpler and more direct.

  15. Breaking the polar-nonpolar division in solvation free energy prediction.

    PubMed

    Wang, Bao; Wang, Chengzhang; Wu, Kedi; Wei, Guo-Wei

    2018-02-05

    Implicit solvent models divide solvation free energies into polar and nonpolar additive contributions, whereas polar and nonpolar interactions are inseparable and nonadditive. We present a feature functional theory (FFT) framework to break this ad hoc division. The essential ideas of FFT are as follows: (i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecule from another; (ii) feature-function relationship assumption: the macroscopic features, including solvation free energy, of a molecule are functionals of microscopic feature vectors; and (iii) similarity assumption: molecules with similar microscopic features have similar macroscopic properties, such as solvation free energies. Based on these assumptions, solvation free energy prediction is carried out in the following protocol. First, we construct a molecular microscopic feature vector that is efficient in characterizing the solvation process using quantum mechanics and Poisson-Boltzmann theory. Microscopic feature vectors are combined with macroscopic features, that is, physical observables, to form extended feature vectors. Additionally, we partition a solvation dataset into queries according to molecular compositions. Moreover, for each target molecule, we adopt a machine learning algorithm for its nearest neighbor search, based on the selected microscopic feature vectors. Finally, from the extended feature vectors of the obtained nearest neighbors, we construct a functional of solvation free energy, which is employed to predict the solvation free energy of the target molecule. The proposed FFT model has been extensively validated via a large dataset of 668 molecules. The leave-one-out test gives an optimal root-mean-square error (RMSE) of 1.05 kcal/mol. FFT predictions of SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 challenge sets deliver RMSEs of 0.61, 1.86, 1.64, 0.86, and 1.14 kcal/mol, respectively. Using a test set of 94 molecules and its associated training set, the present approach was carefully compared with a classic solvation model based on weighted solvent accessible surface area. © 2017 Wiley Periodicals, Inc.
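
    The nearest-neighbor step of the protocol can be sketched in a few lines. The feature vectors and the inverse-distance averaging below are illustrative assumptions only; the published FFT model constructs its microscopic features from quantum mechanics and Poisson-Boltzmann theory and builds a functional from extended feature vectors rather than a simple weighted mean.

    ```python
    import numpy as np

    def predict_solvation_energy(query_features, train_features, train_energies, k=4):
        """Nearest-neighbour sketch: combine the solvation free energies of the k
        training molecules whose (hypothetical) microscopic feature vectors are
        closest to the query in Euclidean distance."""
        d = np.linalg.norm(train_features - query_features, axis=1)
        nearest = np.argsort(d)[:k]
        # Inverse-distance weighting; the actual FFT model fits a functional of
        # extended feature vectors rather than taking a weighted mean.
        w = 1.0 / (d[nearest] + 1e-9)
        return float(np.sum(w * train_energies[nearest]) / np.sum(w))

    # Toy data: 100 molecules, 8 assumed microscopic features each (synthetic).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    y = X @ rng.normal(size=8) + rng.normal(scale=0.3, size=100)  # kcal/mol (synthetic)
    print(predict_solvation_energy(X[0], X[1:], y[1:]))
    ```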

  16. HUMAN CAPITAL GROWTH AND POVERTY: EVIDENCE FROM ETHIOPIA AND PERU

    PubMed Central

    ATTANASIO, ORAZIO; MEGHIR, COSTAS; NIX, EMILY; SALVATI, FRANCESCA

    2017-01-01

    In this paper we use high quality data from two developing countries, Ethiopia and Peru, to estimate the production functions of human capital from age 1 to age 15. We characterize the nature of persistence and dynamic complementarities between two components of human capital: health and cognition. We also explore the implications of different functional form assumptions for the production functions. We find that more able and higher income parents invest more, particularly at younger ages when investments have the greatest impacts. These differences in investments by parental income lead to large gaps in inequality by age 8 that persist through age 15. PMID:28579736

  17. A new water retention and hydraulic conductivity model accounting for contact angle

    NASA Astrophysics Data System (ADS)

    Diamantopoulos, Efstathios; Durner, Wolfgang

    2013-04-01

    The description of soil water transport in the unsaturated zone requires knowledge of the soil hydraulic properties, i.e. the water retention and the hydraulic conductivity function. A great number of parameterizations for these can be found in the literature, the majority of which represent the complex pore space of soils as a bundle of cylindrical capillary tubes of various sizes. The assumption of zero contact angles between water and the surface of the grains is also made. However, these assumptions limit the predictive capabilities of these models, often leading to large errors in the prediction of water dynamics in soils. We present a pore-scale analysis of equilibrium liquid configurations (retention) in angular pores that takes the effect of contact angle into account. Furthermore, we propose an alternative derivation of the hydraulic conductivity function, again as a function of the contact angle, assuming flow perpendicular to the pore cross sections. Finally, we upscale our model from the pore to the sample scale by assuming a gamma statistical distribution of the pore sizes. Closed-form expressions are derived for both the sample water retention and conductivity functions. The new model was tested against experimental data from multistep inflow/outflow (MSI/MSO) experiments for a sandy material, conducted using ethanol and water as the wetting liquids. Ethanol was assumed to form a zero contact angle with the soil grains. The proposed model described both imbibition and drainage of water and ethanol very well. Lastly, the consideration of the contact angle allowed the description of the observed hysteresis.

  18. "What Makes My Image of Him into an Image of Him?": Philosophers on Film and the Question of Educational Meaning

    ERIC Educational Resources Information Center

    Gibbs, Alexis

    2017-01-01

    This paper proceeds from the premise that film can be educational in a broader sense than its current use in classrooms for illustrative purposes, and explores the idea that film might function as a form of education in itself. To investigate the phenomenon of film as education, it is necessary to first address a number of assumptions about film,…

  19. Effects of Adaptive Antenna Arrays on Broadband Signals.

    DTIC Science & Technology

    1980-06-01

    dimensional array geometry. The signal impinging on the antenna array elements is assumed to have originated from a point source in the far field. The assumptions used to identify the far-field region of an array also lead to an approximation for the inter-element time delay t_i(θ), namely t_i(θ) ≈ (x_i / c) sin(θ), which enters the implementation of the open-form transfer function and the coefficients of Eqs. (16) through (21).

  20. Prediction of the turbulent wake with second-order closure

    NASA Technical Reports Server (NTRS)

    Taulbee, D. B.; Lumley, J. L.

    1981-01-01

    A turbulence was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations with this model are presented, as well as interpretations of the results.

  1. A latent variable approach to study gene-environment interactions in the presence of multiple correlated exposures.

    PubMed

    Sánchez, Brisa N; Kang, Shan; Mukherjee, Bhramar

    2012-06-01

    Many existing cohort studies initially designed to investigate disease risk as a function of environmental exposures have collected genomic data in recent years with the objective of testing for gene-environment interaction (G × E) effects. In environmental epidemiology, interest in G × E arises primarily after a significant effect of the environmental exposure has been documented. Cohort studies often collect rich exposure data; as a result, assessing G × E effects in the presence of multiple exposure markers further increases the burden of multiple testing, an issue already present in both genetic and environmental health studies. Latent variable (LV) models have been used in environmental epidemiology to reduce the dimensionality of the exposure data, gain power by condensing exposure data to reduce multiplicity issues, and avoid collinearity problems due to the presence of multiple correlated exposures. We extend the LV framework to characterize gene-environment interaction in the presence of multiple correlated exposures and genotype categories. Further, similar to what has been done in case-control G × E studies, we use the assumption of gene-environment (G-E) independence to boost the power of tests for interaction. The consequences of making this assumption, and the issue of how to explicitly model G-E association, have not previously been investigated in LV models. We postulate a hierarchy of assumptions about the LV model regarding the different forms of G-E dependence and show that making such assumptions may influence inferential results on the G, E, and G × E parameters. We implement a class of shrinkage estimators to adaptively trade off between the most restrictive and the most flexible forms of the G-E dependence assumption, and note that such a class of compromise estimators can serve as a benchmark of model adequacy in LV models. We demonstrate the methods with an example from the Early Life Exposures in Mexico City to Neuro-Toxicants Study of lead exposure, iron metabolism genes, and birth weight. © 2011, The International Biometric Society.

  2. Cost characteristics of hospitals.

    PubMed

    Smet, Mike

    2002-09-01

    Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital (capacity and equipment)) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method to explain differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs are also briefly discussed.
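
    One widely used flexible functional form is the translog, in which the log of (variable) cost is regressed on the logs of outputs and input prices together with their squares and cross-products. The sketch below is a generic illustration of that idea with synthetic data, not the specification of any study discussed here.

    ```python
    import numpy as np

    def translog_design(Z):
        """Z: (n, m) array of log outputs and log input prices.
        Returns columns [1, z_i, 0.5 * z_i * z_j for i <= j] -- a generic
        translog expansion, not the specification of any particular study."""
        n, m = Z.shape
        cols = [np.ones((n, 1)), Z]
        for i in range(m):
            for j in range(i, m):
                cols.append(0.5 * (Z[:, i] * Z[:, j]).reshape(n, 1))
        return np.hstack(cols)

    # Synthetic illustration: two log outputs (e.g. cases, patient-days) and two
    # log input prices; all numbers are made up.
    rng = np.random.default_rng(1)
    Z = rng.normal(size=(200, 4))
    beta_true = rng.normal(size=1 + 4 + 10)   # intercept + linear + second-order terms
    X = translog_design(Z)
    log_cost = X @ beta_true + rng.normal(scale=0.1, size=200)
    beta_hat, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
    print(np.round(beta_hat[:5], 3))
    ```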

  3. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes only marginally different assumptions on pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  4. Expanded explorations into the optimization of an energy function for protein design

    PubMed Central

    Huang, Yao-ming; Bystroff, Christopher

    2014-01-01

    Nature possesses a secret formula for the energy as a function of the structure of a protein. In protein design, approximations are made to both the structural representation of the molecule and to the form of the energy equation, such that the existence of a general energy function for proteins is by no means guaranteed. Here we present new insights towards the application of machine learning to the problem of finding a general energy function for protein design. Machine learning requires the definition of an objective function, which carries with it the implied definition of success in protein design. We explored four functions, consisting of two functional forms, each with two criteria for success. Optimization was carried out by a Monte Carlo search through the space of all variable parameters. Cross-validation of the optimized energy function against a test set gave significantly different results depending on the choice of objective function, pointing to the relative correctness of the built-in assumptions. Novel energy cross-terms correct for the observed non-additivity of energy terms and an imbalance in the distribution of predicted amino acids. This paper expands on the work presented at ACM-BCB, Orlando, FL, October 2012. PMID:24384706

  5. A Markov chain model for reliability growth and decay

    NASA Technical Reports Server (NTRS)

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions the successive states form a Markov chain, and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure', and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
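
    The two-state, three-outcome structure of the special case can be sketched directly. The outcome probabilities and the repair rule below are illustrative assumptions, not values or policies from the paper; the sketch only shows how the marginal state chain and a long-run success probability follow from them.

    ```python
    import numpy as np

    states = ["unrepaired", "repaired"]
    outcomes = ["inherent failure", "assignable-cause failure", "success"]

    # Illustrative outcome probabilities given the current state (assumed numbers).
    p_outcome = {
        "unrepaired": np.array([0.05, 0.20, 0.75]),
        "repaired":   np.array([0.05, 0.00, 0.95]),
    }

    # Assumed state-update rule: an assignable-cause failure triggers a redesign,
    # i.e. the system enters the 'repaired' state; otherwise the state is unchanged.
    def next_state(state, outcome):
        return "repaired" if outcome == "assignable-cause failure" else state

    # Marginal transition matrix P[s, s'] of the state chain.
    P = np.zeros((2, 2))
    for i, s in enumerate(states):
        for k, o in enumerate(outcomes):
            j = states.index(next_state(s, o))
            P[i, j] += p_outcome[s][k]

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    reliability = sum(pi[i] * p_outcome[s][2] for i, s in enumerate(states))
    print(dict(zip(states, np.round(pi, 4))), "long-run success prob:", round(reliability, 4))
    ```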

  6. Design data needs modular high-temperature gas-cooled reactor. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1987-03-01

    The Design Data Needs (DDNs) provide summary statements, for program management, of the designer's need for experimental data to confirm or validate assumptions made in the design. These assumptions were developed using the Integrated Approach and are tabulated in the Functional Analysis Report. These assumptions were also necessary in the analyses or trade studies (A/TS) used to develop selections of hardware designs or design requirements. Each DDN includes statements providing traceability to the function and the associated assumption that gives rise to the need.

  7. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
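
    For reference, the closed-form choice probabilities and log-likelihood of the standard multinomial logit model, which the semi-nonparametric model generalizes, can be written compactly. The sketch below covers only this baseline, not the authors' Legendre-polynomial extension of the Gumbel distribution, and the synthetic data are illustrative assumptions.

    ```python
    import numpy as np

    def mnl_probabilities(V):
        """Closed-form multinomial logit choice probabilities for systematic
        utilities V (n_obs x n_alt), assuming i.i.d. standard Gumbel errors."""
        expV = np.exp(V - V.max(axis=1, keepdims=True))   # numerically stabilised
        return expV / expV.sum(axis=1, keepdims=True)

    def mnl_log_likelihood(beta, X, chosen):
        """X: (n_obs, n_alt, n_vars) alternative attributes; chosen: index of the
        chosen alternative for each observation."""
        P = mnl_probabilities(X @ beta)
        return np.log(P[np.arange(len(chosen)), chosen]).sum()

    # Tiny synthetic example with four alternatives (e.g. auto, transit, bicycle, walk).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4, 3))
    beta_true = np.array([0.8, -1.2, 0.4])
    U = X @ beta_true + rng.gumbel(size=(500, 4))
    chosen = U.argmax(axis=1)
    print(mnl_log_likelihood(beta_true, X, chosen))
    ```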

  8. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.

  9. Actuarial calculation for PSAK-24 purposes post-employment benefit using market-consistent approach

    NASA Astrophysics Data System (ADS)

    Effendie, Adhitya Ronnie

    2015-12-01

    In this paper we use a market-consistent approach to calculate the present value of the obligation of a company's post-employment benefits in accordance with PSAK-24 (the Indonesian accounting standard). We set actuarial assumptions such as the Indonesian TMI 2011 mortality tables for the mortality assumption, an accumulated salary function for the wage assumption, a disability assumption scaled to mortality, and a pre-defined turnover rate for the termination assumption. For the economic assumption, we use a binomial tree method with the estimated discount rate as its average movement. In accordance with PSAK-24, the Projected Unit Credit method has been adapted to determine the present value of the obligation (actuarial liability), so we use this method with a modification in its discount function.
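
    A heavily simplified Projected Unit Credit calculation can be sketched as follows. The flat discount rate, flat decrements, and benefit formula are illustrative assumptions standing in for the paper's TMI 2011 mortality table, scaled disability, accumulated salary function, and binomial-tree discount rates.

    ```python
    def puc_obligation(age, entry_age, retirement_age, salary,
                       salary_growth=0.05, discount_rate=0.08,
                       q_mort=0.001, q_turnover=0.02, benefit_factor=2.0):
        """Very simplified Projected Unit Credit sketch (illustrative assumptions only):
        benefit at retirement = benefit_factor * final monthly salary * total service,
        of which the part earned to date is attributed to past service."""
        years_to_ret = retirement_age - age
        past_service = age - entry_age
        total_service = retirement_age - entry_age

        projected_salary = salary * (1 + salary_growth) ** years_to_ret
        benefit_at_retirement = benefit_factor * projected_salary * total_service
        attributed = benefit_at_retirement * past_service / total_service

        # Probability of staying in service until retirement under flat assumed
        # mortality and turnover decrements (the paper's table-based decrements
        # and binomial-tree discounting are omitted here).
        p_stay = ((1 - q_mort) * (1 - q_turnover)) ** years_to_ret
        discount = (1 + discount_rate) ** (-years_to_ret)
        return attributed * p_stay * discount

    print(round(puc_obligation(age=40, entry_age=30, retirement_age=55, salary=10_000_000), 2))
    ```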

  10. Discrete Thermodynamics

    DOE PAGES

    Margolin, L. G.; Hunter, A.

    2017-10-18

    Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.

  11. Discrete Thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L. G.; Hunter, A.

    Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.

  12. What Mathematics Education Can Learn from Art: The Assumptions, Values, and Vision of Mathematics Education

    ERIC Educational Resources Information Center

    Dietiker, Leslie

    2015-01-01

    Elliot Eisner proposed that educational challenges can be met by applying an artful lens. This article draws from Eisner's proposal to consider the assumptions, values, and vision of mathematics education by theorizing mathematics curriculum as an art form. By conceptualizing mathematics curriculum (both in written and enacted forms) as stories…

  13. 7 CFR 1980.476 - Transfer and assumptions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...

  14. Numerical evaluation of longitudinal motions of Wigley hulls advancing in waves by using Bessho form translating-pulsating source Green's function

    NASA Astrophysics Data System (ADS)

    Xiao, Wenbin; Dong, Wencai

    2016-06-01

    In the framework of 3D potential flow theory, the Bessho form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for an advancing ship in regular waves. Numerical characteristics of the Green's function show that the contribution of local-flow components to the velocity potential is concentrated near the source point and that the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without the local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method to various ship forms. In addition, a mesh analysis of the discretized surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is only appropriate for qualitative analysis of the motion response in waves if the ship's geometrical shape fails to satisfy the slender-body assumption.

  15. Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; King, R. L.

    1977-01-01

    The current version of the NASA turbulence simulation model, in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.

  16. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension is given to line-transect sampling.

  17. Black hole binaries dynamically formed in globular clusters

    NASA Astrophysics Data System (ADS)

    Park, Dawoo; Kim, Chunglee; Lee, Hyung Mok; Bae, Yeong-Bok; Belczynski, Krzysztof

    2017-08-01

    We investigate properties of black hole (BH) binaries formed in globular clusters via dynamical processes, using direct N-body simulations. We pay attention to the effects of the BH mass function on the total mass and mass ratio distributions of BH binaries ejected from clusters. First, we consider BH populations with two different masses in order to learn the basic differences from models with single-mass BHs only. Secondly, we consider continuous BH mass functions adapted from recent studies on massive star evolution in a low-metallicity environment, where globular clusters are formed. In this work, we consider only binaries that are formed by three-body processes and ignore stellar evolution and primordial binaries for simplicity. Our results imply that most BH binary mergers take place after they get ejected from the cluster. Also, mass ratios of dynamically formed binaries should be close to 1 or likely to be less than 2:1. Since the binary formation efficiency is larger for higher-mass BHs, it is likely that a BH mass function sampled by gravitational-wave observations would be weighted towards higher masses than the mass function of single BHs for a dynamically formed population. Applying conservative assumptions regarding globular cluster populations, such as a small BH mass fraction and no primordial binaries, the merger rate of BH binaries originating from globular clusters is estimated to be at least 6.5 yr⁻¹ Gpc⁻³. The actual rate can be up to more than several times our conservative estimate.

  18. Acuity of a Cryptochrome and Vision-Based Magnetoreception System in Birds

    PubMed Central

    Solov'yov, Ilia A.; Mouritsen, Henrik; Schulten, Klaus

    2010-01-01

    The magnetic compass of birds is embedded in the visual system and it has been hypothesized that the primary sensory mechanism is based on a radical pair reaction. Previous models of magnetoreception have assumed that the radical pair-forming molecules are rigidly fixed in space, and this assumption has been a major objection to the suggested hypothesis. In this article, we investigate theoretically how much disorder is permitted for the radical pair-forming, protein-based magnetic compass in the eye to remain functional. Our study shows that only one rotational degree of freedom of the radical pair-forming protein needs to be partially constrained, while the other two rotational degrees of freedom do not impact the magnetoreceptive properties of the protein. The result implies that any membrane-associated protein is sufficiently restricted in its motion to function as a radical pair-based magnetoreceptor. We relate our theoretical findings to the cryptochromes, currently considered the likeliest candidate to furnish radical pair-based magnetoreception. PMID:20655831

  19. Employing the Components of the Human Development Index to Drive Resources to Educational Policies

    ERIC Educational Resources Information Center

    Sant'Anna, Annibal Parracho; de Araujo Ribeiro, Rodrigo Otavio; Dutt-Ross, Steven

    2011-01-01

    A new form of composition of the indicators employed to generate the United Nations Human Development Index (HDI) is presented here. This form of composition is based on the assumption that random errors affect the measurement of each indicator. This assumption allows for replacing the vector of evaluations according to each indicator by vectors…

  20. Three statistical models for estimating length of stay.

    PubMed Central

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program. PMID:914532

  1. Three statistical models for estimating length of stay.

    PubMed

    Selvin, S

    1977-01-01

    The probability density functions implied by three methods of collecting data on the length of stay in an institution are derived. The expected values associated with these density functions are used to calculate unbiased estimates of the expected length of stay. Two of the methods require an assumption about the form of the underlying distribution of length of stay; the third method does not. The three methods are illustrated with hypothetical data exhibiting the Poisson distribution, and the third (distribution-independent) method is used to estimate the length of stay in a skilled nursing facility and in an intermediate care facility for patients enrolled in California's MediCal program.

  2. A flexible model for the mean and variance functions, with application to medical cost data.

    PubMed

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Dark energy cosmology with tachyon field in teleparallel gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motavalli, H., E-mail: Motavalli@Tabrizu.ac.ir; Akbarieh, A. Rezaei; Nasiry, M.

    2016-07-15

    We construct a tachyon teleparallel dark energy model for a homogeneous and isotropic flat universe in which a tachyon, as a non-canonical scalar field, is non-minimally coupled to gravity in the framework of teleparallel gravity. The explicit forms of the potential and coupling functions are obtained under the assumption that the Lagrangian admits the Noether symmetry approach. The dynamical behavior of the basic cosmological observables is compared to recent observational data, which implies that the tachyon field may serve as a candidate for dark energy.

  4. Peptides at Membrane Surfaces and their Role in the Origin of Life

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Wilson, Michael A.; DeVincenzi, D. (Technical Monitor)

    2002-01-01

    All ancestors of contemporary cells (protocells) had to transport ions and organic matter across membranous walls, capture and utilize energy, and transduce environmental signals. In modern organisms, all these functions are performed by membrane proteins. We make the parsimonious assumption that in the protobiological milieu the same functions were carried out by their simple analogs - peptides. This, however, required that simple peptides could self-organize into ordered, functional structures. In a series of detailed, molecular-level computer simulations we demonstrated how this is possible. One example is the peptide (LSLLLSL)3, which forms a tetrameric bundle capable of transporting protons across membranes. Another example is the transmembrane pore of the influenza M2 protein. This aggregate of four identical alpha-helices, each built of 25 amino acids, forms an efficient and selective voltage-gated proton channel. Our simulations explain the gating mechanism in this channel. The channel can be re-engineered into a simple proton pump.

  5. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward, delivered sooner, and a larger one, delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in the probability of receiving them. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk-seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and is supported by phenomenological models.
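
    Two parametric probability weighting functions frequently cited in this literature are the Tversky-Kahneman and Prelec forms; they illustrate the kind of parametric shapes a generalized w(p) is meant to encompass (whether they coincide exactly with the limiting cases of the authors' generalized function is not asserted here, and the parameter values below are merely conventional estimates).

    ```python
    import numpy as np

    def w_tversky_kahneman(p, gamma=0.61):
        """Tversky-Kahneman (1992) probability weighting function."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def w_prelec(p, alpha=0.65, beta=1.0):
        """Prelec (1998) probability weighting function."""
        return np.exp(-beta * (-np.log(p)) ** alpha)

    p = np.linspace(0.01, 0.99, 5)
    print(np.round(w_tversky_kahneman(p), 3))
    print(np.round(w_prelec(p), 3))
    ```

    Both forms overweight small probabilities and underweight large ones, producing the characteristic inverse-S shape that the nonlinear transformations of the probability scale are intended to capture.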

  6. Abductive Equivalential Translation and its application to Natural Language Database Interfacing

    NASA Astrophysics Data System (ADS)

    Rayner, Manny

    1994-05-01

    The thesis describes a logical formalization of natural-language database interfacing. We assume the existence of a ``natural language engine'' capable of mediating between surface linguistic strings and their representations as ``literal'' logical forms: the focus of interest will be the question of relating ``literal'' logical forms to representations in terms of primitives meaningful to the underlying database engine. We begin by describing the nature of the problem, and show how a variety of interface functionalities can be considered as instances of a type of formal inference task which we call ``Abductive Equivalential Translation'' (AET); functionalities which can be reduced to this form include answering questions, responding to commands, reasoning about the completeness of answers, answering meta-questions of the type ``Do you know...'', and generating assertions and questions. In each case, a ``linguistic domain theory'' (LDT) Γ and an input formula F are given, and the goal is to construct a formula with certain properties which is equivalent to F, given Γ and a set of permitted assumptions. If the LDT is of a certain specified type, whose formulas are either conditional equivalences or Horn clauses, we show that the AET problem can be reduced to a goal-directed inference method. We present an abstract description of this method, and sketch its realization in Prolog. The relationship between AET and several problems previously discussed in the literature is discussed. In particular, we show how AET can provide a simple and elegant solution to the so-called ``Doctor on Board'' problem, and in effect allows a ``relativization'' of the Closed World Assumption. The ideas in the thesis have all been implemented concretely within the SRI CLARE project, using a real projects-and-payments database. The LDT for the example database is described in detail, and examples of the types of functionality that can be achieved within the example domain are presented.

  7. From crater functions to partial differential equations: a new approach to ion bombardment induced nonequilibrium pattern formation.

    PubMed

    Norris, Scott A; Brenner, Michael P; Aziz, Michael J

    2009-06-03

    We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.

  8. Ice cream and orbifold Riemann-Roch

    NASA Astrophysics Data System (ADS)

    Buckley, Anita; Reid, Miles; Zhou, Shengtian

    2013-06-01

    We give an orbifold Riemann-Roch formula in closed form for the Hilbert series of a quasismooth polarized n-fold (X,D), under the assumption that X is projectively Gorenstein with only isolated orbifold points. Our formula is a sum of parts each of which is integral and Gorenstein symmetric of the same canonical weight; the orbifold parts are called ice cream functions. This form of the Hilbert series is particularly useful for computer algebra, and we illustrate it on examples of K3 surfaces and Calabi-Yau 3-folds. These results apply also with higher dimensional orbifold strata (see [1] and [2]), although the precise statements are considerably trickier. We expect to return to this in future publications.

  9. Propagation of sound waves through a linear shear layer: A closed form solution

    NASA Technical Reports Server (NTRS)

    Scott, J. N.

    1978-01-01

    Closed-form solutions are presented for sound propagation from a line source in or near a shear layer. The analysis was exact for all frequencies and was developed assuming a linear velocity profile in the shear layer. This assumption allowed the solution to be expressed in terms of parabolic cylinder functions. The solution is presented for a line monopole source first embedded in the uniform flow and then in the shear layer. Solutions are also discussed for certain types of dipole and quadrupole sources. Asymptotic expansions of the exact solutions for small and large values of the Strouhal number gave expressions which correspond to solutions previously obtained for these limiting cases.

  10. The Excursion Set Theory of Halo Mass Functions, Halo Clustering, and Halo Growth

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.

    I review the excursion set theory with particular attention toward applications to cold dark matter halo formation and growth, halo abundance, and halo clustering. After a brief introduction to notation and conventions, I begin by recounting the heuristic argument leading to the mass function of bound objects given by Press and Schechter. I then review the more formal derivation of the Press-Schechter halo mass function that makes use of excursion sets of the density field. The excursion set formalism is powerful and can be applied to numerous other problems. I review the excursion set formalism for describing both halo clustering and bias and the properties of void regions. As one of the most enduring legacies of the excursion set approach and one of its most common applications, I spend considerable time reviewing the excursion set theory of halo growth. This section of the review culminates with the description of two Monte Carlo methods for generating ensembles of halo mass accretion histories. In the last section, I emphasize that the standard excursion set approach is the result of several simplifying assumptions. Dropping these assumptions can lead to more faithful predictions and open excursion set theory to new applications. One such assumption is that the height of the barriers that define collapsed objects is a constant function of scale. I illustrate the implementation of the excursion set approach for barriers of arbitrary shape. One such application is the now well-known improvement of the excursion set mass function derived from the "moving" barrier for ellipsoidal collapse. I also emphasize that the statement that halo accretion histories are independent of halo environment in the excursion set approach is not a general prediction of the theory. It is a simplifying assumption. I review the method for constructing correlated random walks of the density field in the more general case. I construct a simple toy model to illustrate that excursion set theory (with a constant barrier height) makes a simple and general prediction for the relation between halo accretion histories and the large-scale environments of halos: regions of high density preferentially contain late-forming halos and conversely for regions of low density. I conclude with a brief discussion of the importance of this prediction relative to recent numerical studies of the environmental dependence of halo properties.
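
    A toy Monte Carlo of the constant-barrier first-crossing problem that underlies the Press-Schechter mass function makes the excursion set construction concrete: with a sharp k-space filter the smoothed density contrast performs a random walk with independent Gaussian increments as the variance S grows, and the distribution of first crossings of the barrier δ_c can be compared with the known analytic result. The numerical settings below are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    delta_c = 1.686                      # linear collapse threshold
    n_walks, n_steps, dS = 20_000, 400, 0.02

    # Sharp k-space filter => independent Gaussian increments of variance dS,
    # so each trajectory delta(S) is a discretised Brownian motion.
    walks = np.cumsum(rng.normal(scale=np.sqrt(dS), size=(n_walks, n_steps)), axis=1)

    crossed = walks >= delta_c
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), -1)
    S_cross = np.where(first >= 0, (first + 1) * dS, np.inf)   # first-crossing "time"

    # Empirical first-crossing density versus the analytic result
    # f(S) = delta_c / sqrt(2 pi S^3) * exp(-delta_c^2 / (2 S)).
    bins = np.linspace(0.0, n_steps * dS, 41)
    hist, edges = np.histogram(S_cross[np.isfinite(S_cross)], bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f_emp = hist / (n_walks * np.diff(edges))
    f_ana = delta_c / np.sqrt(2.0 * np.pi * centers**3) * np.exp(-delta_c**2 / (2.0 * centers))
    for c, fe, fa in list(zip(centers, f_emp, f_ana))[:5]:
        print(f"S = {c:.2f}: empirical {fe:.3f}, analytic {fa:.3f}")
    ```

    Identifying S with the variance σ²(M) of the density field smoothed on mass scale M turns this first-crossing distribution into the Press-Schechter mass function.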

  11. Basic lubrication equations

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Dowson, D.

    1981-01-01

    Lubricants, usually Newtonian fluids, are assumed to experience laminar flow. The basic equations used to describe the flow are the Navier-Stokes equations of motion. The study of hydrodynamic lubrication is, from a mathematical standpoint, the application of a reduced form of these Navier-Stokes equations in association with the continuity equation. The Reynolds equation can also be derived from first principles, provided of course that the same basic assumptions are adopted in each case. Both methods are used in deriving the Reynolds equation, and the assumptions inherent in reducing the Navier-Stokes equations are specified. Because the Reynolds equation contains viscosity and density terms, and these properties depend on temperature and pressure, it is often necessary to couple the Reynolds equation with the energy equation. The lubricant properties and the energy equation are presented. Film thickness, a parameter of the Reynolds equation, is a function of the elastic behavior of the bearing surface. The governing elasticity equation is therefore presented.
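
    For reference, a commonly quoted form of the Reynolds equation for a Newtonian lubricant of density ρ, viscosity μ, and film thickness h, with surface velocities u_a and u_b, is written below; sign and velocity conventions vary between texts, so this should be read as a representative form rather than the exact equation derived in the cited work.

    ```latex
    % Representative thin-film (Reynolds) equation; conventions vary by reference.
    \frac{\partial}{\partial x}\!\left(\frac{\rho h^{3}}{12\mu}\,\frac{\partial p}{\partial x}\right)
    + \frac{\partial}{\partial y}\!\left(\frac{\rho h^{3}}{12\mu}\,\frac{\partial p}{\partial y}\right)
    = \frac{\partial}{\partial x}\!\left(\frac{\rho h\,(u_a + u_b)}{2}\right)
    + \frac{\partial (\rho h)}{\partial t}
    ```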

  12. Elucidating the Dark Side of Envy: Distinctive Links of Benign and Malicious Envy With Dark Personalities

    PubMed Central

    Lange, Jens; Paulhus, Delroy L.; Crusius, Jan

    2017-01-01

    Researchers have recently drawn a contrast between two forms of envy: benign and malicious envy. In three studies (total N = 3,123), we challenge the assumption that malicious envy is destructive, whereas benign envy is entirely constructive. Instead, both forms have links with the Dark Triad of personality. Benign envy is associated with Machiavellian behaviors, whereas malicious envy is associated with both Machiavellian and psychopathic behaviors. In Study 1, this pattern emerged from meta-analyzed trait correlations. In Study 2, a manipulation affecting the envy forms mediated an effect on antisocial behavioral intentions. Study 3 replicated these patterns by linking envy to specific antisocial behaviors and their impact on status in the workplace. Together, our correlational and experimental results suggest that the two forms of envy can both be malevolent. Instead of evaluating envy’s morality, we propose to focus on its functional value. PMID:29271287

  13. Extension of the method of moments for population balances involving fractional moments and application to a typical agglomeration problem.

    PubMed

    Alexiadis, Alessio; Vanni, Marco; Gardin, Pascal

    2004-08-01

    The method of moments (MOM) is a powerful tool for solving population balances. Nevertheless it cannot be used in every circumstance. Sometimes, in fact, it is not possible to write the governing equations in closed form. Higher moments, for instance, could appear in the evolution of the lower ones. This obstacle has often been resolved by prescribing some functional form for the particle size distribution. Another example is the occurrence of fractional moments, usually connected with the presence of fractal aggregates. For this case we propose a procedure that does not need any assumption on the form of the distribution but is based on the "moments generating function" (that is, the Laplace transform of the distribution). An important result of probability theory is that the kth derivative of the moment generating function represents the kth moment of the original distribution. This result concerns integer moments but, taking into account the Weyl fractional derivative, can be extended to fractional orders. Approximating the fractional derivative makes it possible to express the fractional moments in terms of the integer ones and thus to use the method of moments in the usual way.

  14. Evaluation of psychometric properties and differential item functioning of 8-item Child Perceptions Questionnaires using item response theory.

    PubMed

    Yau, David T W; Wong, May C M; Lam, K F; McGrath, Colman

    2015-08-19

    The four-factor structure of the two 8-item short forms of the Child Perceptions Questionnaire CPQ11-14 (RSF:8 and ISF:8) has been confirmed. However, the sum scores are typically reported in practice as a proxy of Oral health-related Quality of Life (OHRQoL), which implies a unidimensional structure. This study first assessed the unidimensionality of the 8-item short forms of CPQ11-14. Item response theory (IRT) was employed to offer an alternative and complementary approach to validation and to overcome the limitations of classical test theory assumptions. A random sample of 649 12-year-old school children in Hong Kong was analyzed. Unidimensionality of the scale was tested by confirmatory factor analysis (CFA), principal component analysis (PCA) and the local dependency (LD) statistic. A graded response model was fitted to the data. The contribution of each item to the scale was assessed by the item information function (IIF). Reliability of the scale was assessed by the test information function (TIF). Differential item functioning (DIF) across gender was identified by the Wald test and expected score functions. Both CPQ11-14 RSF:8 and ISF:8 did not deviate much from the unidimensionality assumption. Results from CFA indicated acceptable fit of the one-factor model. PCA indicated that the first principal component explained >30 % of the total variation, with high factor loadings for both RSF:8 and ISF:8. Almost all LD statistics were <10, indicating the absence of local dependency. Flat and low IIFs were observed for the oral symptoms items, suggesting that they contribute little information to the scale and that their removal would have little practical impact. Comparing the TIFs, RSF:8 showed slightly better information than ISF:8. In addition to the oral symptoms items, the item "Concerned with what other people think" demonstrated a uniform DIF (p < 0.001). The expected score functions were not much different between boys and girls. Items related to oral symptoms were not informative for OHRQoL and deletion of these items is suggested. The impact of DIF across gender on the overall score was minimal. CPQ11-14 RSF:8 performed slightly better than ISF:8 in measurement precision. The 6-item short forms suggested by the IRT validation should be further investigated to ensure their robustness, responsiveness and discriminative performance.

  15. Consistency tests for the extraction of the Boer-Mulders and Sivers functions

    NASA Astrophysics Data System (ADS)

    Christova, E.; Leader, E.; Stoilov, M.

    2018-03-01

    At present, the Boer-Mulders (BM) function for a given quark flavor is extracted from data on semi-inclusive deep inelastic scattering (SIDIS) using the simplifying assumption that it is proportional to the Sivers function for that flavor. In a recent paper, we suggested that the consistency of this assumption could be tested using information on so-called difference asymmetries i.e. the difference between the asymmetries in the production of particles and their antiparticles. In this paper, using the SIDIS COMPASS deuteron data on the ⟨cos ϕh⟩ , ⟨cos 2 ϕh⟩ and Sivers difference asymmetries, we carry out two independent consistency tests of the assumption of proportionality, but here applied to the sum of the valence-quark contributions. We find that such an assumption is compatible with the data. We also show that the proportionality assumptions made in the existing parametrizations of the BM functions are not compatible with our analysis, which suggests that the published results for the Boer-Mulders functions for individual flavors are unreliable. The ⟨cos ϕh⟩ and ⟨cos 2 ϕh⟩ asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  16. Joint estimation of preferential attachment and node fitness in growing complex networks

    NASA Astrophysics Data System (ADS)

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-09-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints that works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. But surprisingly the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit.

  17. Joint estimation of preferential attachment and node fitness in growing complex networks

    PubMed Central

    Pham, Thong; Sheridan, Paul; Shimodaira, Hidetoshi

    2016-01-01

    Complex network growth across diverse fields of science is hypothesized to be driven in the main by a combination of preferential attachment and node fitness processes. For measuring the respective influences of these processes, previous approaches make strong and untested assumptions on the functional forms of either the preferential attachment function or fitness function or both. We introduce a Bayesian statistical method called PAFit to estimate preferential attachment and node fitness without imposing such functional constraints that works by maximizing a log-likelihood function with suitably added regularization terms. We use PAFit to investigate the interplay between preferential attachment and node fitness processes in a Facebook wall-post network. While we uncover evidence for both preferential attachment and node fitness, thus validating the hypothesis that these processes together drive complex network evolution, we also find that node fitness plays the bigger role in determining the degree of a node. This is the first validation of its kind on real-world network data. But surprisingly the rate of preferential attachment is found to deviate from the conventional log-linear form when node fitness is taken into account. The proposed method is implemented in the R package PAFit. PMID:27601314

  18. Discovering functional interdependence relationship in PPI networks for protein complex identification.

    PubMed

    Lam, Winnie W M; Chan, Keith C C

    2012-04-01

    Protein molecules interact with each other in protein complexes to perform many vital functions, and different computational techniques have been developed to identify protein complexes in protein-protein interaction (PPI) networks. These techniques are developed to search for subgraphs of high connectivity in PPI networks under the assumption that the proteins in a protein complex are highly interconnected. While these techniques have been shown to be quite effective, the matching rate between the protein complexes they discover and those previously determined experimentally can be relatively low, and the "false-alarm" rate can be relatively high. This is especially the case when the assumption that proteins in protein complexes are more highly interconnected is relatively invalid. To increase the matching rate and reduce the false-alarm rate, we have developed a technique that can work effectively without having to make this assumption. The technique, called protein complex identification by discovering functional interdependence (PCIFI), searches for protein complexes in PPI networks by taking into consideration both the functional interdependence relationship between protein molecules and the topology of the network. PCIFI works in several steps. The first step is to construct a multiple-function protein network graph by labeling each vertex with one or more of the molecular functions it performs. The second step is to filter out protein interactions between protein pairs that are not functionally interdependent in the statistical sense. The third step is to make use of an information-theoretic measure to determine the strength of the functional interdependence between all remaining interacting protein pairs. The last step is to form protein complexes based on the measure of the strength of functional interdependence and the connectivity between proteins. For performance evaluation, PCIFI was used to identify protein complexes in real PPI network data, and the protein complexes it found were matched against those previously known in MIPS. The results show that PCIFI can be an effective technique for the identification of protein complexes. The protein complexes it found can match more known protein complexes with a smaller false-alarm rate and can provide useful insights into the understanding of the functional interdependence relationships between proteins in protein complexes.

  19. Response of a rigid aircraft to nonstationary atmospheric turbulence.

    NASA Technical Reports Server (NTRS)

    Verdon, J. M.; Steiner, R.

    1973-01-01

    The plunging response of an aircraft to a type of nonstationary turbulent excitation is considered. The latter consists of stationary Gaussian noise modulated by a well-defined envelope function. The intent of the investigation is to model the excitation experienced by an airplane flying through turbulence of varying intensity and to examine the influence of intensity variations on exceedance frequencies of the gust velocity and the airplane's plunging velocity and acceleration. One analytical advantage of the proposed model is that the Gaussian assumption for the gust excitation is retained. The analysis described herein is developed in terms of an envelope function of arbitrary form; however, numerical calculations are limited to the case of harmonic modulation.

  20. Neurobiological roots of language in primate audition: common computational properties.

    PubMed

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. A machine learning approach to predicting protein-ligand binding affinity with applications to molecular docking.

    PubMed

    Ballester, Pedro J; Mitchell, John B O

    2010-05-01

    Accurately predicting the binding affinities of large sets of diverse protein-ligand complexes is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for analysing the outputs of molecular docking, which in turn is an important technique for drug discovery, chemical biology and structural biology. Each scoring function assumes a predetermined theory-inspired functional form for the relationship between the variables that characterize the complex, which also include parameters fitted to experimental or simulation data and its predicted binding affinity. The inherent problem of this rigid approach is that it leads to poor predictivity for those complexes that do not conform to the modelling assumptions. Moreover, resampling strategies, such as cross-validation or bootstrapping, are still not systematically used to guard against the overfitting of calibration data in parameter estimation for scoring functions. We propose a novel scoring function (RF-Score) that circumvents the need for problematic modelling assumptions via non-parametric machine learning. In particular, Random Forest was used to implicitly capture binding effects that are hard to model explicitly. RF-Score is compared with the state of the art on the demanding PDBbind benchmark. Results show that RF-Score is a very competitive scoring function. Importantly, RF-Score's performance was shown to improve dramatically with training set size and hence the future availability of more high-quality structural and interaction data is expected to lead to improved versions of RF-Score. pedro.ballester@ebi.ac.uk; jbom@st-andrews.ac.uk Supplementary data are available at Bioinformatics online.

  2. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  3. Statistical power as a function of Cronbach alpha of instrument questionnaire items.

    PubMed

    Heo, Moonseong; Kim, Namhee; Faith, Myles S

    2015-10-14

    In countless number of clinical trials, measurements of outcomes rely on instrument questionnaire items which however often suffer measurement error problems which in turn affect statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. Scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume fixed true score variance assumption as opposed to usual fixed total variance assumption. That assumption is critical and practically relevant to show that smaller measurement errors are inversely associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing significance of C(α). Closed-form power functions and samples size determination formulas are derived in terms of C(α), for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of comparison of interest. The derived power functions are well validated by simulation studies that show that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research designs or settings, in order to increase statistical power, development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation strutures reflecting more real world situations would be a valuable future study.

  4. Religion and psychosis: a common evolutionary trajectory?

    PubMed

    Dein, Simon; Littlewood, Roland

    2011-07-01

    In this article we propose that schizophrenia and religious cognition engage cognate mental modules in the over-attribution of agency and the overextension of theory of mind. We argue similarities and differences between assumptions of ultrahuman agents with omniscient minds and certain ''pathological'' forms of thinking in schizophrenia: thought insertion, withdrawal and broadcasting, and delusions of reference. In everyday religious cognition agency detection and theory of mind modules function ''normally,'' whereas in schizophrenia both modules are impaired. It is suggested that religion and schizophrenia have perhaps had a related evolutionary trajectory.

  5. Blade Tip Rubbing Stress Prediction

    NASA Technical Reports Server (NTRS)

    Davis, Gary A.; Clough, Ray C.

    1991-01-01

    An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. This model used a linearized approach to the problem, after a parametric study, found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to exactly specify some of these parameters, values were entered into the model which bracket likely values. The form of the forcing function was another variable which was impossible to specify precisely, but the assumption of a half-sine wave with a period equal to the duration of the rub was taken as a realistic assumption. The analytical model predicted resonances between harmonics of the forcing function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. A full-scale, full-speed test conducted on the space shuttle main engine high pressure fuel turbopump Whirligig tester was conducted at speeds between 33,000 and 28,000 RPM to confirm analytical predictions.

  6. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorite results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  7. Modeling the effects of AADT on predicting multiple-vehicle crashes at urban and suburban signalized intersections.

    PubMed

    Chen, Chen; Xie, Yuanchang

    2016-06-01

    Annual Average Daily Traffic (AADT) is often considered as a main covariate for predicting crash frequencies at urban and suburban intersections. A linear functional form is typically assumed for the Safety Performance Function (SPF) to describe the relationship between the natural logarithm of expected crash frequency and covariates derived from AADTs. Such a linearity assumption has been questioned by many researchers. This study applies Generalized Additive Models (GAMs) and Piecewise Linear Negative Binomial (PLNB) regression models to fit intersection crash data. Various covariates derived from minor-and major-approach AADTs are considered. Three different dependent variables are modeled, which are total multiple-vehicle crashes, rear-end crashes, and angle crashes. The modeling results suggest that a nonlinear functional form may be more appropriate. Also, the results show that it is important to take into consideration the joint safety effects of multiple covariates. Additionally, it is found that the ratio of minor to major-approach AADT has a varying impact on intersection safety and deserves further investigations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Motions about a fixed point by hypergeometric functions: new non-complex analytical solutions and integration of the herpolhode

    NASA Astrophysics Data System (ADS)

    Mingari Scarpello, Giovanni; Ritelli, Daniele

    2018-06-01

    The present study highlights the dynamics of a body moving about a fixed point and provides analytical closed form solutions. Firstly, for the symmetrical heavy body, that is the Lagrange-Poisson case, we compute the second (precession, ψ ) and third (spin, φ) Euler angles in explicit and real form by means of multiple hypergeometric (Lauricella) functions. Secondly, releasing the weight assumption but adding the complication of the asymmetry, by means of elliptic integrals of third kind, we provide the precession angle ψ completing the treatment of the Euler-Poinsot case. Thirdly, by integrating the relevant differential equation, we reach the finite polar equation of a special motion trajectory named the herpolhode. Finally, we keep the symmetry of the first problem, but without weight, and take into account a viscous dissipation. The use of motion first integrals—adopted for the first two problems—is no longer practicable in this situation; therefore, the Euler equations, faced directly, are driving to particular occurrences of Bessel functions of order - 1/2.

  9. The distribution of the zeros of the Hermite-Padé polynomials for a pair of functions forming a Nikishin system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhmanov, E A; Suetin, S P

    2013-09-30

    The distribution of the zeros of the Hermite-Padé polynomials of the first kind for a pair of functions with an arbitrary even number of common branch points lying on the real axis is investigated under the assumption that this pair of functions forms a generalized complex Nikishin system. It is proved (Theorem 1) that the zeros have a limiting distribution, which coincides with the equilibrium measure of a certain compact set having the S-property in a harmonic external field. The existence problem for S-compact sets is solved in Theorem 2. The main idea of the proof of Theorem 1 consists in replacing a vector equilibrium problem in potentialmore » theory by a scalar problem with an external field and then using the general Gonchar-Rakhmanov method, which was worked out in the solution of the '1/9'-conjecture. The relation of the result obtained here to some results and conjectures due to Nuttall is discussed. Bibliography: 51 titles.« less

  10. [Amplitude modulation in sound signals by mammals].

    PubMed

    Nikol'skiĭ, A A

    2012-01-01

    Periodic variations in amplitude of a signal, or amplitude modulation (AM), affect the structure of communicative messages spectrum. Within the spectrum of AM-signals, side frequencies are formed both above and below the carrier frequency that is subjected to modulation. In case of harmonic signal structure they are presented near fundamental frequency as well as near harmonics. Thus, AM may by viewed as a relatively simple mechanism for controlling the spectrum of messages transmitted by mammals. Examples of AM affecting the spectrum structure of functionally different sound signals are discussed as applied to representatives of four orders of mammals: rodents (Reodentia), duplicidentates (Lagomorpha), pinnipeds (Pinnipedia), and paridigitates (Artiodactia). For the first time, the classification of AM in animals' sound signals is given. Five forms of AM are picked out in sound signals by mammals: absence of AM, continuous AM, fragmented, heterogeneous, and multilevel one. AM presence/absence is related neither with belonging to any specific order nor with some particular function of a signal. Similar forms of AM can occur in different orders of mammals in parallel. On the contrary, different forms of AM can be detected in signals meant for similar functions. The assumption is made about AM-signals facilitating information encoding and jamprotection of messages transmitted by mammals. Preliminry analysis indicates that hard-driving amplitude modulation is incompatible with hard-driving frequency modulation.

  11. Impact buckling of thin bars in the elastic range for any end condition

    NASA Technical Reports Server (NTRS)

    Taub, Josef

    1934-01-01

    Following a qualitative discussion of the complicated process involved in a short-period, longitudinal force applied to an originally not quite straight bar, the actual process is substituted by an idealized process for the purpose of analytical treatment. The simplifications are: the assumption of an infinitely high rate of propagation of the elastic longitudinal waves in the bar, limitation to slender bars, disregard of material damping and of rotatory inertia, the assumption of consistently small elastic deformations, the assumption of cross-sectional dimensions constant along the bar axis, the assumption of a shock-load constant in time, and the assumption of eccentricities on one plane. Then follow the mathematical principles for resolving the differential equation of the simplified problem, particularly the developability of arbitrary functions with steady first and second and intermittently steady third and fourth derivatives into one convergent series, according to the natural functions of the homogeneous differential equation.

  12. 20 CFR 404.1690 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 404.1690 Section 404.1690 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD... responsibility for performing the disability determination function from the State agency, whether the assumption...

  13. 20 CFR 416.1090 - Assumption when we make a finding of substantial failure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Assumption when we make a finding of substantial failure. 416.1090 Section 416.1090 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL... responsibility for performing the disability determination function from the State agency, whether the assumption...

  14. A potential functional association between mutant BMPR2 and primary ovarian insufficiency.

    PubMed

    Patiño, Liliana Catherine; Silgado, Daniel; Laissue, Paul

    2017-06-01

    Primary ovarian insufficiency (POI) affects ~1% of women in the general population. Despite numerous attempts at identifying POI genetic aetiology, coding mutations in only a few genes have been functionally related to POI pathogenesis. It has been suggested that mutant BMPR2 might contribute towards the phenotype. Several BMP15 (a BMPR2 ligand) coding mutations in human species have been related to POI pathogenesis. The BMPR2 p.Ser987Phe mutation, previously identified in a woman with POI, might therefore lead to cellular dysfunction contributing to the phenotype. To explore such an assumption, the present study assessed potential pathogenic subcellular localization/aggregation patterns associated with the p.Ser987Phe mutant form of BMPR2 in a relevant model for studying ovarian function. A significant increase in protein-like aggregation patterns was identified at the endoplasmic reticulum (ER) which permitted us to establish, for the first time, a potential functional association between mutant BMPR2 and POI aetiology. Since BMPR2 mutant forms were previously related to idiopathic pulmonary arterial hypertension, BMPR2 mutations may be related to an as-yet-to-be described syndromic form of POI involving pulmonary dysfunction. Additional assays are necessary to confirm that BMPR2 abnormal subcellular patterns are composed by aggregates. POI: primary ovarian insufficiency; ER: endoplasmic reticulum; NGS: next generation sequencing.

  15. Developing animals flout prominent assumptions of ecological physiology.

    PubMed

    Burggren, Warren W

    2005-08-01

    Every field of biology has its assumptions, but when they grow to be dogma, they can become constraining. This essay presents data-based challenges to several prominent assumptions of developmental physiologists. The ubiquity of allometry is such an assumption, yet animal development is characterized by rate changes that are counter to allometric predictions. Physiological complexity is assumed to increase with development, but examples are provided showing that complexity can be greatest at intermediate developmental stages. It is assumed that organs have functional equivalency in embryos and adults, yet embryonic structures can have quite different functions than inferred from adults. Another assumption challenged is the duality of neural control (typically sympathetic and parasympathetic), since one of these two regulatory mechanisms typically considerably precedes in development the appearance of the other. A final assumption challenged is the notion that divergent phylogeny creates divergent physiologies in embryos just as in adults, when in fact early in development disparate vertebrate taxa show great quantitative as well as qualitative similarity. Collectively, the inappropriateness of these prominent assumptions based on adult studies suggests that investigation of embryos, larvae and fetuses be conducted with appreciation for their potentially unique physiologies.

  16. An entropic framework for modeling economies

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel; Golan, Amos

    2014-08-01

    We develop an information-theoretic framework for economic modeling. This framework is based on principles of entropic inference that are designed for reasoning on the basis of incomplete information. We take the point of view of an external observer who has access to limited information about broad macroscopic economic features. We view this framework as complementary to more traditional methods. The economy is modeled as a collection of agents about whom we make no assumptions of rationality (in the sense of maximizing utility or profit). States of statistical equilibrium are introduced as those macrostates that maximize entropy subject to the relevant information codified into constraints. The basic assumption is that this information refers to supply and demand and is expressed in the form of the expected values of certain quantities (such as inputs, resources, goods, production functions, utility functions and budgets). The notion of economic entropy is introduced. It provides a measure of the uniformity of the distribution of goods and resources. It captures both the welfare state of the economy as well as the characteristics of the market (say, monopolistic, concentrated or competitive). Prices, which turn out to be the Lagrange multipliers, are endogenously generated by the economy. Further studies include the equilibrium between two economies and the conditions for stability. As an example, the case of the nonlinear economy that arises from linear production and utility functions is treated in some detail.

  17. Evolution of surface characteristics in material removal simulation with subaperture tools

    NASA Astrophysics Data System (ADS)

    Kim, Sug-Whan; Jee, Myung-Kook

    2002-02-01

    Over the last decade, we have witnessed that the fabrication of 200 - 2000 mm scale have received relatively little attention from the fabrication technology development, compared to those of smaller than 200 mm and of larger than 2000 mm in diameter. As a result, the optical surfaces of these scales are still predominantly completed by small optics shops where opticians apply the traditional technique for polishing. Lack of tools in aiding opticians for planning, executing and analyzing their polishing work is a root cause for long and, sometimes, unpredictable delivery and high manufacturing cost for such optical surfaces. We present the on-going development of a software simulation environment called Surface Analysis and Fabrication Environment (SAFE). It is primarily intended to increase the throughput of polishing and testing cycles by allowing opticians to simulate the resulting surface form and roughness with input polishing variables. A brief review of current polishing techniques and their target optics clarifies the need for such simulation tool. This is followed by the development targets and a preliminary simulation plan using the developmental version of SAFE. Among many polishing variables, two removal assumptions and three different types of removal functions we used for the polishing simulation presented. The simulations show that the Gaussian removal function with the proportional removal assumption resulted in the fastest, though marginal, convergence to a super-polished surface of 0.56 micron Peat- to-Valley in form accuracy and of 0.02 nanometer in surface roughness Ra. Other meaningful results and their implications are also presented.

  18. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of calibrated models are compared using the observed river discharges and groundwater levels. The result shows that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts on the calibrated parameters and the simulated results of high and low flow components. SWAT-WB-VSA with the NSE approach simulates flood well, but baseflow badly owing to the assumption of Gaussian error distribution, where the probability of the large error is low, but the small error around zero approximates equiprobability. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, which is proved in the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.

  19. A theory of biological relativity: no privileged level of causation.

    PubMed

    Noble, Denis

    2012-02-06

    Must higher level biological processes always be derivable from lower level data and mechanisms, as assumed by the idea that an organism is completely defined by its genome? Or are higher level properties necessarily also causes of lower level behaviour, involving actions and interactions both ways? This article uses modelling of the heart, and its experimental basis, to show that downward causation is necessary and that this form of causation can be represented as the influences of initial and boundary conditions on the solutions of the differential equations used to represent the lower level processes. These insights are then generalized. A priori, there is no privileged level of causation. The relations between this form of 'biological relativity' and forms of relativity in physics are discussed. Biological relativity can be seen as an extension of the relativity principle by avoiding the assumption that there is a privileged scale at which biological functions are determined.

  20. A theory of biological relativity: no privileged level of causation

    PubMed Central

    Noble, Denis

    2012-01-01

    Must higher level biological processes always be derivable from lower level data and mechanisms, as assumed by the idea that an organism is completely defined by its genome? Or are higher level properties necessarily also causes of lower level behaviour, involving actions and interactions both ways? This article uses modelling of the heart, and its experimental basis, to show that downward causation is necessary and that this form of causation can be represented as the influences of initial and boundary conditions on the solutions of the differential equations used to represent the lower level processes. These insights are then generalized. A priori, there is no privileged level of causation. The relations between this form of ‘biological relativity’ and forms of relativity in physics are discussed. Biological relativity can be seen as an extension of the relativity principle by avoiding the assumption that there is a privileged scale at which biological functions are determined. PMID:23386960

  1. Stability of equations with a distributed delay, monotone production and nonlinear mortality

    NASA Astrophysics Data System (ADS)

    Berezansky, Leonid; Braverman, Elena

    2013-10-01

    We consider population dynamics models dN/dt = f(N(tτ)) - d(N(t)) with an increasing fecundity function f and any mortality function d which can be quadratic, as in the logistic equation, or have a different form provided that the equation has at most one positive equilibrium. Here the delay in the production term can be distributed and unbounded. It is demonstrated that the positive equilibrium is globally attractive if it exists, otherwise all positive solutions tend to zero. Moreover, we demonstrate that solutions of the equation are intrinsically non-oscillatory: once the initial function is less/greater than the equilibrium K > 0, so is the solution for any positive time value. The assumptions on f, d and the delay are rather nonrestrictive, and several examples demonstrate that none of them can be omitted.

  2. The momentum of an electromagnetic wave inside a dielectric derived from the Snell refraction law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torchigin, V.P., E-mail: v_torchigin@mail.ru; Torchigin, A.V.

    2014-12-15

    Author of the paper [M. Testa, Ann. Physics 336 (2013) 1] has derived a conclusion that there is a connection between the Snell refraction law and the Abraham form of the momentum of light in matter. In other words, author derived the Snell law on assumption that the momentum of light in matter decreases by n times as compared with that in free space. The conclusion is derived under assumption that the forces exerted on an optical medium by an electromagnetic field do not distinguish between polarization and free charges. We show that, on the contrary, the Minkowski form ofmore » the momentum of light in matter directly follows from the Snell law. No previous assumption is required for this purpose.« less

  3. Robust discovery of genetic associations incorporating gene-environment interaction and independence.

    PubMed

    Tchetgen Tchetgen, Eric

    2011-03-01

    This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.

  4. Finding Every Root of a Broad Class of Real, Continuous Functions in a Given Interval

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.; Wolgast, Paul A.

    2011-01-01

    One of the most pervasive needs within the Deep Space Network (DSN) Metric Prediction Generator (MPG) view period event generation is that of finding solutions to given occurrence conditions. While the general form of an equation expresses equivalence between its left-hand and right-hand expressions, the traditional treatment of the subject subtracts the two sides, leaving an expression of the form Integral of(x) = 0. Values of the independent variable x satisfying this condition are roots, or solutions. Generally speaking, there may be no solutions, a unique solution, multiple solutions, or a continuum of solutions to a given equation. In particular, all view period events are modeled as zero crossings of various metrics; for example, the time at which the elevation of a spacecraft reaches its maximum value, as viewed from a Deep Space Station (DSS), is found by locating that point at which the derivative of the elevation function becomes zero. Moreover, each event type may have several occurrences within a given time interval of interest. For example, a spacecraft in a low Moon orbit will experience several possible occultations per day, each of which must be located in time. The MPG is charged with finding all specified event occurrences that take place within a given time interval (or pass ), without any special clues from operators as to when they may occur, for the entire spectrum of missions undertaken by the DSN. For each event type, the event metric function is a known form that can be computed for any instant within the interval. A method has been created for a mathematical root finder to be capable of finding all roots of an arbitrary continuous function, within a given interval, to be subject to very lenient, parameterized assumptions. One assumption is that adjacent roots are separated at least by a given amount, xGuard. Any point whose function value is less than ef in magnitude is considered to be a root, and the function values at distances xGuard away from a root are larger than ef, unless there is another root located in this vicinity. A root is considered found if, during iteration, two root candidates differ by less than a pre-specified ex, and the optimum cubic polynomial matching the function at the end and at two interval points (that is within a relative error fraction L at its midpoint) is reliable in indicating whether the function has extrema within the interval. The robustness of this method depends solely on choosing these four parameters that control the search. The roots of discontinuous functions were also found, but at degraded performance.

  5. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for themore » system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.« less

  6. Practicing Sociological Imagination through Writing Sociological Autobiography

    ERIC Educational Resources Information Center

    Kebede, Alem

    2009-01-01

    Sociological imagination is a quality of mind that cannot be adopted by simply teaching students its discursive assumptions. Rather, it is a disposition, in competition with other forms of sensibility, which can be acquired only when it is practiced. Adhering to this important pedagogical assumption, students were assigned to write their…

  7. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 1 2012-07-01 2009-07-01 true No assumption of validity. 60-3.9 Section 60-3.9 Public Contracts and Property Management Other Provisions Relating to Public... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  8. Identification of Extraterrestrial Microbiology

    NASA Technical Reports Server (NTRS)

    Flynn, Michael; Rasky, Daniel J. (Technical Monitor)

    1998-01-01

    Many of the key questions addressed in the field of Astrobiology are based upon the assumption that life exists, or at one time existed, in locations throughout the universe. However, this assumption is just that, an assumption. No definitive proof exists. On Earth, life has been found to exist in many diverse environment. We believe that this tendency towards diversity supports the assumption that life could exists throughout the universe. This paper provides a summary of several innovative techniques for the detection of extraterrestrial life forms. The primary questions addressed are does life currently exist beyond Earth and if it does, is that life evolutionary related to life on Earth?

  9. Coordinating a Supply Chain with Price and Advertisement Dependent Stochastic Demand

    PubMed Central

    Li, Liying; Wang, Yong; Yan, Xiaoming

    2013-01-01

    This paper investigates pricing and ordering as well as advertising coordination issues in a single-manufacturer single-retailer supply chain, where the manufacturer sells a newsvendor-type product through the retailer who faces a stochastic demand depending on both retail price and advertising expenditure. Under the assumption that the market demand has a multiplicative functional form, the Stackelberg and cooperative game models are developed, and the closed form solution to each model is provided as well. Comparisons and insights are presented. We show that a properly designed revenue-cost-sharing contract can achieve supply chain coordination and lead to a Pareto improving win-win situation for channel members. We also discuss the allocation of the extra joint profit according to individual supply chain members' risk preferences and negotiating powers. PMID:24453832

  10. Coordinating a supply chain with price and advertisement dependent stochastic demand.

    PubMed

    Li, Liying; Wang, Yong; Yan, Xiaoming

    2013-01-01

    This paper investigates pricing and ordering as well as advertising coordination issues in a single-manufacturer single-retailer supply chain, where the manufacturer sells a newsvendor-type product through the retailer who faces a stochastic demand depending on both retail price and advertising expenditure. Under the assumption that the market demand has a multiplicative functional form, the Stackelberg and cooperative game models are developed, and the closed form solution to each model is provided as well. Comparisons and insights are presented. We show that a properly designed revenue-cost-sharing contract can achieve supply chain coordination and lead to a Pareto improving win-win situation for channel members. We also discuss the allocation of the extra joint profit according to individual supply chain members' risk preferences and negotiating powers.

  11. Physically-based model of soil hydraulic properties accounting for variable contact angle and its effect on hysteresis

    NASA Astrophysics Data System (ADS)

    Diamantopoulos, Efstathios; Durner, Wolfgang

    2013-09-01

    The description of soil water movement in the unsaturated zone requires the knowledge of the soil hydraulic properties, i.e. the water retention and the hydraulic conductivity function. A great amount of parameterizations for this can be found in the literature, the majority of which represent the complex pore space of soils as a bundle of cylindrical capillary tubes of various sizes. The assumption of zero contact angles between water and surface of the grains is also made. However, these assumptions limit the predictive capabilities of these models, leading often to errors in the prediction of water dynamics in soils. We present a pore-scale analysis for equilibrium liquid configuration in angular pores taking pore-scale hysteresis and the effect of contact angle into account. Furthermore, we propose a derivation of the hydraulic conductivity function, again as a function of the contact angle. An additional parameter was added to the conductivity function in order take into account effects which are not included in the analysis. Finally, we upscale our model from the pore to the sample scale by assuming a gamma statistical distribution of the pore sizes. Closed-form expressions are derived for both water retention and conductivity functions. The new model was tested against experimental data from multistep inflow/outflow (MSI/MSO) experiments for a sandy material. They were conducted using ethanol and water as the wetting liquid. Ethanol was assumed to form a zero contact angle with the soil grains. By keeping constant the parameters fitted from the ethanol MSO experiment we could predict the ethanol MSI dynamics based on our theory. Furthermore, by keeping constant the pore size distribution parameters from the ethanol experiments we could also predict very well the water dynamics for the MSO experiment. Lastly, we could predict the imbibition dynamics for the water MSI experiment by introducing a finite value of the contact angle. Most importantly, the predictions for both ethanol and water MSI/MSO dynamics were made by assuming a unique pore-size distribution.

  12. Efficiency at maximum power output of linear irreversible Carnot-like heat engines.

    PubMed

    Wang, Yang; Tu, Z C

    2012-01-01

    The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η(mP)=η(C)/(2-γη(C)), where η(C) is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and two reservoirs. The value of η(mP) is bounded between η(-)≡η(C)/2 and η(+)≡η(C)/(2-η(C)). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of η(mP)=η(C)/(2-γη(C)) as well as the existence of two bounds, η(-)≡η(C)/2 and η(+)≡η(C)/(2-η(C)). © 2012 American Physical Society

  13. Efficiency at maximum power output of linear irreversible Carnot-like heat engines

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Tu, Z. C.

    2012-01-01

    The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each “isothermal” process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form ηmP=ηC/(2-γηC), where ηC is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and two reservoirs. The value of ηmP is bounded between η-≡ηC/2 and η+≡ηC/(2-ηC). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys.JCPSA60021-960610.1063/1.455832 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito [Phys. Rev. Lett.PRLTAO0031-900710.1103/PhysRevLett.105.150603 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett.EULEEJ0295-507510.1209/0295-5075/81/20003 81, 20003 (2008)] for stochastic heat engines which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of ηmP=ηC/(2-γηC) as well as the existence of two bounds, η-≡ηC/2 and η+≡ηC/(2-ηC).

  14. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    EPA Science Inventory

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in model assumptions, and model structure. Four models are commonly used to simu...

  15. Maternal serum alpha-fetoprotein (MSAFP) patient-specific risk reporting: its use and misuse.

    PubMed

    Macri, J N; Kasturi, R V; Krantz, D A; Cook, E J; Larsen, J W

    1990-03-01

    Fundamental to maternal serum alpha-fetoprotein screening is the clinical utility of the laboratory report. It follows that the scientific form of expression in that report is vital. Professional societies concur that patient-specific risk reporting is the preferred form. However, some intermediate steps being taken to calculate patient-specific risks are invalid because of the erroneous assumption that multiples of the median (MoMs) represent an interlaboratory common currency. The numerous methods by which MoMs may be calculated belie the foregoing assumption.

  16. Strange and Charge Symmetry Violating Electromagnetic Form Factors of the Nucleon

    NASA Astrophysics Data System (ADS)

    Shanahan, P. E.

    We summarise recent work based on lattice QCD simulations of the electromagnetic form factors of the octet baryons from the CSSM/QCDSF/UKQCD collaborations. After an analysis of the simulation results using techniques to approach the infinite volume limit and the physical pseudoscalar masses at non-zero momentum transfer, the extrapolated proton and neutron form factors are found to be in excellent agreement with those extracted from experiment. Given the success of these calculations, we describe how the strange electromagnetic form factors may be estimated from these results under the same assumption of charge symmetry used in experimental determinations of those quantities. Motivated by the necessity of that assumption, we explore a method for determining the size of charge symmetry breaking effects using the same lattice results.

  17. An upper bound on the radius of a highly electrically conducting lunar core

    NASA Technical Reports Server (NTRS)

    Hobbs, B. A.; Hood, L. L.; Herbert, F.; Sonett, C. P.

    1983-01-01

    Parker's (1980) nonlinear inverse theory for the electromagnetic sounding problem is converted to a form suitable for analysis of lunar day-side transfer function data by: (1) transforming the solution in plane geometry to that in spherical geometry; and (2) transforming the theoretical lunar transfer function in the dipole limit to an apparent resistivity function. The theory is applied to the revised lunar transfer function data set of Hood et al. (1982), which extends in frequency from 10 to the -5th to 10 to the -3rd Hz. On the assumption that an iron-rich lunar core, whether molten or solid, can be represented by a perfect conductor at the minimum sampled frequency, an upper bound of 435 km on the maximum radius of such a core is calculated. This bound is somewhat larger than values of 360-375 km previously estimated from the same data set via forward model calculations because the prior work did not consider all possible mantle conductivity functions.

  18. Evolution of enzymes in a series is driven by dissimilar functional demands.

    PubMed

    Salvador, Armindo; Savageau, Michael A

    2006-02-14

    That distinct enzyme activities in an unbranched metabolic pathway are evolutionarily tuned to a single functional requirement is a pervasive assumption. Here we test this assumption by examining the activities of two consecutively acting enzymes in human erythrocytes with an approach to quantitative evolutionary design that avoids the above-mentioned assumption. We previously found that avoidance of NADPH depletion during the pulses of oxidative load to which erythrocytes are normally exposed is the main functional requirement mediating selection for high glucose-6-phosphate dehydrogenase activity. In the present study, we find that, in contrast, the maintenance of oxidized glutathione at low concentrations is the main functional requirement mediating selection for high glutathione reductase activity. The results in this case show that, contrary to the assumption of a single functional requirement, natural selection for the normal activities of the distinct enzymes in the pathway is mediated by different requirements. On the other hand, the results agree with the more general principles that underlie our approach. Namely, that (i) the values of biochemical parameters evolve so as to fulfill the various performance requirements that are relevant to achieve high fitness, and (ii) these performance requirements can be inferred from quantitative systems theory considerations, informed by knowledge of specific aspects of the biochemistry, physiology, genetics, and ecology of the organism.

  19. Does muscle creatine phosphokinase have access to the total pool of phosphocreatine plus creatine?

    PubMed

    Hochachka, P W; Mossey, M K

    1998-03-01

    Two fundamental assumptions underlie currently accepted dogma on creatine phosphokinase (CPK) function in phosphagen-containing cells: 1) CPK always operates near equilibrium and 2) CPK has access to, and reacts with, the entire pool of phosphocreatine (PCr) and creatine (Cr). We tested the latter assumption in fish fast-twitch or white muscle (WM) by introducing [14C]Cr into the WM pool in vivo. To avoid complications arising from working with muscles formed from a mixture of fast and slow fibers, it was advantageous to work with fish WM because it is uniformly fast twitch and is anatomically separated from other fiber types. According to current theory, at steady state after [14C]Cr administration, the specific activities of PCr and Cr should be the same under essentially all conditions. In contrast, we found that, in various metabolic states between rest and recovery from exercise, the specific activity of PCr greatly exceeds that of Cr. The data imply that a significant fraction of Cr is not free to rapidly exchange with exogenously added [14C]Cr. Releasing of this unlabeled or "cold" Cr on acid extraction accounts for lowered specific activities. This unexpected and provocative result is not consistent with traditional models of phosphagen function.

  20. A Phase-Space Approach to Collisionless Stellar Systems Using a Particle Method

    NASA Astrophysics Data System (ADS)

    Hozumi, Shunsuke

    1997-10-01

    A particle method for reproducing the phase space of collisionless stellar systems is described. The key idea originates in Liouville's theorem, which states that the distribution function (DF) at time t can be derived from tracing necessary orbits back to t = 0. To make this procedure feasible, a self-consistent field (SCF) method for solving Poisson's equation is adopted to compute the orbits of arbitrary stars. As an example, for the violent relaxation of a uniform density sphere, the phase-space evolution generated by the current method is compared to that obtained with a phase-space method for integrating the collisionless Boltzmann equation, on the assumption of spherical symmetry. Excellent agreement is found between the two methods if an optimal basis set for the SCF technique is chosen. Since this reproduction method requires only the functional form of initial DFs and does not require any assumptions to be made about the symmetry of the system, success in reproducing the phase-space evolution implies that there would be no need of directly solving the collisionless Boltzmann equation in order to access phase space even for systems without any special symmetries. The effects of basis sets used in SCF simulations on the reproduced phase space are also discussed.

  1. Pendulum Motion and Differential Equations

    ERIC Educational Resources Information Center

    Reid, Thomas F.; King, Stephen C.

    2009-01-01

    A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…

  2. Didactics and History of Mathematics: Knowledge and Self-Knowledge

    ERIC Educational Resources Information Center

    Fried, Michael N.

    2007-01-01

    The basic assumption of this paper is that mathematics and history of mathematics are both forms of knowledge and, therefore, represent different ways of knowing. This was also the basic assumption of Fried (2001) who maintained that these ways of knowing imply different conceptual and methodological commitments, which, in turn, lead to a conflict…

  3. An intelligent knowledge mining model for kidney cancer using rough set theory.

    PubMed

    Durai, M A Saleem; Acharjya, D P; Kannan, A; Iyengar, N Ch Sriman Narayana

    2012-01-01

    Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. The rough set approach has two major advantages over other methods. First, it can handle different types of data, such as categorical and numerical data. Second, it does not require assumptions such as a probability distribution function (as in stochastic modeling) or a membership grade function (as in fuzzy set theory). It involves pattern recognition through logical computational rules rather than approximation through smooth mathematical functional forms. In this paper we use rough set theory as a data mining tool to derive useful patterns and rules for kidney cancer diagnosis. In particular, historical data from twenty-five research hospitals and medical colleges are used for validation, and the results show the practical viability of the proposed approach.
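
    A minimal sketch of the rough-set idea referenced above, written in Python: cases are grouped into indiscernibility classes by their symptom values, and the lower/upper approximations of a decision class yield certain versus possible diagnostic rules. The records and attribute names below are invented for illustration; they are not the hospital data used in the paper.

      from collections import defaultdict

      # Toy decision table: (condition attributes, decision). Purely illustrative.
      records = [
          ({"pain": "high", "blood": "yes"}, "cancer"),
          ({"pain": "high", "blood": "yes"}, "cancer"),
          ({"pain": "low",  "blood": "yes"}, "cancer"),
          ({"pain": "low",  "blood": "yes"}, "healthy"),
          ({"pain": "low",  "blood": "no"},  "healthy"),
      ]

      # Indiscernibility classes: cases that share all condition-attribute values.
      classes = defaultdict(list)
      for attrs, decision in records:
          classes[tuple(sorted(attrs.items()))].append(decision)

      # Lower approximation -> certain rules; upper approximation -> possible rules.
      lower = [k for k, ds in classes.items() if all(d == "cancer" for d in ds)]
      upper = [k for k, ds in classes.items() if any(d == "cancer" for d in ds)]
      print("Certainly cancer:", lower)
      print("Possibly cancer: ", upper)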

  4. Diffusion of test particles in stochastic magnetic fields for small Kubo numbers.

    PubMed

    Neuer, Marcus; Spatschek, Karl H

    2006-02-01

    Motion of charged particles in a collisional plasma with stochastic magnetic field lines is investigated on the basis of the so-called A-Langevin equation. Compared to the previously used A-Langevin model, here finite Larmor radius effects are taken into account. The A-Langevin equation is solved under the assumption that the Lagrangian correlation function for the magnetic field fluctuations is related to the Eulerian correlation function (in Gaussian form) via the Corrsin approximation. The latter is justified for small Kubo numbers. The velocity correlation function, being averaged with respect to the stochastic variables including collisions, leads to an implicit differential equation for the mean square displacement. From the latter, different transport regimes, including the well-known Rechester-Rosenbluth diffusion coefficient, are derived. Finite Larmor radius contributions show a decrease of the diffusion coefficient compared to the guiding center limit. The case of small (or vanishing) mean fields is also discussed.

  5. A Unimodal Model for Double Observer Distance Sampling Surveys.

    PubMed

    Becker, Earl F; Christ, Aaron M

    2015-01-01

    Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, under which the two observers are assumed independent at a single distance, the apex of the detection function. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
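
    As a rough illustration of the kind of detection function described above, the sketch below (Python) evaluates a unimodal two-piece normal curve with a single apex; the parameterization and numbers are illustrative assumptions, not the fitted survey model.

      import numpy as np

      def two_piece_normal(d, apex, sigma_near, sigma_far):
          """Unimodal detection curve with a single apex: detection is highest at
          `apex` and falls off with different spreads on either side. This
          parameterization is illustrative, not the paper's fitted model."""
          d = np.asarray(d, dtype=float)
          sigma = np.where(d < apex, sigma_near, sigma_far)
          return np.exp(-0.5 * ((d - apex) / sigma) ** 2)

      # Hypothetical aerial survey: detection peaks ~80 m out because the strip
      # directly under the aircraft is hard to see.
      distances = np.linspace(0.0, 400.0, 5)
      print(two_piece_normal(distances, apex=80.0, sigma_near=40.0, sigma_far=150.0))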

  6. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem. For a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
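
    The quadratic-cost formulation described above can be illustrated with a small numerical sketch: with wiring cost proportional to squared wire length, total cost is a quadratic form in the neuron positions, and under simple illustrative constraints (zero mean, unit norm) the minimizer is a low eigenvector of the graph Laplacian of the connectivity matrix. The connectivity and constraints below are hypothetical, not those of the paper.

      import numpy as np

      # Hypothetical symmetric connectivity matrix C (C_ij = number of wires between i and j).
      C = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

      # Total wiring cost for 1-D positions x: sum_ij C_ij (x_i - x_j)^2 = 2 x^T L x.
      L = np.diag(C.sum(axis=1)) - C

      # Under zero-mean / unit-norm constraints, the minimizer is the eigenvector of L
      # with the smallest nonzero eigenvalue (the constant vector is the trivial mode).
      eigvals, eigvecs = np.linalg.eigh(L)
      layout = eigvecs[:, 1]
      print("1-D layout minimizing quadratic wiring cost:", layout)
      print("cost:", layout @ L @ layout)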

  7. Semileptonic decays of Λ _c baryons in the relativistic quark model

    NASA Astrophysics Data System (ADS)

    Faustov, R. N.; Galkin, V. O.

    2016-11-01

    Motivated by recent experimental progress in studying weak decays of the Λ _c baryon we investigate its semileptonic decays in the framework of the relativistic quark model based on the quasipotential approach with the QCD-motivated potential. The form factors of the Λ _c→ Λ lν _l and Λ _c→ nlν _l decays are calculated in the whole accessible kinematical region without extrapolations and additional model assumptions. Relativistic effects are systematically taken into account including transformations of baryon wave functions from the rest to moving reference frame and contributions of the intermediate negative-energy states. Baryon wave functions found in the previous mass spectrum calculations are used for the numerical evaluation. Comprehensive predictions for decay rates, asymmetries and polarization parameters are given. They agree well with available experimental data.

  8. Brownian motion and thermophoresis effects on Peristaltic slip flow of a MHD nanofluid in a symmetric/asymmetric channel

    NASA Astrophysics Data System (ADS)

    Sucharitha, G.; Sreenadh, S.; Lakshminarayana, P.; Sushma, K.

    2017-11-01

    The slip and heat transfer effects on MHD peristaltic transport of a nanofluid in a non-uniform symmetric/asymmetric channel have been studied under the assumptions of long wavelength and negligible Reynolds number. From the simplified governing equations, closed-form solutions for velocity, stream function, temperature and concentration are obtained. Dual solutions are also discussed for the symmetric and asymmetric channel cases. The effects of important physical parameters are explained graphically. The slip parameter decreases the fluid velocity in the middle of the channel whereas it increases the velocity at the channel walls. Temperature and concentration are decreasing and increasing functions of the radiation parameter, respectively. Moreover, velocity, temperature and concentration are higher in the symmetric channel when compared with the asymmetric channel.

  9. Emotional neglect in childhood shapes social dysfunctioning in adults by influencing the oxytocin and the attachment system: Results from a population-based study.

    PubMed

    Müller, Laura E; Bertsch, Katja; Bülau, Konstatin; Herpertz, Sabine C; Buchheim, Anna

    2018-06-01

    Early life maltreatment (ELM) is the major single risk factor for impairments in social functioning and mental health in adulthood. One of the most prevalent and most rapidly increasing forms of ELM is emotional neglect. According to bio-behavioral synchrony assumptions, the oxytocin and attachment systems play an important mediating role in the interplay between emotional neglect and social dysfunctioning. Therefore, the aim of the present study was to investigate whether fear and avoidance of social situations, two important and highly prevalent facets of social dysfunctioning in adulthood, are shaped by emotional neglect, plasma oxytocin levels and attachment representations. We assessed emotional neglect as well as other forms of ELM with the Childhood Trauma Questionnaire, current attachment representations with the Adult Attachment Projective Picture System, and fear and avoidance of social situations with the Liebowitz Social Anxiety Scale in a population-based sample of N = 121 men and women. Furthermore, 4.9 ml blood samples were drawn from each participant to assess peripheral plasma oxytocin levels. Applying a sequential mediation model, results revealed that emotional neglect was associated with lower plasma oxytocin levels, which in turn were associated with insecure attachment representations, which in turn were related to elevated fear and avoidance of social situations (a1d21b2: F(3,117) = 20.84, P < .001). Plasma oxytocin and current attachment representations hence fully and sequentially mediate the effects of emotional neglect on social fear and avoidance, two important facets of adult social dysfunctioning, confirming bio-behavioral synchrony assumptions. Copyright © 2018. Published by Elsevier B.V.

  10. Wire array Z-pinch insights for enhanced x-ray production

    NASA Astrophysics Data System (ADS)

    Sanford, T. W. L.; Mock, R. C.; Spielman, R. B.; Haines, M. G.; Chittenden, J. P.; Whitney, K. G.; Apruzese, J. P.; Peterson, D. L.; Greenly, J. B.; Sinars, D. B.; Reisman, D. B.; Mosher, D.

    1999-05-01

    Comparisons of measured total radiated x-ray power from annular wire-array z-pinches with a variety of models as a function of wire number, array mass, and load radius are reviewed. The data, which are comprehensive, have provided important insights into the features of wire-array dynamics that are critical for high x-ray power generation. Collectively, the comparisons of the data with the model calculations suggest that a number of underlying dynamical mechanisms involving cylindrical asymmetries and plasma instabilities contribute to the measured characteristics. For example, under the general assumption that the measured risetime of the total-radiated-power pulse is related to the thickness of the plasma shell formed on axis, the Heuristic Model [IEEE Trans. Plasma Sci. 26, 1275 (1998)] agrees with the measured risetime under a number of specific assumptions about the way the breakdown of the wires, the wire-plasma expansion, and the Rayleigh-Taylor instability in the r-z plane, develop. Likewise, in the high wire-number regime (where the wires are calculated to form a plasma shell prior to significant radial motion of the shell) the comparisons show that the variation in the power of the radiation generated as a function of load mass and array radius can be simulated by the two-dimensional Eulerian-radiation- magnetohydrodynamics code (E-RMHC) [Phys. Plasmas 3, 368 (1996)], using a single random-density perturbation that seeds the Rayleigh-Taylor instability in the r-z plane. For a given pulse-power generator, the comparisons suggest that (1) the smallest interwire gaps compatible with practical load construction and (2) the minimum implosion time consistent with the optimum required energy coupling of the generator to the load should produce the highest total-radiated-power levels.

  11. Teaching Critical Literacy across the Curriculum in Multimedia America.

    ERIC Educational Resources Information Center

    Semali, Ladislaus M.

    The teaching of media texts as a form of textual construction is embedded in the assumption that audiences bring individual preexisting dispositions even though the media may contribute to their shaping of basic attitudes, beliefs, values, and behavior. As summed up by D. Lusted, at the core of such textual construction are basic assumptions that…

  12. Data reduction of room tests for zone model validation

    Treesearch

    M. Janssens; H. C. Tran

    1992-01-01

    Compartment fire zone models are based on many simplifying assumptions, in particular that gases stratify in two distinct layers. Because of these assumptions, certain model output is in a form unsuitable for direct comparison to measurements made in full-scale room tests. The experimental data must first be reduced and transformed to be compatible with the model...

  13. Drying Affects the Fiber Network in Low Molecular Weight Hydrogels

    PubMed Central

    2017-01-01

    Low molecular weight gels are formed by the self-assembly of a suitable small molecule gelator into a three-dimensional network of fibrous structures. The gel properties are determined by the fiber structures, the number and type of cross-links and the distribution of the fibers and cross-links in space. Probing these structures and cross-links is difficult. Many reports rely on microscopy of dried gels (xerogels), where the solvent is removed prior to imaging. The assumption is made that this has little effect on the structures, but it is not clear that this assumption is always (or ever) valid. Here, we use small angle neutron scattering (SANS) to probe low molecular weight hydrogels formed by the self-assembly of dipeptides. We compare scattering data for wet and dried gels, as well as following the drying process. We show that the assumption that drying does not affect the network is not always correct. PMID:28631478

  14. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

    To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Semi-parametric regression model for survival data: graphical visualization with R

    PubMed Central

    2016-01-01

    The Cox proportional hazards model is a semi-parametric model that leaves its baseline hazard function unspecified. The rationale for using the Cox proportional hazards model is that (I) imposing a specific parametric form on the underlying hazard function is stringent and often unrealistic, and (II) researchers are typically interested only in how the hazard changes with covariates (the relative hazard). A Cox regression model can be easily fit with the coxph() function in the survival package. A stratified Cox model may be used for a covariate that violates the proportional hazards assumption. The relative importance of covariates in the population can be examined with the rankhazard package in R. Hazard ratio curves for continuous covariates can be visualized using the smoothHR package. Such a curve helps to better understand the effect that each continuous covariate has on the outcome. Population attributable fraction is a classic quantity in epidemiology used to evaluate the impact of a risk factor on the occurrence of an event in the population. In survival analysis, the adjusted/unadjusted attributable fraction can be plotted against survival time to obtain the attributable fraction function. PMID:28090517
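
    The abstract describes R tooling (survival, rankhazard, smoothHR); a roughly equivalent minimal sketch in Python is shown below using the third-party lifelines package (assumed installed), with invented data. It fits a Cox model and checks the proportional-hazards assumption.

      import pandas as pd
      from lifelines import CoxPHFitter  # third-party package; assumed available

      # Invented follow-up data: time to event, event indicator, one covariate.
      df = pd.DataFrame({
          "time":  [5, 8, 12, 3, 9, 15, 7, 11],
          "event": [1, 1, 0, 1, 0, 1, 1, 0],
          "age":   [52, 60, 45, 70, 49, 55, 63, 58],
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      cph.print_summary()        # hazard ratio (relative hazard) for each covariate
      cph.check_assumptions(df)  # diagnostics for the proportional-hazards assumption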

  16. 77 FR 11481 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-27

    ... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the... form FNS 698, Profile of Integrity Practices and Procedures; FNS 699, the Integrity Profile Report Form...

  17. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

    The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.

  18. STRUCTURAL DYNAMICS OF METAL PARTITIONING TO MINERAL SURFACES

    EPA Science Inventory

    The conceptual understanding of surface complexation reactions that control trace element partitioning to mineral surfaces is limited by the assumption that the solid reactant possesses a finite, time-invariant population of surface functional groups. This assumption has limited...

  19. Construction of diabatic energy surfaces for LiFH with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Guan, Yafu; Fu, Bina; Zhang, Dong H.

    2017-12-01

    A new set of diabatic potential energy surfaces (PESs) for LiFH is constructed with artificial neural networks (NNs). The adiabatic PESs of the ground state and the first excited state are directly fitted with NNs. Meanwhile, the adiabatic-to-diabatic transformation (ADT) angles (mixing angles) are obtained by simultaneously fitting energy difference and interstate coupling gradients. No prior assumptions of the functional form of ADT angles are used before fitting, and the ab initio data including energy difference and interstate coupling gradients are well reproduced. Converged dynamical results show remarkable differences between adiabatic and diabatic PESs, which suggests the significance of non-adiabatic processes.
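
    As a schematic of the fitting step described above (not the paper's actual network architecture or LiFH data), the sketch below trains a small feed-forward regressor on a synthetic two-coordinate "energy surface" using scikit-learn.

      import numpy as np
      from sklearn.neural_network import MLPRegressor  # stand-in for the paper's NN

      # Synthetic "ab initio" data: two illustrative coordinates and a toy energy.
      rng = np.random.default_rng(0)
      geom = rng.uniform(0.8, 3.0, size=(500, 2))
      energy = np.sin(geom[:, 0]) * np.exp(-geom[:, 1])

      nn = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=5000, random_state=0)
      nn.fit(geom, energy)
      rmse = np.sqrt(np.mean((nn.predict(geom) - energy) ** 2))
      print(f"training RMSE of the fitted surface: {rmse:.4f}")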

  20. Model of bidirectional reflectance distribution function for metallic materials

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun

    2016-09-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
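
    A schematic of the three-component decomposition, in Python. The specific lobe shapes and weights below are placeholders chosen only to show the structure (specular + directional diffuse + ideal diffuse); they are not the fitted forms or coefficients of the paper's BRDF model.

      import numpy as np

      def brdf_three_component(theta_i, theta_r,
                               k_spec=0.6, k_dd=0.3, k_id=0.1,
                               sigma_spec=0.05, sigma_dd=0.4):
          """Three-component BRDF sketch (all forms/weights illustrative):
          a narrow specular lobe plus a broader directional-diffuse lobe, both
          centered on the mirror direction, plus an angle-independent ideal-diffuse
          term."""
          dtheta = theta_r - theta_i
          specular = np.exp(-dtheta**2 / (2 * sigma_spec**2))
          directional = np.exp(-dtheta**2 / (2 * sigma_dd**2))
          ideal = 1.0
          return k_spec * specular + k_dd * directional + k_id * ideal

      view_angles = np.radians(np.linspace(0, 80, 5))
      print(brdf_three_component(theta_i=np.radians(30.0), theta_r=view_angles))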

  1. Calculated viscosity-distance dependence for some actively flowing lavas

    NASA Technical Reports Server (NTRS)

    Pieri, David

    1987-01-01

    The importance of viscosity as a gauge of the various energy and momentum dissipation regimes of lava flows has been realized for a long time. Nevertheless, despite its central role in lava dynamics and kinematics, it remains among the most difficult of flow physical properties to measure in situ during an eruption. Attempts at reconstructing the actual emplacement viscosities of lava flows from their solidified topographic form are difficult. Where data are available on the position of an advancing flow front as a function of time, it is possible to calculate the effective viscosity of the front as a function of distance from the vent, under the assumptions of a steady state regime. As an application and test of an equation given, relevant parameters from five recent flows on Mauna Loa and Kilauea were utilized to infer the dynamic structure of their aggregate flow front viscosity as they advanced, up to cessation. The observed form of the viscosity-distance relation for the five active Hawaiian flows examined appears to be exponential, with a rapid increase just before the flows stopped as one would expect.

  2. Tolerable soil erosion in Europe

    NASA Astrophysics Data System (ADS)

    Verheijen, Frank; Jones, Bob; Rickson, Jane; Smith, Celina

    2010-05-01

    Soil loss by erosion has been identified as an important threat to soils in Europe* and is recognised as a contributing process to soil degradation and associated deterioration, or loss, of soil functioning. From a policy perspective, it is imperative to establish well-defined baseline values to evaluate soil erosion monitoring data against. For this purpose, accurate baseline values - i.e. tolerable soil loss - need to be differentiated at appropriate scales for monitoring and, ideally, should take soil functions and even changing environmental conditions into account. The concept of tolerable soil erosion has been interpreted in the scientific literature in two ways: i) maintaining the dynamic equilibrium of soil quantity, and ii) maintaining biomass production, at a location. The first interpretation ignores soil quality by focusing only on soil quantity. The second approach ignores many soil functions by focusing only on the biomass (particularly crop) production function of soil. Considering recognised soil functions, tolerable soil erosion may be defined as 'any mean annual cumulative (all erosion types combined) soil erosion rate at which a deterioration or loss of one or more soil functions does not occur'. Assumptions and problems of this definition will be discussed. Soil functions can generally be judged not to deteriorate as long as soil erosion does not exceed soil formation. At present, this assumption remains largely untested, but applying the precautionary principle appears to be a reasonable starting point. Considering soil formation rates by both weathering and dust deposition, it is estimated that for the majority of soil forming factors in most European situations, soil formation rates probably range from ca. 0.3 - 1.4 t ha-1 yr-1. Although the current agreement on these values seems relatively strong, how the variation within the range is spatially distributed across Europe and how this may be affected by climate, land use and land management change in the future remains largely unexplored. * http://ec.europa.eu/environment/soil/pdf/com_2006_0231_en.pdf

  3. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  4. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    In order to test the error introduced by the assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of these assumptions from the standard basic model is calculated.

  5. In immune defense: redefining the role of the immune system in chronic disease.

    PubMed

    Rubinow, Katya B; Rubinow, David R

    2017-03-01

    The recognition of altered immune system function in many chronic disease states has proven to be a pivotal advance in biomedical research over the past decade. For many metabolic and mood disorders, this altered immune activity has been characterized as inflammation, with the attendant assumption that the immune response is aberrant. However, accumulating evidence challenges this assumption and suggests that the immune system may be mounting adaptive responses to chronic stressors. Further, the inordinate complexity of immune function renders a simplistic, binary model incapable of capturing critical mechanistic insights. In this perspective article, we propose alternative paradigms for understanding the role of the immune system in chronic disease. By invoking allostasis or systems biology rather than inflammation, we can ascribe greater functional significance to immune mediators, gain newfound appreciation of the adaptive facets of altered immune activity, and better avoid the potentially disastrous effects of translating erroneous assumptions into novel therapeutic strategies.

  6. Electromagnetic reflection from multi-layered snow models

    NASA Technical Reports Server (NTRS)

    Linlor, W. I.; Jiracek, G. R.

    1975-01-01

    The remote sensing of snow-pack characteristics with surface installations or an airborne system could have important applications in water-resource management and flood prediction. To derive some insight into such applications, the electromagnetic response of multilayered snow models is analyzed in this paper. Normally incident plane waves at frequencies ranging from 1 MHz to 10 GHz are assumed, and amplitude reflection coefficients are calculated for models having various snow-layer combinations, including ice layers. Layers are defined by thickness, permittivity, and conductivity; the electrical parameters are constant or prescribed functions of frequency. To illustrate the effect of various layering combinations, results are given in the form of curves of amplitude reflection coefficients versus frequency for a variety of models. Under simplifying assumptions, the snow thickness and effective dielectric constant can be estimated from the variations of reflection coefficient as a function of frequency.
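
    The layered-media calculation described above can be sketched with the standard characteristic-matrix method for normally incident plane waves; the snow and ice indices below are rough illustrative values, not the paper's model parameters.

      import numpy as np

      def reflection_coefficient(freq_hz, layers, n_incident=1.0, n_substrate=1.0):
          """Amplitude reflection coefficient of a layered stack for a normally
          incident plane wave, via the characteristic-matrix method. `layers` is a
          list of (refractive_index, thickness_m) pairs; complex indices can
          represent lossy (conductive) layers."""
          c = 299792458.0
          k0 = 2 * np.pi * freq_hz / c
          M = np.eye(2, dtype=complex)
          for n, d in layers:
              delta = k0 * n * d
              layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                [1j * n * np.sin(delta), np.cos(delta)]])
              M = M @ layer
          B, C = M @ np.array([1.0, n_substrate], dtype=complex)
          return (n_incident * B - C) / (n_incident * B + C)

      # Illustrative stack: 0.5 m of dry snow (n ~ 1.3) over 0.02 m of ice (n ~ 1.78) on soil.
      stack = [(1.3, 0.5), (1.78, 0.02)]
      for f in (1e8, 1e9, 1e10):
          r = reflection_coefficient(f, stack, n_substrate=2.2)
          print(f"{f:.0e} Hz: |r| = {abs(r):.3f}")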

  7. Effect of the initial domain on the dispersion dynamics of a diffusing substance

    NASA Astrophysics Data System (ADS)

    Bestuzheva, A. N.; Smirnov, A. L.

    2018-05-01

    The formulation and analysis of ecological problems involve mathematical modeling, in which assumptions concerning the nature of the processes are introduced. These assumptions must be justified. In the present paper the effect of the form of the initial domain occupied by a diffusing substance on the process of diffusion is studied. It is shown that the form of the initial domain plays an unimportant role and that the domain may be modeled as a semi-sphere, for which the problem has an analytical solution. That solution may serve as the zeroth approximation in modeling an actual ecological problem that takes into account the relief of the bottom and the bottom currents.

  8. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
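
    A minimal Monte Carlo sketch of the underlying probability computation (not the report's closed-form solution): the combined position uncertainty is treated as a zero-mean Gaussian about the nominal miss vector in the encounter plane, and a collision occurs when the realized separation is less than the combined object radius. All numbers are illustrative.

      import numpy as np

      def collision_probability_mc(miss_vector, cov_combined, hard_body_radius,
                                   n_samples=1_000_000, seed=0):
          """Monte Carlo estimate of collision probability in a 2-D encounter plane.
          miss_vector: nominal separation at closest approach (m);
          cov_combined: combined position covariance of the two objects (m^2);
          hard_body_radius: sum of the two object radii (m)."""
          rng = np.random.default_rng(seed)
          deviations = rng.multivariate_normal(np.zeros(2), cov_combined, size=n_samples)
          separations = np.linalg.norm(np.asarray(miss_vector) + deviations, axis=1)
          return np.mean(separations < hard_body_radius)

      # Illustrative numbers: 300 m nominal miss, 100-200 m position sigmas, 10 m radius.
      p = collision_probability_mc(miss_vector=[300.0, 0.0],
                                   cov_combined=[[100.0**2, 0.0], [0.0, 200.0**2]],
                                   hard_body_radius=10.0)
      print(f"Estimated collision probability: {p:.2e}")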

  9. An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs.

    PubMed

    Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M

    2015-03-01

    Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. (c) 2015 APA, all rights reserved.
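
    A minimal sketch of the GAM idea for a single-case series, in Python with the third-party pygam package (assumed installed); the data are synthetic and the model specification is illustrative only.

      import numpy as np
      from pygam import LinearGAM, s, l  # third-party package; assumed available

      # Synthetic single-case series: session number, phase (0 = baseline,
      # 1 = treatment), and an outcome with a nonlinear trend plus a treatment shift.
      rng = np.random.default_rng(1)
      session = np.arange(20)
      phase = (session >= 10).astype(float)
      outcome = 2.0 + 0.3 * np.sqrt(session + 1) + 1.5 * phase + rng.normal(0, 0.3, 20)

      X = np.column_stack([session, phase])
      # s(0): let the data choose the functional form of the trend over sessions;
      # l(1): the phase term enters linearly and carries the treatment effect.
      gam = LinearGAM(s(0) + l(1)).fit(X, outcome)
      gam.summary()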

  10. A closed-form solution for steady-state coupled phloem/xylem flow using the Lambert-W function.

    PubMed

    Hall, A J; Minchin, P E H

    2013-12-01

    A closed-form solution for steady-state coupled phloem/xylem flow is presented. This incorporates the basic Münch flow model of phloem transport, the cohesion model of xylem flow, and local variation in the xylem water potential and lateral water flow along the transport pathway. Use of the Lambert-W function allows this solution to be obtained under much more general and realistic conditions than has previously been possible. Variation in phloem resistance (i.e. viscosity) with solute concentration, and deviations from the Van't Hoff expression for osmotic potential are included. It is shown that the model predictions match those of the equilibrium solution of a numerical time-dependent model based upon the same mechanistic assumptions. The effect of xylem flow upon phloem flow can readily be calculated, which has not been possible in any previous analytical model. It is also shown how this new analytical solution can handle multiple sources and sinks within a complex architecture, and can describe competition between sinks. The model provides new insights into Münch flow by explicitly including interactions with xylem flow and water potential in the closed-form solution, and is expected to be useful as a component part of larger numerical models of entire plants. © 2013 John Wiley & Sons Ltd.
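
    For readers unfamiliar with the Lambert-W function, the short sketch below (Python/SciPy) shows the kind of closed-form inversion it enables: any relation that can be rearranged into x*exp(x) = c is solved by x = W(c). The constant is illustrative, not a quantity from the coupled phloem/xylem model.

      import numpy as np
      from scipy.special import lambertw

      # Relations of the form  x * exp(x) = c  have the closed-form solution x = W(c).
      c = 3.7
      x = lambertw(c).real            # principal branch W0 (real for c >= -1/e)
      print(x, x * np.exp(x))         # the second value recovers c, checking the inverse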

  11. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential thresholds where shifts in ecosystem function may be expected. Finally, we discuss the mechanistic insight that can be applied in process-based models and datasets that can evaluate models across spatial and temporal scales.

  12. The Emperor's sham - wrong assumption that sham needling is sham.

    PubMed

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  13. "On Cloud Nine" and "On All Fours": Which Is More Transparent? Elements in EFL Learners' Transparency Assumptions

    ERIC Educational Resources Information Center

    Lin, Crystal Jia-yi

    2015-01-01

    Idiom transparency refers to how speakers think the meaning of the individual words contributes to the figurative meaning of an idiom as a whole (Gibbs, Nayak, & Cutting, 1989). However, it is not clear how speakers or language learners form their assumptions about an idiom's transparency level. This study set out to discover whether there are…

  14. Simulation of Wave and Current Processes Using Novel, Phase Resolving Models

    DTIC Science & Technology

    2013-09-30

    fundamental technical approach is to represent nearshore water wave systems by retaining Boussinesq scaling assumptions, but without any assumption of... Boussinesq approach that allows for much more freedom in determining the system properties. The resulting systems can have two forms: a classic...of a pressure-Poisson approach to Boussinesq systems. The wave generation-absorption system has now been shown to provide highly accurate results

  15. Developing Interpretive Turbulence Models from a Database with Applications to Wind Farms and Shipboard Operations

    NASA Astrophysics Data System (ADS)

    Schau, Kyle A.

    This thesis presents a complete method of modeling the autospectra of turbulence in closed form via an expansion series using the von Karman model as a basis function. It is capable of modeling turbulence in all three directions of fluid flow: longitudinal, lateral, and vertical, separately, thus eliminating the assumption of homogeneous, isotropic flow. A thorough investigation into the expansion series is presented, with the strengths and weaknesses highlighted. Furthermore, numerical aspects and theoretical derivations are provided. This method is then tested against three highly complex flow fields: wake turbulence inside wind farms, helicopter downwash, and helicopter downwash coupled with turbulence shed from a ship superstructure. These applications demonstrate that this method is remarkably robust, that the developed autospectral models are virtually tailored to the design of white noise driven shaping filters, and that these models in closed form facilitate a greater understanding of complex flow fields in wind engineering.
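
    The basis function referred to above is the von Karman autospectrum; one common normalization of its longitudinal form is sketched below in Python (the constants differ slightly between references, so treat the exact coefficients as an assumption). An expansion series of the kind developed in the thesis would superpose several such terms with different scales and weights.

      import numpy as np

      def von_karman_longitudinal(f, sigma_u, L_u, U):
          """Von Karman longitudinal autospectrum S_u(f) in one common normalization.
          sigma_u: turbulence standard deviation, L_u: integral length scale,
          U: mean flow speed. Coefficients are illustrative."""
          n = f * L_u / U
          return (4.0 * sigma_u**2 * L_u / U) / (1.0 + 70.8 * n**2) ** (5.0 / 6.0)

      freqs = np.logspace(-2, 1, 4)   # Hz
      print(von_karman_longitudinal(freqs, sigma_u=1.2, L_u=100.0, U=8.0))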

  16. On the equilibrium structures of self-gravitating masses of gas containing axisymmetric magnetic fields

    NASA Technical Reports Server (NTRS)

    Lerche, I.; Low, B. C.

    1980-01-01

    The general equations describing the equilibrium shapes of self-gravitating gas clouds containing axisymmetric magnetic fields are presented. The general equations admit of a large class of solutions. It is shown that if one additional (ad hoc) assumption is made that the mass be spherically symmetrically distributed, then the gas pressure and the boundary conditions are sufficiently constraining that the general topological structure of the solution is effectively determined. The further assumption of isothermal conditions for this case demands that all solutions possess force-free axisymmetric magnetic fields. It is also shown how the construction of aspherical (but axisymmetric) configurations can be achieved in some special cases, and it is demonstrated that the detailed form of the possible equilibrium shapes depends upon the arbitrary choice of the functional form of the variation of the gas pressure along the field lines.

  17. An Extension of the Chi-Square Procedure for Non-NORMAL Statistics, with Application to Solar Neutrino Data

    NASA Astrophysics Data System (ADS)

    Sturrock, P. A.

    2008-01-01

    Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
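
    The standard (normal-theory) version of the test discussed above can be written in a few lines of Python; it is this baseline version, not the article's extension to arbitrary probability distribution functions, that is sketched below, and the measurements are invented.

      import numpy as np
      from scipy.stats import chi2

      def chi_square_constancy(values, sigmas):
          """Standard chi-square test of whether measurements are consistent with a
          constant value, assuming normally distributed errors (the assumption that
          the article's extension relaxes). Returns the statistic, dof, and p-value."""
          values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
          w = 1.0 / sigmas**2
          mean = np.sum(w * values) / np.sum(w)          # weighted mean under H0
          stat = np.sum(((values - mean) / sigmas) ** 2)
          dof = len(values) - 1
          return stat, dof, chi2.sf(stat, dof)

      # Illustrative flux measurements (arbitrary units), not GALLEX-GNO data.
      stat, dof, p = chi_square_constancy([70, 82, 65, 90, 74], [8, 9, 7, 10, 8])
      print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p:.3f}")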

  18. Testing the Self-Consistency of the Excursion Set Approach to Predicting the Dark Matter Halo Mass Function

    NASA Astrophysics Data System (ADS)

    Achitouv, I.; Rasera, Y.; Sheth, R. K.; Corasaniti, P. S.

    2013-12-01

    The excursion set approach provides a framework for predicting how the abundance of dark matter halos depends on the initial conditions. A key ingredient of this formalism is the specification of a critical overdensity threshold (barrier) which protohalos must exceed if they are to form virialized halos at a later time. However, to make its predictions, the excursion set approach explicitly averages over all positions in the initial field, rather than the special ones around which halos form, so it is not clear that the barrier has physical motivation or meaning. In this Letter we show that once the statistical assumptions which underlie the excursion set approach are considered a drifting diffusing barrier model does provide a good self-consistent description both of halo abundance as well as of the initial overdensities of the protohalo patches.

  19. Clock-Work Trade-Off Relation for Coherence in Quantum Thermodynamics

    NASA Astrophysics Data System (ADS)

    Kwon, Hyukjoon; Jeong, Hyunseok; Jennings, David; Yadin, Benjamin; Kim, M. S.

    2018-04-01

    In thermodynamics, quantum coherences—superpositions between energy eigenstates—behave in distinctly nonclassical ways. Here we describe how thermodynamic coherence splits into two kinds—"internal" coherence that admits an energetic value in terms of thermodynamic work, and "external" coherence that does not have energetic value, but instead corresponds to the functioning of the system as a quantum clock. For the latter form of coherence, we provide dynamical constraints that relate to quantum metrology and macroscopicity, while for the former, we show that quantum states exist that have finite internal coherence yet with zero deterministic work value. Finally, under minimal thermodynamic assumptions, we establish a clock-work trade-off relation between these two types of coherences. This can be viewed as a form of time-energy conjugate relation within quantum thermodynamics that bounds the total maximum of clock and work resources for a given system.

  20. Telepresence for space: The state of the concept

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Gillan, Douglas J.; Stuart, Mark A.

    1990-01-01

    The purpose here is to examine the concept of telepresence critically. To accomplish this goal, first, the assumptions that underlie telepresence and its applications are examined, and second, the issues raised by that examination are discussed. Also, these assumptions and issues are used as a means of shifting the focus in telepresence from development to user-based research. The most basic assumption of telepresence is that the information being provided to the human must be displayed in a natural fashion, i.e., the information should be displayed to the same human sensory modalities, and in the same fashion, as if the person were actually at the remote site. A further fundamental assumption for the functional use of telepresence is that a sense of being present in the work environment will produce superior performance. In other words, that sense of being there would allow the human operator of a distant machine to take greater advantage of his or her considerable perceptual, cognitive, and motor capabilities in the performance of a task than would more limited task-related feedback. Finally, a third fundamental assumption of functional telepresence is that the distant machine under the operator's control must substantially resemble a human in dexterity.

  1. Fostering deliberations about health innovation: what do we want to know from publics?

    PubMed

    Lehoux, Pascale; Daudelin, Genevieve; Demers-Payette, Olivier; Boivin, Antoine

    2009-06-01

    As more complex and uncertain forms of health innovation keep emerging, scholars are increasingly voicing arguments in favour of public involvement in health innovation policy. The current conceptualization of this involvement is, however, somewhat problematic as it tends to assume that scientific facts form a "hard," indisputable core around which "soft," relative values can be attached. This paper, by giving precedence to epistemological issues, explores what there is to know from public involvement. We argue that knowledge and normative assumptions are co-constitutive of each other and pivotal to the ways in which both experts and non-experts reason about health innovations. Because knowledge and normative assumptions are different but interrelated ways of reasoning, public involvement initiatives need to emphasise deliberative processes that maximise mutual learning within and across various groups of both experts and non-experts (who, we argue, all belong to the "publics"). Hence, we believe that what researchers might wish to know from publics is how their reasoning is anchored in normative assumptions (what makes a given innovation desirable?) and in knowledge about the plausibility of their effects (are they likely to be realised?). Accordingly, one sensible goal of greater public involvement in health innovation policy would be to refine normative assumptions and make their articulation with scientific observations explicit and openly contestable. The paper concludes that we must differentiate between normative assumptions and knowledge, rather than set up a dichotomy between them or confound them.

  2. Bell violation using entangled photons without the fair-sampling assumption.

    PubMed

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  3. Solid and liquid heat capacities of n-alkyl para-aminobenzoates near the melting point.

    PubMed

    Neau, S H; Flynn, G L

    1990-11-01

    The expression that relates the ideal mole fraction solubility of a crystalline compound to physicochemical properties of the compound includes a term involving the difference in the heat capacities of the solid and liquid forms of the solute, delta Cp. There are two alternate conventions which are employed to eliminate this term. The first assumes that the term involving delta Cp, or delta Cp itself, is zero. The alternate assumption assigns the value of the entropy of fusion to the differential heat capacity. The relative validity of these two assumptions was evaluated using the straight-chain alkyl para-aminobenzoates as test compounds. The heat capacities of the solid and liquid forms of each of the para-aminobenzoates, near the respective melting point, were determined by differential scanning calorimetry. The data lead one to conclude that the differential heat capacity is not usually negligible and is better approximated by the entropy of fusion.

  4. Investigating the mixture and subdivision of perceptual and conceptual processing in Japanese memory tests.

    PubMed

Cabeza, R

    1995-03-01

    The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations; the conceptual effect appears tied to the semantic nature of kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.

  5. Inter-species activity correlations reveal functional correspondences between monkey and human brain areas

    PubMed Central

    Mantini, Dante; Hasson, Uri; Betti, Viviana; Perrucci, Mauro G.; Romani, Gian Luca; Corbetta, Maurizio; Orban, Guy A.; Vanduffel, Wim

    2012-01-01

    Evolution-driven functional changes in the primate brain are typically assessed by aligning monkey and human activation maps using cortical surface expansion models. These models use putative homologous areas as registration landmarks, assuming they are functionally correspondent. In cases where functional changes have occurred in an area, this assumption prohibits to reveal whether other areas may have assumed lost functions. Here we describe a method to examine functional correspondences across species. Without making spatial assumptions, we assess similarities in sensory-driven functional magnetic resonance imaging responses between monkey (Macaca mulatta) and human brain areas by means of temporal correlation. Using natural vision data, we reveal regions for which functional processing has shifted to topologically divergent locations during evolution. We conclude that substantial evolution-driven functional reorganizations have occurred, not always consistent with cortical expansion processes. This novel framework for evaluating changes in functional architecture is crucial to building more accurate evolutionary models. PMID:22306809

  6. Reinforcing loose foundation stones in trait-based plant ecology.

    PubMed

    Shipley, Bill; De Bello, Francesco; Cornelissen, J Hans C; Laliberté, Etienne; Laughlin, Daniel C; Reich, Peter B

    2016-04-01

    The promise of "trait-based" plant ecology is one of generalized prediction across organizational and spatial scales, independent of taxonomy. This promise is a major reason for the increased popularity of this approach. Here, we argue that some important foundational assumptions of trait-based ecology have not received sufficient empirical evaluation. We identify three such assumptions and, where possible, suggest methods of improvement: (i) traits are functional to the degree that they determine individual fitness, (ii) intraspecific variation in functional traits can be largely ignored, and (iii) functional traits show general predictive relationships to measurable environmental gradients.

  7. Lesbian health and the assumption of heterosexuality: an organizational perspective.

    PubMed

    Daley, Andrea

    2003-01-01

    This study used a qualitative research design to explore hospital policies and practices and the assumption of female heterosexuality. The assumption of heterosexuality is a product of discursive practices that normalize heterosexuality and individualize lesbian sexual identities. Literature indicates that the assumption of female heterosexuality is implicated in both the invisibility and marked visibility of lesbians as service users. This research adds to existing literature by shifting the focus of study from individual to organizational practices and, in so doing, seeks to uncover hidden truths, explore the functional power of language, and allow for the discovery of what we know and--equally as important--how we know.

  8. Using StorAge Selection Functions to Improve Simulation of Groundwater Nitrate Lag Times in the SWAT Modeling Framework.

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Fuka, D.; Cho, C.; Ball, W. P.; Easton, Z. M.; Harman, C. J.

    2017-12-01

    Intensive agriculture and atmospheric deposition have dramatically increased the input of reactive nitrogen into many watersheds worldwide. Reactive nitrogen can leach as nitrate into groundwater, which is stored and eventually released over years to decades into surface waters, potentially degrading water quality. To simulate the fate and transport of groundwater nitrate, many researchers and practitioners use the Soil and Water Assessment Tool (SWAT) or an enhanced version of SWAT that accounts for topographically-driven variable source areas (TopoSWAT). Both SWAT and TopoSWAT effectively assume that nitrate in the groundwater reservoir is well-mixed, which is known to be a poor assumption at many sites. In this study, we describe modifications to TopoSWAT that (1) relax the assumption of groundwater well-mixedness, (2) more flexibly parameterize groundwater transport as a time-varying distribution of travel times using the recently developed theory of rank StorAge Selection (rSAS) functions, and (3) allow for groundwater age to be represented by position on the hillslope or hydrological distance from the stream. The approach conceptualizes the groundwater aquifer as a population of water parcels entering as recharge with a particular nitrate concentration, aging as they move through storage, and eventually exiting as baseflow. The rSAS function selects the distribution of parcel ages that exit as baseflow based on a parameterized probability distribution; this distribution can be adjusted to preferentially select different distributions of young and old parcels in storage so as to reproduce (in principle) any form of transport. The modified TopoSWAT model (TopoSWAT+rSAS) is tested at a small agricultural catchment in the Eastern Shore, MD with an extensive hydrologic and hydrochemical data record for calibration and evaluation. The results examine (1) the sensitivity of TopoSWAT+rSAS modeling of nitrate transport to assumptions about the distribution of travel times of the groundwater aquifer, (2) which travel times are most likely at our study site based on available data, and (3) how TopoSWAT+rSAS performs and can be applied to other catchments.

  9. Orbital migration and the period distribution of exoplanets

    NASA Astrophysics Data System (ADS)

    Del Popolo, A.; Ercan, N.; Yeşilyurt, I. S.

    2005-06-01

    We use the model for the migration of planets introduced in Del Popolo et al. (2003, MNRAS, 339, 556) to calculate the observed mass and semimajor axis distribution of extra-solar planets. The assumption that the surface density in planetesimals is proportional to that of gas is relaxed, and in order to describe disc evolution we use a method which, using a series of simplifying assumptions, is able to simultaneously follow the evolution of gas and solid particles for up to 10^7 yr. The distribution of planetesimals obtained after 10^7 yr is used to study the migration rate of a giant planet through the model described in the present paper. The disk and migration models are used to calculate the distribution of planets as a function of mass and semimajor axis. The results show that the model can give a reasonable prediction of planets' semi-major axes and mass distribution. In particular there is a pile-up of planets at a ≃ 0.05 AU, a minimum near 0.3 AU, indicating a paucity of planets at that distance, and a rise for semi-major axes larger than 0.3 AU, out to 3 AU. The semi-major axis distribution shows that the more massive planets (typically, masses larger than 4 M_J) form preferentially in the outer regions and do not migrate much. Intermediate-mass objects migrate more easily whatever the distance at which they form, while the lighter planets (masses from sub-Saturnian to Jovian) migrate easily.

  10. Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions

    DTIC Science & Technology

    2013-01-01

    assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the... T(·)-time computations, where T(·) is some “nice” (slightly) super-polynomial function (e.g., T(n) = n^(log log log n)). We refer to such proof... put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need

  11. Israeli culture and the emergence of community mental health practices: the case of the West Jerusalem Mental Health Center.

    PubMed

    Reinharz, S; Mester, R

    1978-01-01

    The action assumptions which characterize and differentiate cultures affect the creation and functioning of their institutions. Using this analytic framework, the development of a community mental health center in Israel reflects a culture which contains both pioneering and bureaucratic action assumptions. The effects of these assumptions on staff interventions in community problems are traced. Finally, various dimensions of the emerging definition of community mental health practice in Israel are discussed and their problematic features identified.

  12. Near-wall modeling of compressible turbulent flow

    NASA Technical Reports Server (NTRS)

    So, Ronald M. C.

    1991-01-01

    A near-wall two-equation model for compressible flows is proposed. The model is formulated by relaxing the assumption of dynamic field similarity between compressible and incompressible flows. A postulate is made to justify the extension of incompressible models to account for compressibility effects. This requires formulating the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of compressibility indicators, and a dilatational part, which is directly affected by these changes. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate.

  13. Concepts for design of an energy management system incorporating dispersed storage and generation

    NASA Technical Reports Server (NTRS)

    Kirkham, H.; Koerner, T.; Nightingale, D.

    1981-01-01

    New forms of generation based on renewable resources must be managed as part of existing power systems in order to be utilized with maximum effectiveness. Many of these generators are by their very nature dispersed or small, so that they will be connected to the distribution part of the power system. This situation poses new questions of control and protection, and the intermittent nature of some of the energy sources poses problems of scheduling and dispatch. Under the assumption that the general objectives of energy management will remain unchanged, the impact of dispersed storage and generation on some of the specific functions of power system control and its hardware are discussed.

  14. A general method for decomposing the causes of socioeconomic inequality in health.

    PubMed

    Heckley, Gawain; Gerdtham, Ulf-G; Kjellsson, Gustav

    2016-07-01

    We introduce a general decomposition method applicable to all forms of bivariate rank dependent indices of socioeconomic inequality in health, including the concentration index. The technique is based on recentered influence function regression and requires only the application of OLS to a transformed variable with similar interpretation. Our method requires few identifying assumptions to yield valid estimates in most common empirical applications, unlike current methods favoured in the literature. Using the Swedish Twin Registry and a within twin pair fixed effects identification strategy, our new method finds no evidence of a causal effect of education on income-related health inequality. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Separating macroecological pattern and process: comparing ecological, economic, and geological systems.

    PubMed

    Blonder, Benjamin; Sloat, Lindsey; Enquist, Brian J; McGill, Brian

    2014-01-01

    Theories of biodiversity rest on several macroecological patterns describing the relationship between species abundance and diversity. A central problem is that all theories make similar predictions for these patterns despite disparate assumptions. A troubling implication is that these patterns may not reflect anything unique about organizational principles of biology or the functioning of ecological systems. To test this, we analyze five datasets from ecological, economic, and geological systems that describe the distribution of objects across categories in the United States. At the level of functional form ('first-order effects'), these patterns are not unique to ecological systems, indicating they may reveal little about biological process. However, we show that mechanism can be better revealed in the scale-dependency of first-order patterns ('second-order effects'). These results provide a roadmap for biodiversity theory to move beyond traditional patterns, and also suggest ways in which macroecological theory can constrain the dynamics of economic systems.

  16. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with a power law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was empirically proposed for the first time to adjust for high-frequency stock traded volume distributions in financial markets and verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1 and 2 minute stock traded volume sequences and probability density functions are numerically reproduced.
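
    The dynamical assumption of local fluctuations in the mean can be mimicked numerically. The sketch below (with illustrative parameters, not the NASDAQ calibration) draws window-level "volumes" from a Gamma distribution whose scale itself fluctuates between windows; the resulting marginal develops a power-law rather than exponential tail, in the spirit of the generalised Gamma (F-) distribution discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: alpha is the Gamma shape of the "local" volume,
# beta_tail controls how strongly the local mean fluctuates between windows.
alpha, beta_tail, theta0 = 2.0, 3.0, 1.0
n_windows, per_window = 2000, 50

samples = []
for _ in range(n_windows):
    theta = theta0 / rng.gamma(beta_tail, 1.0)     # fluctuating local scale
    samples.append(rng.gamma(alpha, theta, size=per_window))
volumes = np.concatenate(samples)

# Compare the tail of the mixture with a plain Gamma of the same mean:
# the mixture's upper quantiles are much heavier (power-law-like decay).
plain = rng.gamma(alpha, volumes.mean() / alpha, size=volumes.size)
for q in (0.99, 0.999):
    print(f"quantile {q}:  mixture = {np.quantile(volumes, q):8.2f}"
          f"   plain Gamma = {np.quantile(plain, q):8.2f}")
```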

  17. Multilayered models for electromagnetic reflection amplitudes

    NASA Technical Reports Server (NTRS)

    Linlor, W. I.

    1976-01-01

    The remote sensing of snowpack characteristics with surface installations or with an airborne system could have important applications in water resource management and flood prediction. To derive some insight into such applications, the electromagnetic response of multilayer snow models is analyzed. Normally incident plane waves are assumed at frequencies ranging from 10^6 to 10^10 Hz, and amplitude reflection coefficients are calculated for models having various snow-layer combinations, including ice sheets. Layers are defined by a thickness, permittivity, and conductivity; the electrical parameters are constant or prescribed functions of frequency. To illustrate the effect of various layering combinations, results are given in the form of curves of amplitude reflection coefficient versus frequency for a variety of models. Under simplifying assumptions, the snow thickness and effective dielectric constant can be estimated from the reflection coefficient variations as a function of frequency.
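
    At normal incidence, the amplitude reflection coefficient of such a layered model can be computed with the standard characteristic-matrix (transfer-matrix) method. The sketch below uses illustrative layer parameters (the paper's snow and ice values are not reproduced here), representing each layer through a complex refractive index that includes conductive loss.

```python
import numpy as np

C0 = 2.998e8       # speed of light, m/s
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def reflection_amplitude(freq, layers, n_halfspace=2.5):
    """Normal-incidence amplitude reflection coefficient of a layered
    medium over a half-space, via the characteristic-matrix method.

    layers: list of (thickness_m, relative_permittivity, conductivity_S_per_m);
    n_halfspace is the (real) refractive index of the underlying half-space.
    All values here are hypothetical, for illustration only."""
    omega = 2.0 * np.pi * freq
    M = np.eye(2, dtype=complex)
    for d, eps_r, sigma in layers:
        n = np.sqrt(eps_r - 1j * sigma / (omega * EPS0))  # complex index
        delta = 2.0 * np.pi * n * d * freq / C0           # phase thickness
        y = n                                             # normalized admittance
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / y],
                          [1j * y * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_halfspace], dtype=complex)
    return abs((B - C) / (B + C))      # incident medium is air (admittance 1)

# Toy model: 0.5 m of dry snow over a 2 cm ice layer on soil.
snowpack = [(0.5, 1.8, 1e-5), (0.02, 3.2, 1e-4)]
for f in np.logspace(6, 10, 5):
    print(f"{f:10.3e} Hz   |r| = {reflection_amplitude(f, snowpack):.3f}")
```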

  18. On the X-ray spectrum of the volume emissivity arising from Abell clusters

    NASA Technical Reports Server (NTRS)

    Stottlemyer, A. R.; Boldt, E. A.

    1984-01-01

    HEAO 1 A-2 X-ray spectra (2-15 keV) for an optically selected sample of Abell clusters of galaxies with z less than 0.1 have been analyzed to determine the energy dependence of the cosmological X-ray volume emissivity arising from such clusters. This spectrum is well fitted by an isothermal-bremsstrahlung model with kT = 7.4 ± 1.5 keV. This result is a test of the isothermal-volume-emissivity spectrum to be inferred from the conjecture that all contributing clusters may be characterized by kT = 7 keV, as assumed by McKee et al. (1980) in estimating the underlying luminosity function for the same sample. Although satisfied at the statistical level indicated, the analysis of a low-luminosity subsample suggests that this assumption of identical isothermal spectra would lead to a systematic error for a more statistically precise determination of the luminosity function's form.

  19. Dyadic Green's function of a cluster of spheres.

    PubMed

    Moneda, Angela P; Chrissoulidis, Dimitrios P

    2007-11-01

    The electric dyadic Green's function (dGf) of a cluster of spheres is obtained by application of the superposition principle, dyadic algebra, and the indirect mode-matching method. The analysis results in a set of linear equations for the unknown, vector, wave amplitudes of the dGf; that set is solved by truncation and matrix inversion. The theory is exact in the sense that no simplifying assumptions are made in the analytical steps leading to the dGf, and it is general in the sense that any number, position, size and electrical properties can be considered for the spheres that cluster together. The point source can be anywhere, even within one of the spheres. Energy conservation, reciprocity, and other tests prove that this solution is correct. Numerical results are presented for an electric Hertz dipole radiating in the presence of an array of rexolite spheres, which manifests lensing and beam-forming capabilities.

  20. Speed-of-light limitations in passive linear media

    NASA Astrophysics Data System (ADS)

    Welters, Aaron; Avniel, Yehuda; Johnson, Steven G.

    2014-08-01

    We prove that well-known speed-of-light restrictions on electromagnetic energy velocity can be extended to a new level of generality, encompassing even nonlocal chiral media in periodic geometries, while at the same time weakening the underlying assumptions to only passivity and linearity of the medium (either with a transparency window or with dissipation). As was also shown by other authors under more limiting assumptions, passivity alone is sufficient to guarantee causality and positivity of the energy density (with no thermodynamic assumptions). Our proof is general enough to include a very broad range of material properties, including anisotropy, bianisotropy (chirality), nonlocality, dispersion, periodicity, and even delta functions or similar generalized functions. We also show that the "dynamical energy density" used by some previous authors in dissipative media reduces to the standard Brillouin formula for dispersive energy density in a transparency window. The results in this paper are proved by exploiting deep results from linear-response theory, harmonic analysis, and functional analysis that had previously not been brought together in the context of electrodynamics.

  1. Type IIB flux vacua from G-theory II

    NASA Astrophysics Data System (ADS)

    Candelas, Philip; Constantin, Andrei; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2015-02-01

    We find analytic solutions of type IIB supergravity on geometries that locally take the form Mink × M₄ × ℂ with M₄ a generalised complex manifold. The solutions involve the metric, the dilaton, NSNS and RR flux potentials (oriented along M₄) parametrised by functions varying only over ℂ. Under this assumption, the supersymmetry equations are solved using the formalism of pure spinors in terms of a finite number of holomorphic functions. Alternatively, the solutions can be viewed as vacua of maximally supersymmetric supergravity in six dimensions with a set of scalar fields varying holomorphically over ℂ. For a class of solutions characterised by up to five holomorphic functions, we outline how the local solutions can be completed to four-dimensional flux vacua of type IIB theory. A detailed study of this global completion for solutions with two holomorphic functions has been carried out in the companion paper [1]. The fluxes of the global solutions are, as in F-theory, entirely codified in the geometry of an auxiliary K3 fibration over ℂℙ¹. The results provide a geometric construction of fluxes in F-theory.

  2. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of a certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  3. Intertemporal consumption with directly measured welfare functions and subjective expectations

    PubMed Central

    Kapteyn, Arie; Kleinjans, Kristin J.; van Soest, Arthur

    2010-01-01

    Euler equation estimation of intertemporal consumption models requires many, often unverifiable assumptions. These include assumptions on expectations and preferences. We aim at reducing some of these requirements by using direct subjective information on respondents’ preferences and expectations. The results suggest that individually measured welfare functions and expectations have predictive power for the variation in consumption across households. Furthermore, estimates of the intertemporal elasticity of substitution based on the estimated welfare functions are plausible and of a similar order of magnitude as other estimates found in the literature. The model favored by the data only requires cross-section data for estimation. PMID:20442798

  4. Testing the basic assumption of the hydrogeomorphic approach to assessing wetland functions.

    PubMed

    Hruby, T

    2001-05-01

    The hydrogeomorphic (HGM) approach for developing "rapid" wetland function assessment methods stipulates that the variables used are to be scaled based on data collected at sites judged to be the best at performing the wetland functions (reference standard sites). A critical step in the process is to choose the least altered wetlands in a hydrogeomorphic subclass to use as a reference standard against which other wetlands are compared. The basic assumption made in this approach is that wetlands judged to have had the least human impact have the highest level of sustainable performance for all functions. The levels at which functions are performed in these least altered wetlands are assumed to be "characteristic" for the subclass and "sustainable." Results from data collected in wetlands in the lowlands of western Washington suggest that the assumption may not be appropriate for this region. Teams developing methods for assessing wetland functions did not find that the least altered wetlands in a subclass had a range of performance levels that could be identified as "characteristic" or "sustainable." Forty-four wetlands in four hydrogeomorphic subclasses (two depressional subclasses and two riverine subclasses) were rated by teams of experts on the severity of their human alterations and on the level of performance of 15 wetland functions. An ordinal scale of 1-5 was used to quantify alterations in water regime, soils, vegetation, buffers, and contributing basin. Performance of functions was judged on an ordinal scale of 1-7. Relatively unaltered wetlands were judged to perform individual functions at levels that spanned all of the seven possible ratings in all four subclasses. The basic assumption of the HGM approach, that the least altered wetlands represent "characteristic" and "sustainable" levels of functioning that are different from those found in altered wetlands, was not confirmed. Although the intent of the HGM approach is to use level of functioning as a metric to assess the ecological integrity or "health" of the wetland ecosystem, the metric does not seem to work in western Washington for that purpose.

  5. Materials prediction via classification learning

    DOE PAGES

    Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; ...

    2015-08-25

    In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturally uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. In conclusion, our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle.

  6. Materials Prediction via Classification Learning

    PubMed Central

    Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; Lookman, Turab

    2015-01-01

    In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturally uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. Our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle. PMID:26304800
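
    As a schematic of classification learning on radii-derived features, the sketch below fits a shallow decision tree to synthetic data built from two hypothetical features (a sum and a difference of orbital radii) and prints the learned threshold rules. The feature values and labels are illustrative only, not the AB or RM data analysed in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)

# Hypothetical features for binary compounds: a sum and a difference of
# orbital radii (synthetic values, for illustration only).
n = 200
r_sum = rng.uniform(1.0, 4.0, size=n)
r_diff = rng.uniform(0.0, 2.0, size=n)
X = np.column_stack([r_sum, r_diff])
# Synthetic two-class label separated by a linear boundary in feature space.
y = (r_diff > 0.4 * r_sum - 0.5).astype(int)

# A shallow tree recovers interpretable threshold rules on the supplied
# feature combinations, mimicking how classification learning exposes the
# functional form that separates the classes.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["r_sum", "r_diff"]))
print("training accuracy:", round(clf.score(X, y), 3))
```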

  7. Scaling relations for a functionally two-dimensional plant: Chamaesyce setiloba (Euphorbiaceae).

    PubMed

    Koontz, Terri L; Petroff, Alexander; West, Geoffrey B; Brown, James H

    2009-05-01

    Many characteristics of plants and animals scale with body size as described by allometric equations of the form Y = βM(α), where Y is an attribute of the organism, β is a coefficient that varies with attribute, M is a measure of organism size, and α is another constant, the scaling exponent. In current models, the frequently observed quarter-power scaling exponents are hypothesized to be due to fractal-like structures. However, not all plants or animals conform to the assumptions of these models. Therefore, they might be expected to have different scaling relations. We studied one such plant, Chamaesyce setiloba, a prostrate annual herb that grows to functionally fill a two-dimensional space. Number of leaves scaled slightly less than isometrically with total aboveground plant mass (α ≈ 0.9) and substantially less than isometrically with dry total stem mass (α = 0.82), showing reduced allocation to leaf as opposed to stem tissue with increasing plant size. Additionally, scalings of the lengths and radii of parent and daughter branches differed from those predicted for three-dimensional trees and shrubs. Unlike plants with typical three-dimensional architectures, C. setiloba has distinctive scaling relations associated with its particular prostrate herbaceous growth form.
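
    Scaling exponents of the form Y = βM^α are typically estimated by ordinary least squares on log-transformed data, since log Y = log β + α log M. A minimal sketch with synthetic data (the exponent and coefficient values are illustrative, not the C. setiloba measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic plant data: leaf number scaling slightly less than isometrically.
mass = rng.lognormal(mean=0.0, sigma=1.0, size=200)            # plant mass
leaves = 50.0 * mass**0.9 * rng.lognormal(0.0, 0.1, size=200)  # Y = beta * M^alpha

# Fit Y = beta * M^alpha by OLS on the log-log transform.
alpha_hat, log_beta_hat = np.polyfit(np.log(mass), np.log(leaves), deg=1)
print(f"estimated exponent alpha   = {alpha_hat:.2f}")
print(f"estimated coefficient beta = {np.exp(log_beta_hat):.1f}")
```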

  8. Structural motifs of pre-nucleation clusters.

    PubMed

    Zhang, Y; Türkmen, I R; Wassermann, B; Erko, A; Rühl, E

    2013-10-07

    Structural motifs of pre-nucleation clusters prepared in single, optically levitated supersaturated aqueous aerosol microparticles containing CaBr2 as a model system are reported. Cluster formation is identified by means of X-ray absorption in the Br K-edge regime. The salt concentration beyond the saturation point is varied by controlling the humidity in the ambient atmosphere surrounding the 15-30 μm microdroplets. This leads to the formation of metastable supersaturated liquid particles. Distinct spectral shifts in near-edge spectra as a function of salt concentration are observed, in which the energy position of the Br K-edge is red-shifted by up to 7.1 ± 0.4 eV if the dilute solution is compared to the solid. The K-edge positions of supersaturated solutions are found between these limits. The changes in electronic structure are rationalized in terms of the formation of pre-nucleation clusters. This assumption is verified by spectral simulations using first-principle density functional theory and molecular dynamics calculations, in which structural motifs are considered, explaining the experimental results. These consist of solvated CaBr2 moieties, rather than building blocks forming calcium bromide hexahydrates, the crystal system that is formed by drying aqueous CaBr2 solutions.

  9. Nonlinear flight control design using backstepping methodology

    NASA Astrophysics Data System (ADS)

    Tran, Thanh Trung

    The subject of nonlinear flight control design using backstepping control methodology is investigated in the dissertation research presented here. Control design methods based on nonlinear models of the dynamic system provide higher utility and versatility because the design model more closely matches the physical system behavior. Obtaining requisite model fidelity is only half of the overall design process, however. Design of the nonlinear control loops can lessen the effects of nonlinearity, or even exploit nonlinearity, to achieve higher levels of closed-loop stability, performance, and robustness. The goal of the research is to improve control quality for a general class of strict-feedback dynamic systems and provide flight control architectures to augment the aircraft motion. The research is divided into two parts: theoretical control development for the strict-feedback form of nonlinear dynamic systems and application of the proposed theory for nonlinear flight dynamics. In the first part, the research is built on two components: transforming the nonlinear dynamic model to a canonical strict-feedback form and then applying backstepping control theory to the canonical model. The research considers a process to determine when this transformation is possible, and when it is possible, a systematic process to transfer the model is also considered when practical. When this is not the case, certain modeling assumptions are explored to facilitate the transformation. After achieving the canonical form, a systematic design procedure for formulating a backstepping control law is explored in the research. Starting with the simplest subsystem and ending with the full system, pseudo control concepts based on Lyapunov control functions are used to control each successive subsystem. Typically each pseudo control must be solved from a nonlinear algebraic equation. At the end of this process, the physical control input must be re-expressed in terms of the physical states by eliminating the pseudo control transformations. In the second part, the research focuses on nonlinear control design for flight dynamics of aircraft motion. Some assumptions on aerodynamics of the aircraft are addressed to transform full nonlinear flight dynamics into the canonical strict-feedback form. The assumptions are also analyzed, validated, and compared to show the advantages and disadvantages of the design models. With the achieved models, investigation focuses on formulating the backstepping control laws and provides an advanced control algorithm for nonlinear flight dynamics of the aircraft. Experimental and simulation studies are successfully implemented to validate the proposed control method. Advancement of nonlinear backstepping control theory and its application to nonlinear flight control are achieved in the dissertation research.
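
    The recursive structure described above can be illustrated on the simplest strict-feedback chain. The sketch below (a generic textbook-style example, not the flight-dynamics controller developed in the dissertation) designs a backstepping law for x1' = x2, x2' = u: a virtual control stabilizes the x1 subsystem, and a second Lyapunov term penalizes the deviation of x2 from that virtual control.

```python
def backstepping_control(x1, x2, k1=2.0, k2=2.0):
    """Backstepping law for the strict-feedback chain x1' = x2, x2' = u
    (gains k1, k2 are illustrative choices).

    Step 1: the virtual control alpha1 = -k1*x1 stabilizes the x1 subsystem.
    Step 2: penalizing the error z2 = x2 - alpha1 with a second Lyapunov
    term yields u = -x1 - k1*x2 - k2*z2, which makes
    V = x1**2/2 + z2**2/2 decrease along trajectories."""
    z2 = x2 + k1 * x1
    return -x1 - k1 * x2 - k2 * z2

# Forward-Euler simulation from a nonzero initial condition.
x1, x2, dt = 1.0, -0.5, 1e-3
for _ in range(int(5.0 / dt)):
    u = backstepping_control(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
print(f"state after 5 s: x1 = {x1:.4f}, x2 = {x2:.4f}")   # both close to zero
```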

  10. Query by forms: User-oriented relational database retrieving system and its application in analysis of experiment data

    NASA Astrophysics Data System (ADS)

    Skotniczny, Zbigniew

    1989-12-01

    The Query by Forms (QbF) system is a user-oriented interactive tool for querying large relational databases with minimal query-definition cost. The system was developed under the assumption that the user's time and effort in defining the needed queries is the most severe bottleneck. The system may be applied to any Rdb/VMS database system and is recommended for specific information systems of any project where end-user queries cannot be foreseen. The tool is dedicated to specialists in an application domain who have to analyze data maintained in a database from any needed point of view and who do not need to know commercial database languages. The paper presents the system as a compromise between functionality and usability. User-system communication via a menu-driven "tree-like" structure of screen forms, which produces a query definition and its execution, is discussed in detail. Output of query results (printed reports and graphics) is also discussed. Finally, the paper shows one application of QbF to the HERA project.

  11. Some Investigations Relating to the Elastostatics of a Tapered Tube

    DTIC Science & Technology

    1978-03-01

    regularity of the solution on the Z axis. Indeed the assumption of such regularity is stated explicitly by Heins (p. 789) and the problems solved (e.g. a... assumptions, becomes ... where the integrand is evaluated at (+i, 0). This is a form of the integral representation of the... solution. Now let us look at the assumptions on Q. First of all, in order to be sure that our operations are legi...

  12. Heteroscedasticity as a Basis of Direction Dependence in Reversible Linear Regression Models.

    PubMed

    Wiedermann, Wolfgang; Artner, Richard; von Eye, Alexander

    2017-01-01

    Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: Directional model misspecifications in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
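
    The phenomenon can be reproduced in a few lines. The sketch below (synthetic data, illustrative parameters) generates a skewed, nonnormal predictor with a normal disturbance, fits the regression in both directions, and applies a Breusch-Pagan test; the reversed model, which treats the nonnormal cause as the response, tends to show heteroscedastic residuals.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(7)

# Hypothetical data-generating process with true direction x -> y.
n = 5000
x = rng.chisquare(df=1, size=n)             # skewed (nonnormal) cause
y = 0.5 * x + rng.normal(0.0, 1.0, size=n)  # normal disturbance

def bp_pvalue(response, predictor):
    """Breusch-Pagan p-value for the regression response ~ predictor."""
    X = sm.add_constant(predictor)
    fit = sm.OLS(response, X).fit()
    return het_breuschpagan(fit.resid, X)[1]

# The correctly specified direction tends to keep homoscedastic residuals,
# whereas the reversed model tends to violate the homoscedasticity assumption.
print(f"BP p-value, y ~ x (true direction):     {bp_pvalue(y, x):.3f}")
print(f"BP p-value, x ~ y (reversed direction): {bp_pvalue(x, y):.3g}")
```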

  13. Signal Detection with Criterion Noise: Applications to Recognition Memory

    ERIC Educational Resources Information Center

    Benjamin, Aaron S.; Diaz, Michael; Wee, Serena

    2009-01-01

    A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of…

  14. Why Are Experts Correlated? Decomposing Correlations between Judges

    ERIC Educational Resources Information Center

    Broomell, Stephen B.; Budescu, David V.

    2009-01-01

    We derive an analytic model of the inter-judge correlation as a function of five underlying parameters. Inter-cue correlation and the number of cues capture our assumptions about the environment, while differentiations between cues, the weights attached to the cues, and (un)reliability describe assumptions about the judges. We study the relative…

  15. The Effect of Missing Data Treatment on Mantel-Haenszel DIF Detection

    ERIC Educational Resources Information Center

    Emenogu, Barnabas C.; Falenchuk, Olesya; Childs, Ruth A.

    2010-01-01

    Most implementations of the Mantel-Haenszel differential item functioning procedure delete records with missing responses or replace missing responses with scores of 0. These treatments of missing data make strong assumptions about the causes of the missing data. Such assumptions may be particularly problematic when groups differ in their patterns…

  16. The natural selection of organizational and safety culture within a small to medium sized enterprise (SME).

    PubMed

    Brooks, Benjamin

    2008-01-01

    Small to Medium Sized Enterprises (SMEs) form the majority of Australian businesses. This study uses ethnographic research methods to describe the organizational culture of a small furniture-manufacturing business in southern Australia. Results show a range of cultural assumptions variously 'embedded' within the enterprise. In line with memetics - Richard Dawkins' cultural application of Charles Darwin's theory of Evolution by Natural Selection, the author suggests that these assumptions compete to be replicated and retained within the organization. The author suggests that dominant assumptions are naturally selected, and that the selection can be better understood by considering the cultural assumptions in reference to Darwin's original principles and Frederik Barth's anthropological framework of knowledge. The results are discussed with reference to safety systems, negative cultural elements called Cultural Safety Viruses, and how our understanding of this particular organizational culture might be used to build resistance to these viruses.

  17. The evolution of utility functions and psychological altruism.

    PubMed

    Clavien, Christine; Chapuisat, Michel

    2016-04-01

    Numerous studies show that humans tend to be more cooperative than expected given the assumption that they are rational maximizers of personal gain. As a result, theoreticians have proposed elaborated formal representations of human decision-making, in which utility functions including "altruistic" or "moral" preferences replace the purely self-oriented "Homo economicus" function. Here we review mathematical approaches that provide insights into the mathematical stability of alternative utility functions. Candidate utility functions may be evaluated with the help of game theory, classical modeling of social evolution that focuses on behavioral strategies, and modeling of social evolution that focuses directly on utility functions. We present the advantages of the latter form of investigation and discuss one surprisingly precise result: "Homo economicus" as well as "altruistic" utility functions are less stable than a function containing a preference for the common welfare that is only expressed in social contexts composed of individuals with similar preferences. We discuss the contribution of mathematical models to our understanding of human other-oriented behavior, with a focus on the classical debate over psychological altruism. We conclude that humans can be psychologically altruistic, but that psychological altruism evolved because it was generally expressed towards individuals that contributed to the actor's fitness, such as own children, romantic partners and long term reciprocators. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)” is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML. PMID:29562601

  19. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)" is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line of sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid the high dimension searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the mistake caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of searching. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition to overcome the limitation of MP-MUSIC in the sense of a time-sensitive application. An iterative algorithm and an approach of initial value setting are given to make the MP-ML time consumption acceptable. Numerical results validate the performances improvement of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performances of MP-MUSIC and MP-ML.

  20. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered as a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model at each sample point by using the observations within its neighbourhood. Results show that the empirical method underestimates chlorophyll content at high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is likely not valid.
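
    The multicollinearity point can be illustrated with a small partial least squares example. The sketch below uses synthetic data standing in for the correlated polynomial terms of a band-ratio algorithm (not the NOMAD data); PLS projects the correlated predictors onto a few orthogonal components before regression.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

# Synthetic stand-in for OC-style predictors: successive powers of a log
# band ratio, which are strongly correlated with one another.
n = 500
log_ratio = rng.normal(0.0, 0.3, size=n)
X = np.column_stack([log_ratio**p for p in range(1, 5)])    # multicollinear terms
log_chl = 0.3 - 2.5 * log_ratio + 1.0 * log_ratio**2 + rng.normal(0.0, 0.05, size=n)

# Fit PLS with a small number of orthogonal latent components.
pls = PLSRegression(n_components=2).fit(X, log_chl)
pred = pls.predict(X).ravel()
r2 = 1.0 - np.sum((log_chl - pred)**2) / np.sum((log_chl - log_chl.mean())**2)
print(f"R^2 of the 2-component PLS fit: {r2:.3f}")
```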

  1. 37 CFR 351.10 - Evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., magnetic impulse, mechanical or electronic recording, or other form of data compilation. “Photographs... plan, the principles and methods underlying the study, all relevant assumptions, all variables...

  2. 78 FR 58348 - Agency Information Collection Activities; Proposed Collection; Comments Requested: National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-23

    .... Fleming, Field Division Counsel, El Paso Intelligence Center, 11339 SSG Sims Blvd., El Paso, TX 79908... validity of the methodology and assumptions used; Enhance the quality, utility, and clarity of the... sponsoring the collection: Form number: EPIC Form 143. Component: El Paso Intelligence Center, Drug...

  3. 75 FR 19658 - Agency Information Collection Activities: Proposed Collection; Comments Requested: National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-15

    ... Division Counsel, El Paso Intelligence Center, 11339 SSG Sims Blvd., El Paso, TX 79908. Written comments... of the methodology and assumptions used; Enhance the quality, utility, and clarity of the information... collection: Form number: EPIC Form 143. Component: El Paso Intelligence Center, Drug Enforcement...

  4. Comparing the Incomparable: An Essay on the Importance of Big Assumptions and Scant Evidence.

    ERIC Educational Resources Information Center

    Wainer, Howard

    1999-01-01

    Discusses the comparison of groups of individuals who were administered different forms of a test. Focuses on the situation in which there is little overlap in content between the test forms. Reviews equating problems in national tests in Canada and Israel. (SLD)

  5. How General is General Strain Theory? Assessing Determinacy and Indeterminacy across Life Domains

    ERIC Educational Resources Information Center

    De Coster, Stacy; Kort-Butler, Lisa

    2006-01-01

    This article explores how assumptions of determinacy and indeterminacy apply to general strain theory. Theories assuming determinacy assert that motivational conditions determine specific forms of deviant adaptations, whereas those assuming indeterminacy propose that a given social circumstance can predispose a person toward many forms of…

  6. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  7. Assessing and Treating Stereotypical Behaviors in Classrooms Using a Functional Approach

    ERIC Educational Resources Information Center

    Bruhn, Allison L.; Balint-Langel, Kinga; Troughton, Leonard; Langan, Sean; Lodge, Kelsey; Kortemeyer, Sara

    2015-01-01

    For years, the assumption has been that stereotypical behaviors functioned only to provide sensory or automatic reinforcement. However, these behaviors also may serve social functions. Given the unsettled debate, functional behavior assessment and functional analysis can be used to identify the exact function of stereotypical behavior and design…

  8. Reactant conversion in homogeneous turbulence: Mathematical modeling, computational validations and practical applications

    NASA Technical Reports Server (NTRS)

    Madnia, C. K.; Frankel, S. H.; Givi, P.

    1992-01-01

    Closed form analytical expressions are obtained for predicting the limiting rate of reactant conversion in a binary reaction of the type F + rO → (1 + r) Product in unpremixed homogeneous turbulence. These relations are obtained by means of a single point Probability Density Function (PDF) method based on the Amplitude Mapping Closure. It is demonstrated that with this model, the maximum rate of the reactants' decay can be conveniently expressed in terms of definite integrals of the Parabolic Cylinder Functions. For the cases with complete initial segregation, it is shown that the results agree very closely with those predicted by employing a Beta density of the first kind for an appropriately defined Shvab-Zeldovich scalar variable. With this assumption, the final results can also be expressed in terms of closed form analytical expressions which are based on the Incomplete Beta Functions. With both models, the dependence of the results on the stoichiometric coefficient and the equivalence ratio can be expressed in an explicit manner. For a stoichiometric mixture, the analytical results simplify significantly. In the mapping closure, these results are expressed in terms of simple trigonometric functions. For the Beta density model, they are in the form of Gamma Functions. In all the cases considered, the results are shown to agree well with data generated by Direct Numerical Simulations (DNS). Due to the simplicity of these expressions and because of nice mathematical features of the Parabolic Cylinder and the Incomplete Beta Functions, these models are recommended for estimating the limiting rate of reactant conversion in homogeneous reacting flows. These results also provide useful insights into assessing the extent of validity of turbulence closures in the modeling of unpremixed reacting flows. Some discussions are provided on the extension of the model for treating more complicated reacting systems including realistic kinetics schemes and multi-scalar mixing with finite rate chemical reactions in more complex configurations.
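
    For the Beta-density model in the mixing-limited (fast-chemistry) regime, the mean product fraction indeed reduces to regularized incomplete Beta functions. The sketch below (with illustrative shape parameters, not the paper's exact expressions) evaluates the conversion implied by a presumed Beta PDF of the conserved Shvab-Zeldovich scalar.

```python
from scipy.special import betainc

def mean_product_fraction(a, b, z_st):
    """Mixing-limited mean product mass fraction for a one-step reaction,
    assuming a presumed Beta(a, b) PDF for the conserved scalar Z with
    stoichiometric value z_st.

    With complete reaction, Y_p(Z) = Z/z_st for Z < z_st and
    (1 - Z)/(1 - z_st) otherwise, so the expectation reduces to
    regularized incomplete Beta functions."""
    e_z_below = (a / (a + b)) * betainc(a + 1, b, z_st)             # E[Z; Z < z_st]
    e_1mz_above = (b / (a + b)) * (1.0 - betainc(a, b + 1, z_st))   # E[1-Z; Z > z_st]
    return e_z_below / z_st + e_1mz_above / (1.0 - z_st)

# Stoichiometric mixture (z_st = 0.5): conversion rises as mixing improves
# (a = b large concentrates the PDF near Z = 0.5; a = b small means nearly
# segregated reactants, so little product can form).
for a in (0.2, 1.0, 5.0):
    print(f"Beta({a}, {a}):  mean product fraction = "
          f"{mean_product_fraction(a, a, 0.5):.3f}")
```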

  9. Generalization of the Activated Complex Theory of Reaction Rates. I. Quantum Mechanical Treatment

    DOE R&D Accomplishments Database

    Marcus, R. A.

    1964-01-01

    In its usual form activated complex theory assumes a quasi-equilibrium between reactants and activated complex, a separable reaction coordinate, a Cartesian reaction coordinate, and an absence of interaction of rotation with internal motion in the complex. In the present paper a rate expression is derived without introducing the Cartesian assumption. The expression bears a formal resemblance to the usual one and reduces to it when the added assumptions of the latter are introduced.

  10. A model-independent comparison of the rates of uptake and short term retention of 47Ca and 85Sr by the skeleton.

    PubMed

    Reeve, J; Hesp, R

    1976-12-22

    1. A method has been devised for comparing the impulse response functions of the skeleton for two or more bone-seeking tracers, and for estimating the contribution made by measurement errors to the differences between any pair of impulse response functions. 2. Comparisons were made between the calculated impulse response functions for 47Ca and 85Sr obtained in simultaneous double tracer studies in sixteen subjects. Collectively the differences between the 47Ca and 85Sr functions could be accounted for entirely by measurement errors. 3. Because the calculation of an impulse response function requires fewer a priori assumptions than other forms of mathematical analysis, and automatically corrects for differences induced by recycling of tracer and non-identical rates of excretory plasma clearance of tracer, it is concluded that differences shown in previous in vivo studies between the fluxes of Ca and Sr into bone can be fully accounted for by undetermined oversimplifications in the various mathematical models used to analyse the results of those studies. 85Sr is therefore an adequate tracer for bone calcium in most in vivo studies.

  11. Cumulant-based expressions for the multibody terms for the correlation between local and electrostatic interactions in the united-residue force field

    NASA Astrophysics Data System (ADS)

    Liwo, Adam; Czaplewski, Cezary; Pillardy, Jarosław; Scheraga, Harold A.

    2001-08-01

    A general method to derive site-site or united-residue potentials is presented. The basic principle of the method is the separation of the degrees of freedom of a system into the primary and secondary ones. The primary degrees of freedom describe the basic features of the system, while the secondary ones are averaged over when calculating the potential of mean force, which is hereafter referred to as the restricted free energy (RFE) function. The RFE can be factored into one-, two-, and multibody terms, using the cluster-cumulant expansion of Kubo. These factors can be assigned the functional forms of the corresponding lowest-order nonzero generalized cumulants, which can, in most cases, be evaluated analytically, after making some simplifying assumptions. This procedure to derive coarse-grain force fields is very valuable when applied to multibody terms, whose functional forms are hard to deduce in another way (e.g., from structural databases). After the functional forms have been derived, they can be parametrized based on the RFE surfaces of model systems obtained from all-atom models or on the statistics derived from structural databases. The approach has been applied to our united-residue force field for proteins. Analytical expressions were derived for the multibody terms pertaining to the correlation between local and electrostatic interactions within the polypeptide backbone; these expressions correspond to up to sixth-order terms in the cumulant expansion of the RFE. These expressions were subsequently parametrized by fitting to the RFEs of selected peptide fragments, calculated with the empirical conformational energy program for peptides force field. The new multibody terms enable not only the heretofore predictable α-helical segments, but also regular β-sheets, to form as the lowest-energy structures, as assessed by test calculations on a model helical protein A, as well as a model 20-residue polypeptide (betanova); the latter was not possible without introducing these new terms.

  12. Gene function prediction with gene interaction networks: a context graph kernel approach.

    PubMed

    Li, Xin; Chen, Hsinchun; Li, Jiexun; Zhang, Zhu

    2010-01-01

    Predicting gene functions is a challenge for biologists in the postgenomic era. Interactions among genes and their products compose networks that can be used to infer gene functions. Most previous studies adopt a linkage assumption, i.e., they assume that gene interactions indicate functional similarities between connected genes. In this study, we propose to use a gene's context graph, i.e., the gene interaction network associated with the focal gene, to infer its functions. In a kernel-based machine-learning framework, we design a context graph kernel to capture the information in context graphs. Our experimental study on a testbed of p53-related genes demonstrates the advantage of using indirect gene interactions and shows the empirical superiority of the proposed approach over linkage-assumption-based methods, such as the algorithm to minimize inconsistent connected genes and diffusion kernels.

  13. Zwischen Gesetz und Fall. Mutmassungen uber Typologien als Padagogische Wissensform (Between General Law and the Individual Case. Conjectures Concerning Typologies as a Form of Pedagogical Knowledge).

    ERIC Educational Resources Information Center

    Herzog, Walter

    2003-01-01

    Considers the mediation between scientific knowledge and practical action as a crucial feature of professional teaching. Investigates the assumption that typologies represent a form of knowledge which can bridge the gap between theory and practice. Differentiates between two forms of typological thinking and discusses reservations concerning…

  14. Mediating objects: scientific and public functions of models in nineteenth-century biology.

    PubMed

    Ludwig, David

    2013-01-01

    The aim of this article is to examine the scientific and public functions of two- and three-dimensional models in the context of three episodes from nineteenth-century biology. I argue that these models incorporate both data and theory by presenting theoretical assumptions in the light of concrete data or organizing data through theoretical assumptions. Despite their diverse roles in scientific practice, they all can be characterized as mediators between data and theory. Furthermore, I argue that these different mediating functions often reflect their different audiences that included specialized scientists, students, and the general public. In this sense, models in nineteenth-century biology can be understood as mediators between theory, data, and their diverse audiences.

  15. An Election Algorithm for a Distributed Clock Synchronization Program

    DTIC Science & Technology

    1985-12-01

    distinguish a pausing process from one that has crashed. With an Archimedean timing system a process can use a timer to tell if some process on a... Machines have clocks with Archimedean time functions. This assumption allows the use of timers. Note that no unrealistic assumptions are

  16. Examination of the reliability of the crash modification factors using empirical Bayes method with resampling technique.

    PubMed

    Wang, Jung-Han; Abdel-Aty, Mohamed; Wang, Ling

    2017-07-01

    Many studies have used different methods, for example empirical Bayes before-after methods, to obtain accurate estimates of CMFs. All of them make different assumptions about the crash count that would have occurred had there been no treatment. Another major assumption is that multiple sites share the same true CMF. Under this assumption, the CMF at an individual intersection is randomly drawn from a normally distributed population of CMFs at all intersections. Since CMFs are non-zero values, the population of all CMFs might not follow a normal distribution, and even if it does, the true mean of the CMFs at some intersections may differ from that at others. Therefore, a bootstrap method based on before-after empirical Bayes theory was proposed to estimate CMFs without making distributional assumptions. This bootstrap procedure has the added benefit of producing a measure of CMF stability. Furthermore, based on the bootstrapped CMF, a new CMF precision rating method was proposed to evaluate the reliability of CMFs. This study chose 29 urban four-legged intersections as treated sites, whose control was changed from stop-controlled to signal-controlled, and 124 urban four-legged stop-controlled intersections as reference sites. First, different safety performance functions (SPFs) were applied to five crash categories, and each crash category was found to have a different optimal SPF form. Then, the CMFs of these five crash categories were estimated using the bootstrap empirical Bayes method. The bootstrapped results showed that signalization significantly decreased Angle+Left-Turn crashes, and that this CMF had the highest precision, while the CMF for Rear-End crashes was unreliable. For KABCO, KABC, and KAB crashes, the CMFs proved reliable for the majority of intersections, but the estimated effect of signalization may not be accurate at some sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
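
    The bootstrap idea can be sketched independently of the full empirical Bayes machinery: resample the treated sites with replacement, recompute the CMF for each replicate, and use the spread of the replicates as a measure of stability. The example below uses simulated site-level counts and a deliberately simplified CMF estimator (the ratio of total observed to total expected after-period crashes), so it is only a schematic of the procedure, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(11)

def simple_cmf(observed_after, expected_after):
    """Naive CMF estimate: total observed over total expected after-period
    crashes (the full EB weighting with SPFs and overdispersion is omitted)."""
    return observed_after.sum() / expected_after.sum()

# Hypothetical counts at 29 treated intersections; the true CMF is about 0.7.
n_sites = 29
expected = rng.gamma(shape=4.0, scale=2.0, size=n_sites)   # expected crashes
observed = rng.poisson(0.7 * expected)                     # observed crashes

# Bootstrap over sites: resample intersections with replacement and
# recompute the CMF, giving a distribution that reflects its stability.
replicates = []
for _ in range(2000):
    idx = rng.integers(0, n_sites, size=n_sites)
    replicates.append(simple_cmf(observed[idx], expected[idx]))
lo, hi = np.percentile(replicates, [2.5, 97.5])
print(f"CMF = {simple_cmf(observed, expected):.2f}, "
      f"95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```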

  17. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
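    The abstract does not give the functional form of the extreme-value-based diffusion coefficient, so the sketch below only illustrates the extreme value ingredient: fitting a generalized extreme value distribution to block maxima of synthetic, hypothetical plume penetration depths using scipy.stats.genextreme. All numbers and variable names are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for plume penetration depths sampled from a
# hydrodynamic simulation (units and values are illustrative only).
depths = rng.weibull(a=1.5, size=50_000) * 0.1

# Block maxima: deepest penetration event per window of samples.
block = 500
maxima = depths[: depths.size // block * block].reshape(-1, block).max(axis=1)

# Fit a generalized extreme value distribution to the block maxima.
shape, loc, scale = stats.genextreme.fit(maxima)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")

# Depth exceeded, on average, once per 1000 blocks under the fitted model.
print("1/1000 exceedance depth:", stats.genextreme.ppf(0.999, shape, loc, scale))
```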

  18. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  19. Travelling Fronts and Entire Solutions of the Fisher-KPP Equation in ℝ^N

    NASA Astrophysics Data System (ADS)

    Hamel, François; Nadirashvili, Nikolaï

    This paper is devoted to time-global solutions of the Fisher-KPP equation in ℝ^N, u_t = Δu + f(u), where f is a C² concave function on [0,1] such that f(0)=f(1)=0 and f>0 on (0,1). It is well known that this equation admits a finite-dimensional manifold of planar travelling-front solutions. By considering the mixing of any density of travelling fronts, we prove the existence of an infinite-dimensional manifold of solutions. In particular, there are infinite-dimensional manifolds of (nonplanar) travelling fronts and radial solutions. Furthermore, up to an additional assumption, a given solution u can be represented in terms of such a mixing of travelling fronts.

  20. The transition from animal spirits to animal electricity: a neuroscience paradigm shift.

    PubMed

    Clower, W T

    1998-12-01

    The Animal Spirits Paradigm had been in place for over a thousand years as a general way of looking at the nervous system, and was completely ingrained into the fabric of scientific thinking. However, the community of researchers in the 17th and 18th centuries abandoned their long-held assumptions, and started anew with the novel assertion that the currency of nervous function was, instead of Animal Spirits, a uniquely animal electricity. This conceptual rearrangement represented a scientific revolution in thinking, a change in absolute perspective that required the reinterpretation of old data within a completely novel framework. The manner in which this transition occurred followed the general form of scientific paradigm shifts as outlined by Thomas Kuhn (Kuhn, 1962).

  1. The Philosophical Basis of Bioethics.

    PubMed

    Horn, Peter

    2015-09-01

    In this article, I consider in what sense bioethics is philosophical. Philosophy includes both analysis and synthesis. Analysis focuses on central concepts in a domain, for example, informed consent, death, medical futility, and health. It is argued that analysis should avoid oversimplification. The synthesis or synoptic dimension prompts people to explain how their views have logical assumptions and implications. In addition to the conceptual elements are the evaluative and empirical dimensions. Among its functions, philosophy can be a form of prophylaxis--helping people avoid some commonly accepted questionable theories. Generally, recent philosophy has steered away from algorithms and deductivist approaches to ethical justification. In bioethics, philosophy works in partnership with a range of other disciplines, including pediatrics and neurology. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Accuracy of measurement of star images on a pixel array

    NASA Technical Reports Server (NTRS)

    King, I. R.

    1983-01-01

    Algorithms are developed for predicting the accuracy with which the brightness of a star can be determined from its image on a digital detector array, as a function of the brightness of the background. The assumption is made that a known profile is being fitted by least squares. The two profiles used correspond to ST images and to ground-based observations. The first result is an approximate rule of thumb for equivalent noise area. More rigorous results are then given in tabular form. The size of the pixels, relative to the image size, is taken into account. Astrometric accuracy is also discussed briefly; the error, relative to image size, is very similar to the photometric error relative to brightness.
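    As an illustration of the kind of fit described above, the sketch below performs a weighted linear least-squares fit of a known profile plus a flat sky background and reports the formal error on the fitted brightness. The Gaussian profile and all parameter values are assumptions for illustration; they are not the ST or ground-based profiles used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_profile(shape, center, fwhm):
    """Unit-volume Gaussian PSF sampled on a pixel grid (illustrative profile)."""
    sigma = fwhm / 2.355
    y, x = np.indices(shape)
    p = np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma**2))
    return p / p.sum()

# Simulate a star of known profile on a flat sky background (hypothetical values).
shape, center, fwhm = (33, 33), (16.0, 16.0), 4.0
flux_true, sky = 5_000.0, 50.0            # counts and counts/pixel
psf = gaussian_profile(shape, center, fwhm)
image = rng.poisson(flux_true * psf + sky).astype(float)

# Weighted linear least squares for (flux, sky), with weights ~ 1/variance.
var = np.clip(image, 1.0, None)           # Poisson variance estimate per pixel
A = np.stack([psf.ravel(), np.ones(psf.size)], axis=1)
w = 1.0 / var.ravel()
ATA = A.T @ (A * w[:, None])
ATb = A.T @ (w * image.ravel())
flux_fit, sky_fit = np.linalg.solve(ATA, ATb)

# Formal error of the fitted flux from the covariance of the LS solution.
flux_err = np.sqrt(np.linalg.inv(ATA)[0, 0])
print(f"flux = {flux_fit:.1f} +/- {flux_err:.1f} (true {flux_true})")
```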

  3. Testing the functional significance of microbial community composition.

    Treesearch

    Michael S. Strickland; Christian Lauber; Noah Fierer; Mark A. Bradford

    2009-01-01

    A critical assumption underlying terrestrial ecosystem models is that soil microbial communities, when placed in a common environment, will function in an identical manner regardless of the composition...

  4. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.

    PubMed

    Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Research into self-control based on the sequential-task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high-profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects; in the fifth study, the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  5. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm

    PubMed Central

    Arber, Madeleine M.; Ireland, Michael J.; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Research into self-control based on the sequential-task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high-profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects; in the fifth study, the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research. PMID:29018390

  6. Probabilistic Material Strength Degradation Model for Inconel 718 Components Subjected to High Temperature, Mechanical Fatigue, Creep and Thermal Fatigue Effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie Corinne Scheidt

    1994-01-01

    This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.

  7. Models in palaeontological functional analysis

    PubMed Central

    Anderson, Philip S. L.; Bright, Jen A.; Gill, Pamela G.; Palmer, Colin; Rayfield, Emily J.

    2012-01-01

    Models are a principal tool of modern science. By definition, and in practice, models are not literal representations of reality but provide simplifications or substitutes of the events, scenarios or behaviours that are being studied or predicted. All models make assumptions, and palaeontological models in particular require additional assumptions to study unobservable events in deep time. In the case of functional analysis, the degree of missing data associated with reconstructing musculoskeletal anatomy and neuronal control in extinct organisms has, in the eyes of some scientists, rendered detailed functional analysis of fossils intractable. Such a prognosis may indeed be realized if palaeontologists attempt to recreate elaborate biomechanical models based on missing data and loosely justified assumptions. Yet multiple enabling methodologies and techniques now exist: tools for bracketing boundaries of reality; more rigorous consideration of soft tissues and missing data and methods drawing on physical principles that all organisms must adhere to. As with many aspects of science, the utility of such biomechanical models depends on the questions they seek to address, and the accuracy and validity of the models themselves. PMID:21865242

  8. Approximate calculation of multispar cantilever and semicantilever wings with parallel ribs under direct and indirect loading

    NASA Technical Reports Server (NTRS)

    Sanger, Eugen

    1932-01-01

    A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.

  9. 5 CFR 842.702 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... for valuation of the System, based on dynamic assumptions. The present value factors are unisex... EMPLOYEES RETIREMENT SYSTEM-BASIC ANNUITY Alternative Forms of Annuities § 842.702 Definitions. In this...

  10. On the self-association potential of transmembrane tight junction proteins.

    PubMed

    Blasig, I E; Winkler, L; Lassowski, B; Mueller, S L; Zuleger, N; Krause, E; Krause, G; Gast, K; Kolbe, M; Piontek, J

    2006-02-01

    Tight junctions seal intercellular clefts via membrane-related strands, hence maintaining important organ functions. We investigated the self-association of strand-forming transmembrane tight junction proteins. The regulatory tight junction protein occludin was differently tagged and cotransfected in eukaryotic cells. These occludins colocalized within the plasma membrane of the same cell, coprecipitated and exhibited fluorescence resonance energy transfer. Differently tagged strand-forming claudin-5 also colocalized in the plasma membrane of the same cell and showed fluorescence resonance energy transfer. This demonstrates self-association in intact cells both of occludin and claudin-5 in one plasma membrane. In search of dimerizing regions of occludin, dimerization of its cytosolic C-terminal coiled-coil domain was identified. In claudin-5, the second extracellular loop was detected as a dimer. Since the transmembrane junctional adhesion molecule is also known to dimerize, the assumption that homodimerization of transmembrane tight junction proteins may serve as a common structural feature in tight junction assembly is supported.

  11. Quick and Easy Rate Equations for Multistep Reactions

    ERIC Educational Resources Information Center

    Savage, Phillip E.

    2008-01-01

    Students rarely see closed-form analytical rate equations derived from underlying chemical mechanisms that contain more than a few steps unless restrictive simplifying assumptions (e.g., existence of a rate-determining step) are made. Yet, work published decades ago allows closed-form analytical rate equations to be written quickly and easily for…

  12. 76 FR 36582 - Submission for Review: Standard Form 2809, Health Benefits Election Form

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ..., 2010 at Volume 75 FR 39587 allowing for a 60-day public comment period. We received comments from one... comments that: 1. Evaluate whether the proposed collection of information is necessary for the proper..., including the validity of the methodology and assumptions used; 3. Enhance the quality, utility, and clarity...

  13. Enhancing Adhesion: Relative Merits of Different Approaches

    NASA Technical Reports Server (NTRS)

    Penn, L. S.; Pater, R.

    1996-01-01

    Adhesive performance is improved mainly by manipulation of the bimaterials interface zone, which is only a few molecules thick. There are three approaches to enhancement of interfacial adhesion at the molecular level. They are 1) changing the nonchemically bonded interactions across the interface from weak ones to strong ones, 2) making the true interfacial area much larger than the simple geometric area, and 3) inducing chemical bonding between the two materials forming the interface. Our goal this summer was to question some of the built-in assumptions contained within these approaches and to determine the most promising approach, both theoretically and practically, for enhancing adhesion in NASA structures. Our computations revealed that all three of these approaches have, in theory, the potential to enhance molecular adhesion approximately ten-fold. Experiments, however, revealed that this excellent level of enhancement is not likely to be reached in practice. Each approach was found to be severely limited by practical problems. In addition, some of the built-in assumptions associated with these approaches were found to be insufficient or inadequate. The first approach, changing the nonchemically bonded interactions from weak to strong, is an example of one containing inadequate assumptions. The extensive literature on intermolecular interactions, based on solution studies, shows that certain functional group pairs interact much more strongly than others. It has always been assumed that these data can be reliably extended to systems where only one member of the pair is in solution and the other is contained in a solid surface. Our experiments this summer demonstrated that solution data do not adequately predict the strength of functional group interaction at the solid-liquid interface. Furthermore, the strong solvents needed to dissolve the monomers or polymers to which the functional groups of interest are attached compete successfully with the solid surface for the functional group. As a result, functional groups in solution cannot pair with the complementary groups in the solid surface, and the expected enhancement of nonchemically bonded interactions is not realized. The second approach, increasing the true interfacial area, is an example of one containing inadequate assumptions and suffering from numerous practical problems. First, practitioners have assumed that material removal, such as bead blasting or etching, increases true surface area (and therefore interfacial area) in a meaningful way. Our geometric analysis demonstrated that removal methods increase area by a factor of two at most. To increase interfacial area by an order of magnitude or more, a thin layer of high porosity must be added to the substrate surface prior to application of the adhesive phase. Consistent with this finding, we attempted to create a thin layer of rigid, highly porous glass on the surface of our smooth glass substrate by means of sol-gel technology. We were unable to surmount a wide variety of practical problems and obtained only collapsed, nonporous layers. Thus this approach, appealing in principle, would require long term development and is not promising in the near term. The third approach, inducing chemical bonding at the interface, is an example of one having neither inadequate assumptions nor insurmountable practical problems.
When silicate glass is the substrate, there are only a few chemical reactions that can be successfully conducted to create these chemical bonds, and these reactions usually involve silicon-containing reagents. We compared the silazane reagents to the silane reagents and found through experiment that the silazanes react with the glass surface much more readily, and under milder conditions, than the silanes. The functional groups attached to the glass surface by silazane reactions were not able to be removed by solvent extraction, elevated temperature exposure, or mechanical action. This clearly indicates that the formation of chemical bonds at the interface is the most effective approach for enhancing molecular adhesion.

  14. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K

    2018-02-01

    In a recent study Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogenously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent, and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
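    A minimal sketch of the general idea, projecting the leading principal components out of first-level residuals, is given below. It is not the authors' implementation: the number of components to remove and the specific decompositions used in the paper are not reproduced, and all data shapes and values are hypothetical.

```python
import numpy as np

def remove_principal_components(residuals, n_remove):
    """Remove the top principal components from first-level GLM residuals.

    residuals : array of shape (n_timepoints, n_voxels)
    n_remove  : number of leading components treated as structured,
                unmodeled signal (artifact/physiology) and projected out.
    """
    # Center each voxel time series, then take an SVD of the data matrix.
    centered = residuals - residuals.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Reconstruct using only the remaining (lower-variance) components.
    s_clean = s.copy()
    s_clean[:n_remove] = 0.0
    cleaned = (u * s_clean) @ vt
    return cleaned + residuals.mean(axis=0, keepdims=True)

# Illustrative use on synthetic residuals (500 volumes, 10000 voxels).
rng = np.random.default_rng(7)
res = rng.standard_normal((500, 10_000))
# Add a shared low-frequency drift as a stand-in for unmodeled structured signal.
res += np.outer(np.sin(np.linspace(0, 30, 500)), rng.standard_normal(10_000))
cleaned = remove_principal_components(res, n_remove=5)
print(cleaned.shape)
```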

  15. Methodological Issues in Examining Measurement Equivalence in Patient Reported Outcomes Measures: Methods Overview to the Two-Part Series, "Measurement Equivalence of the Patient Reported Outcomes Measurement Information System® (PROMIS®) Short Forms".

    PubMed

    Teresi, Jeanne A; Jones, Richard N

    2016-01-01

    The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System® (PROMIS®). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures.

  16. Hot granules medium pressure forming process of AA7075 conical parts

    NASA Astrophysics Data System (ADS)

    Dong, Guojiang; Zhao, Changcai; Peng, Yaxin; Li, Ying

    2015-05-01

    High-strength aluminum alloy plate has a low elongation at room temperature, so forming its components requires a high temperature. Liquid or gas is used as the pressure-transfer medium in existing flexible-mould forming processes, and the limited heat resistance of the medium and the pressurizing device restricts the application of thermoforming to aluminum alloy plate. To solve this problem, the existing medium is replaced by heat-resistant solid granules and general-purpose pressure equipment is used. Based on a pressure-transfer performance test of the solid granular medium, the feasibility of using the extended Drucker-Prager linear model in the finite element analysis is verified. The constitutive equation, the yield function and the theoretical forming limit diagram (FLD) of AA7075 sheet are established. Through finite element simulation of the hot granules medium pressure forming (HGMF) process, the influences of process parameters such as forming temperature, blank-holder gap and slab diameter on sheet metal forming performance are discussed, and the fracture zone of the forming process is analyzed and predicted, in agreement with the technological tests. A conical part with a half cone angle of 15° and a relative height H/d0 of 0.57 is formed in one operation at 250°C. The HGMF process solves the loading and sealing problems of existing flexible-mould forming processes and provides a novel technology for thermoforming of light alloy plate, such as magnesium, aluminium and titanium alloys.

  17. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
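    The parameter estimation idea can be sketched for the simplest case of exponentially distributed, positive-definite amplitudes, for which the stationary distribution is a Gamma distribution with a simple characteristic function; the model parameters are then recovered by matching the empirical characteristic function. This is an illustrative assumption, not the non-positive-definite amplitude cases studied in the paper, and all names and numbers below are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def empirical_cf(x, u):
    """Empirical characteristic function evaluated at frequencies u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

# Synthetic stationary samples from a Gamma process (stationary distribution of
# the exponential-pulse, exponential-amplitude model); gamma_true is the
# intermittency parameter and s_true the mean pulse amplitude (illustrative).
gamma_true, s_true = 2.5, 1.0
x = rng.gamma(shape=gamma_true, scale=s_true, size=100_000)

u = np.linspace(0.1, 5.0, 40)
phi_hat = empirical_cf(x, u)

def model_cf(params, u):
    g, s = params
    return (1.0 - 1j * u * s) ** (-g)      # characteristic function of a Gamma law

def loss(params):
    if np.any(np.asarray(params) <= 0):
        return np.inf
    return np.sum(np.abs(phi_hat - model_cf(params, u)) ** 2)

fit = minimize(loss, x0=[1.0, 0.5], method="Nelder-Mead")
print("estimated (gamma, s):", fit.x)      # should be close to (2.5, 1.0)
```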

  18. Differentiability of correlations in realistic quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabrera, Alejandro; Faria, Edson de; Pujals, Enrique

    2015-09-15

    We prove a version of Bell’s theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell’s theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.

  19. Kin networks and poverty among African Americans: past and present.

    PubMed

    Miller-Cribbs, Julie E; Farber, Naomi B

    2008-01-01

    Trends in social welfare policy and programs place increasing expectations on families to provide members with various forms of material and socioemotional support. The historic ability of kin networks of many African Americans to provide such support has been compromised by long-term community and family poverty. The potential mismatch between the expectations of social welfare systems for kin support and the actual functional capacities of kin networks places African Americans living in poverty at great risk of chronic poverty and its long-term multiple consequences. This article reviews historical and contemporary research on the structure and function of African American kin networks. On the basis of evidence of functional decline, the authors argue that social workers must re-examine the a priori assumption of viable kin networks as a reliable source of resilience among African Americans living in poverty. Social workers must focus assessment at all levels of practice on a variety of aspects of kin networks to make accurate judgments about not only the availability of resources, but also the perceived costs and benefits of participation in exchange for resources.

  20. Assessing the role of spatial correlations during collective cell spreading

    PubMed Central

    Treloar, Katrina K.; Simpson, Matthew J.; Binder, Benjamin J.; McElwain, D. L. Sean; Baker, Ruth E.

    2014-01-01

    Spreading cell fronts are essential features of development, repair and disease processes. Many mathematical models used to describe the motion of cell fronts, such as Fisher's equation, invoke a mean–field assumption which implies that there is no spatial structure, such as cell clustering, present. Here, we examine the presence of spatial structure using a combination of in vitro circular barrier assays, discrete random walk simulations and pair correlation functions. In particular, we analyse discrete simulation data using pair correlation functions to show that spatial structure can form in a spreading population of cells either through sufficiently strong cell–to–cell adhesion or sufficiently rapid cell proliferation. We analyse images from a circular barrier assay describing the spreading of a population of MM127 melanoma cells using the same pair correlation functions. Our results indicate that the spreading melanoma cell populations remain very close to spatially uniform, suggesting that the strength of cell–to–cell adhesion and the rate of cell proliferation are both sufficiently small so as not to induce any spatial patterning in the spreading populations. PMID:25026987
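    A simple estimator of the pair correlation function used in this kind of analysis can be sketched as follows. Edge corrections and the specific discrete, lattice-based correlation functions of the paper are not reproduced; all values are illustrative.

```python
import numpy as np

def pair_correlation(points, domain_area, r_edges):
    """Estimate the pair correlation function g(r) for points in a 2D domain.

    g(r) ~ 1 indicates no spatial structure; g(r) > 1 at short r indicates
    clustering, g(r) < 1 indicates regular spacing/exclusion.
    (Simple estimator; edge corrections are ignored for brevity.)
    """
    n = len(points)
    density = n / domain_area
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                  # unique pairs only
    counts, _ = np.histogram(d, bins=r_edges)
    r_lo, r_hi = r_edges[:-1], r_edges[1:]
    shell_area = np.pi * (r_hi**2 - r_lo**2)
    expected = 0.5 * n * density * shell_area       # pairs expected if uniform
    return counts / expected

# Illustrative: uniformly scattered cell centres in a 1000 x 1000 domain.
rng = np.random.default_rng(5)
pts = rng.uniform(0, 1000, size=(400, 2))
r_edges = np.linspace(5, 100, 20)
print(np.round(pair_correlation(pts, 1000 * 1000, r_edges), 2))  # values near 1
```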

  1. Resolving the homology—function relationship through comparative genomics of membrane-trafficking machinery and parasite cell biology

    PubMed Central

    Klinger, Christen M.; Ramirez-Macias, Inmaculada; Herman, Emily K.; Turkewitz, Aaron P.; Field, Mark C.; Dacks, Joel B.

    2016-01-01

    With advances in DNA sequencing technology, it is increasingly common and tractable to informatically look for genes of interest in the genomic databases of parasitic organisms and infer cellular states. Assignment of a putative gene function based on homology to functionally characterized genes in other organisms, though powerful, relies on the implicit assumption of functional homology, i.e. that orthology indicates conserved function. Eukaryotes reveal a dazzling array of cellular features and structural organization, suggesting a concomitant diversity in their underlying molecular machinery. Significantly, examples of novel functions for pre-existing or new paralogues are not uncommon. Do these examples undermine the basic assumption of functional homology, especially in parasitic protists, which are often highly derived? Here we examine the extent to which functional homology exists between organisms spanning the eukaryotic lineage. By comparing membrane trafficking proteins between parasitic protists and traditional model organisms, where direct functional evidence is available, we find that function is indeed largely conserved between orthologues, albeit with significant adaptation arising from the unique biological features within each lineage. PMID:27444378

  2. The Magnetar Model of the Superluminous Supernova GAIA16apd and the Explosion Jet Feedback Mechanism

    NASA Astrophysics Data System (ADS)

    Soker, Noam

    2017-04-01

    Under the assumption that jets explode core collapse supernovae (CCSNe) in a negative jet feedback mechanism (JFM), this paper shows that rapidly rotating neutron stars are likely to be formed when the explosion is very energetic. Under the assumption that an accretion disk or an accretion belt around the just-formed neutron star launches jets and that the accreted gas spins up the just-formed neutron star, I derive a crude relation between the energy that is stored in the spinning neutron star and the explosion energy. This relation is E_NS-spin/E_exp ≈ E_exp/10^52 erg; it shows that within the frame of the JFM explosion model of CCSNe, spinning neutron stars, such as magnetars, might have significant energy in super-energetic explosions. The existence of magnetars, if confirmed, such as in the recent super-energetic supernova GAIA16apd, further supports the call for a paradigm shift from neutrino-driven to jet-driven CCSN mechanisms.

  3. The impact of space travel on dosage form design and use.

    PubMed

    Aronsohn, A; Brazeau, G; Hughes, J

    1999-07-01

    The author speculates on potential factors that may influence the utilization of dosage forms in space. A key assumption is that most of the arguments will be based on current understanding of how dosage forms work on earth. Factors discussed include dosage form stability; and administration of drugs, particularly inhalation and aerosols. A sample experiment used a tissue culture model of drug transfer for passively absorbed drugs to address how alterations in hydrostatic pressure would change paracellular transport.

  4. Compact Assumption Applied to the Monopole Term of Farassat's Formulations

    NASA Technical Reports Server (NTRS)

    Lopes, Leonard V.

    2015-01-01

    Farassat's formulations provide an acoustic prediction at an observer location, given a source surface and its motion and flow conditions. This paper presents compact forms for the monopole term of several of Farassat's formulations. When the physical surface is elongated, as in the case of a high aspect ratio rotorcraft blade, compact forms can be derived which are shown to be a function of the blade cross-sectional area by reducing the computation from a surface integral to a line integral. The compact forms of all formulations are applied to two example cases: a short span wing with constant airfoil cross section moving at three forward flight Mach numbers and a rotor at two advance ratios. Acoustic pressure time histories and power spectral densities of monopole noise predicted from the compact forms of all the formulations at several observer positions are shown to compare very closely to the predictions from their non-compact counterparts. A study on the influence of rotorcraft blade shape on the high frequency portion of the power spectral density shows that there is a direct correlation between the aspect ratio of the airfoil and the error incurred by using the compact form. Finally, a prediction of pressure gradient from the non-compact and compact forms of the thickness term of Formulation G1A shows that using the compact forms results in a 99.6% improvement in computation time, which will be critical when noise is incorporated into a design environment.

  5. GLACiAR, an Open-Source Python Tool for Simulations of Source Recovery and Completeness in Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.

    2018-06-01

    The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout cosmic history. One key ingredient to derive this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions for the observations. In this paper, we present GLACiAR, an open-source Python tool available on GitHub to estimate the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indexes and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ∼ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.
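    The core completeness measurement, the recovery fraction of injected artificial sources per input-magnitude bin, can be sketched as below. The detection model and all numbers are hypothetical placeholders; this is not GLACiAR's actual source-insertion or recovery machinery.

```python
import numpy as np

def completeness_curve(injected_mag, recovered_mask, mag_edges):
    """Recovery fraction per input-magnitude bin from an injection-recovery run.

    injected_mag   : magnitudes of all artificial sources inserted in the images
    recovered_mask : boolean array, True where the source was re-detected
    mag_edges      : bin edges for the completeness curve
    """
    injected, _ = np.histogram(injected_mag, bins=mag_edges)
    recovered, _ = np.histogram(injected_mag[recovered_mask], bins=mag_edges)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(injected > 0, recovered / injected, np.nan)

# Illustrative run: detection probability falls off near a limiting magnitude.
rng = np.random.default_rng(11)
mags = rng.uniform(24.0, 30.0, size=20_000)
p_detect = 1.0 / (1.0 + np.exp((mags - 28.0) / 0.3))   # hypothetical falloff
recovered = rng.random(mags.size) < p_detect
edges = np.arange(24.0, 30.5, 0.5)
print(np.round(completeness_curve(mags, recovered, edges), 2))
```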

  6. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  7. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competing background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian distributed source has been replaced by that of the generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
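    For reference, the generalized Gaussian family mentioned above and its closed-form univariate differential entropy can be written down and checked numerically; the sketch below does this for a Laplacian-shaped case. The multivariate entropy expressions derived in the paper are not reproduced, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import quad

def ggd_pdf(x, alpha, beta, mu=0.0):
    """Generalized Gaussian density: beta=2 is Gaussian, beta=1 Laplacian,
    smaller beta gives the more peaked, heavy-tailed speech-shaped cases."""
    c = beta / (2.0 * alpha * Gamma(1.0 / beta))
    return c * np.exp(-(np.abs(x - mu) / alpha) ** beta)

def ggd_entropy(alpha, beta):
    """Closed-form differential entropy of the univariate generalized Gaussian."""
    return 1.0 / beta + np.log(2.0 * alpha * Gamma(1.0 / beta) / beta)

# Cross-check the closed form against numerical integration of -f*ln(f).
alpha, beta = 1.3, 1.0                       # Laplacian-shaped case (illustrative)
num, _ = quad(lambda x: -ggd_pdf(x, alpha, beta) * np.log(ggd_pdf(x, alpha, beta)),
              -60, 60, points=[0.0])
print(ggd_entropy(alpha, beta), num)         # the two values should agree closely
```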

  8. 76 FR 81471 - Agency Information Collection Activities: Proposed Collection; Comment Request-Special...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-28

    ... Children (WIC) Forms: FNS-698, FNS-699, and FNS-700; The Integrity Profile (TIP) AGENCY: Food and Nutrition..., including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and... Marianas, and the Virgin Islands. The reporting burden consists of three automated forms, the FNS-698, FNS...

  9. Galaxy And Mass Assembly (GAMA): growing up in a bad neighbourhood - how do low-mass galaxies become passive?

    NASA Astrophysics Data System (ADS)

    Davies, L. J. M.; Robotham, A. S. G.; Driver, S. P.; Alpaslan, M.; Baldry, I. K.; Bland-Hawthorn, J.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Holwerda, B. W.; Hopkins, A. M.; Lara-López, M. A.; Mahajan, S.; Moffett, A. J.; Owers, M. S.; Phillipps, S.

    2016-02-01

    Both theoretical predictions and observations of the very nearby Universe suggest that low-mass galaxies (log10[M*/M⊙] < 9.5) are likely to remain star-forming unless they are affected by their local environment. To test this premise, we compare and contrast the local environment of both passive and star-forming galaxies as a function of stellar mass, using the Galaxy and Mass Assembly survey. We find that passive fractions are higher in both interacting pair and group galaxies than the field at all stellar masses, and that this effect is most apparent in the lowest mass galaxies. We also find that essentially all passive log10[M*/M⊙] < 8.5 galaxies are found in pair/group environments, suggesting that local interactions with a more massive neighbour cause them to cease forming new stars. We find that the effects of immediate environment (local galaxy-galaxy interactions) in forming passive systems increase with decreasing stellar mass, and highlight that this is potentially due to increasing interaction time-scales giving sufficient time for the galaxy to become passive via starvation. We then present a simplistic model to test this premise, and show that given our speculative assumptions, it is consistent with our observed results.

  10. Linear instability in the wake of an elliptic wing

    NASA Astrophysics Data System (ADS)

    He, Wei; Tendero, Juan Ángel; Paredes, Pedro; Theofilis, Vassilis

    2017-12-01

    Linear global instability analysis has been performed in the wake of a low aspect ratio three-dimensional wing of elliptic cross section, constructed with appropriately scaled Eppler E387 airfoils. The flow field over the airfoil and in its wake has been computed by full three-dimensional direct numerical simulation at a chord Reynolds number of Re_c = 1750 and two angles of attack, AoA = 0° and 5°. Point-vortex methods have been employed to predict the inviscid counterpart of this flow. The spatial BiGlobal eigenvalue problem governing linear small-amplitude perturbations superposed upon the viscous three-dimensional wake has been solved at several axial locations, and results were used to initialize linear PSE-3D analyses without any simplifying assumptions regarding the form of the trailing vortex system, other than weak dependence of all flow quantities on the axial spatial direction. Two classes of linearly unstable perturbations were identified, namely stronger-amplified symmetric modes and weaker-amplified antisymmetric disturbances, both peaking at the vortex sheet which connects the trailing vortices. The amplitude functions of both classes of modes were documented, and their characteristics were compared with those delivered by local linear stability analysis in the wake near the symmetry plane and in the vicinity of the vortex core. While all linear instability analysis approaches employed have delivered qualitatively consistent predictions, only PSE-3D is free from assumptions regarding the underlying base flow and should thus be employed to obtain quantitative information on amplification rates and amplitude functions in this class of configurations.

  11. Planar isotropy of passive scalar turbulent mixing with a mean perpendicular gradient.

    PubMed

    Danaila, L; Dusek, J; Le Gal, P; Anselmet, F; Brun, C; Pumir, A

    1999-08-01

    A recently proposed evolution equation [Vaienti et al., Physica D 85, 405 (1994)] for the probability density functions (PDF's) of turbulent passive scalar increments obtained under the assumptions of fully three-dimensional homogeneity and isotropy is submitted to validation using direct numerical simulation (DNS) results of the mixing of a passive scalar with a nonzero mean gradient by a homogeneous and isotropic turbulent velocity field. It is shown that this approach leads to a quantitatively correct balance between the different terms of the equation, in a plane perpendicular to the mean gradient, at small scales and at large Péclet number. A weaker assumption of homogeneity and isotropy restricted to the plane normal to the mean gradient is then considered to derive an equation describing the evolution of the PDF's as a function of the spatial scale and the scalar increments. A very good agreement between the theory and the DNS data is obtained at all scales. As a particular case of the theory, we derive a generalized form for the well-known Yaglom equation (the isotropic relation between the second-order moments for temperature increments and the third-order velocity-temperature mixed moments). This approach allows us to determine quantitatively how the integral scale properties influence the properties of mixing throughout the whole range of scales. In the simple configuration considered here, the PDF's of the scalar increments perpendicular to the mean gradient can be theoretically described once the sources of inhomogeneity and anisotropy at large scales are correctly taken into account.
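    For reference, the classical isotropic Yaglom relation referred to above can be written, under homogeneity and isotropy and with ε̄_θ = κ⟨|∇θ|²⟩ the mean scalar dissipation rate and κ the molecular diffusivity, as

```latex
-\left\langle \Delta u_{\parallel}\,(\Delta\theta)^{2}\right\rangle
+ 2\kappa\,\frac{\mathrm{d}}{\mathrm{d}r}\left\langle(\Delta\theta)^{2}\right\rangle
= \frac{4}{3}\,\bar{\epsilon}_{\theta}\,r ,
```

    where Δu_∥ and Δθ are the longitudinal velocity and scalar increments over separation r; in the inertial-convective range the diffusive term is negligible and the 4/3 law is recovered. The generalized, planar form of this budget derived in the paper is not reproduced here.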

  12. The cerebellum in action: a simulation and robotics study.

    PubMed

    Hofstötter, Constanze; Mintz, Matti; Verschure, Paul F M J

    2002-10-01

    The control or prediction of the precise timing of events is a central aspect of many tasks assigned to the cerebellum. Despite much detailed knowledge of its physiology and anatomy, it remains unclear how the cerebellar circuitry can achieve such an adaptive timing function. We present a computational model pursuing this question for one extensively studied type of cerebellar-mediated learning: the classical conditioning of discrete motor responses. This model combines multiple current assumptions on the function of the cerebellar circuitry and was used to investigate whether plasticity in the cerebellar cortex alone can mediate adaptive conditioned response timing. In particular, we studied the effect of changes in the strength of the synapses formed between parallel fibres and Purkinje cells under the control of a negative feedback loop formed between inferior olive, cerebellar cortex and cerebellar deep nuclei. The learning performance of the model was evaluated at the circuit level in simulated conditioning experiments as well as at the behavioural level using a mobile robot. We demonstrate that the model supports adaptively timed responses under real-world conditions. Thus, in contrast to many other models that have focused on cerebellar-mediated conditioning, we investigated whether and how the suggested underlying mechanisms could give rise to behavioural phenomena.

  13. AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  14. The roles of teachers' science talk in revealing language demands within diverse elementary school classrooms: a study of teaching heat and temperature in Singapore

    NASA Astrophysics Data System (ADS)

    Seah, Lay Hoon; Yore, Larry D.

    2017-01-01

    This study of three science teachers' lessons on heat and temperature seeks to characterise classroom talk that highlighted the ways language is used and to examine the nature of the language demands revealed in constructing, negotiating, arguing and communicating science ideas. The transcripts from the entire instructional units for these teachers' four culturally and linguistically diverse Grade 4 classes (10 years old) with English as the language of instruction constitute the data for this investigation. Analysis of these transcripts focused on teachers' talk that made explicit reference to the form or function of the language of science and led to the inductive development of the 'Attending to Language Demands in Science' analytical framework. This framework in turn revealed that the major foregrounding purposes of teachers' talk include labelling, explaining, differentiating, selecting and constructing. Further classification of the instances within these categories revealed the extensive and contextualised nature of the language demands. The results challenge the conventional assumption that basic literacy skills dominate over disciplinary literacy skills in primary school science. Potential uses of the analytical framework that could further expand our understanding of the forms, functions and demands of language used in elementary school science are also discussed.

  15. UFOs: What to Do?

    DTIC Science & Technology

    1968-11-01

    of the existence of other highly developed life forms. To begin with, the observable universe -- that is, the distance to which we can observe luminous...been around long enough that life forms as developed as our own could exist. Implicit in further discussion are the assumptions that: 1. Planets... life will exist; 3. Our own history of past evolution and development is neither slow nor fast, but average and typical for life forms. (Ours is the

  16. Temperature-dependent microindentation data of an epoxy composition in the glassy region

    NASA Astrophysics Data System (ADS)

    Minster, Jiří; Králík, Vlastimil

    2015-02-01

    The short-term instrumented microindentation technique was applied for assessing the influence of temperature in the glassy region on the time-dependent mechanical properties of an average epoxy resin mix near to its native state. Linear viscoelasticity theory with the assumption of time-independent Poisson ratio value forms the basis for processing the experimental results. The sharp standard Berkovich indenter was used to measure the local mechanical properties at temperatures 20, 24, 28, and 35 °C. The short-term viscoelastic compliance histories were defined by the Kohlrausch-Williams-Watts double exponential function. The findings suggest that depth-sensing indentation data of thermorheologically simple materials influenced by different temperatures in the glassy region can also be used, through the time-temperature superposition, to extract viscoelastic response functions accurately. This statement is supported by the comparison of the viscoelastic compliance master curve of the tested material with data derived from standard macro creep measurements under pressure on the material in a conformable state.
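
    As a minimal illustration of the processing described above, the sketch below evaluates a stretched-exponential (KWW-type) creep compliance and applies a time-temperature shift factor; it is written in Python, and the parameter values and the exact functional form are placeholders rather than the authors' fitted model.

        import numpy as np

        def kww_compliance(t, J0, dJ, tau, beta):
            # Stretched-exponential (KWW-type) creep compliance, a common empirical form.
            return J0 + dJ * (1.0 - np.exp(-(t / tau) ** beta))

        def to_reduced_time(t, a_T):
            # Time-temperature superposition: the reduced time t / a_T maps a curve
            # measured at temperature T onto the reference-temperature master curve.
            return t / a_T

        t = np.logspace(0, 3, 50)                        # indentation times, s
        J_35C = kww_compliance(t, 0.4, 0.1, 500.0, 0.4)  # placeholder parameters, GPa^-1
        t_reduced = to_reduced_time(t, a_T=0.2)          # placeholder shift factor for 35 degC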

  17. Assessment of morphological and functional changes in organs of rats after intramuscular introduction of iron nanoparticles and their agglomerates.

    PubMed

    Sizova, Elena; Miroshnikov, Sergey; Yausheva, Elena; Polyakova, Valentina

    2015-01-01

    The research was performed on male Wistar rats, based on the assumption that new microelement preparations containing metal nanoparticles and their agglomerates have potential. Morphological and functional changes in tissues at the injection site and the dynamics of chemical element metabolism (25 indicators) in the body were assessed after repeated intramuscular injections (seven in total) of a preparation containing an agglomerate of iron nanoparticles. As a result, an iron depot was formed in myosymplasts at the injection sites. The quantity of muscle fibers showing a positive Perls' stain increased with the number of injections. However, the concentrations of most chemical elements, including iron, significantly decreased in the skeletal muscle system as a whole (injection sites excluded), before returning to control levels after the sixth and seventh injections. Among the studied organs (liver, kidneys, and spleen), Caspase-3 expression was revealed only in the spleen, and it depended directly on the number of injections. Iron elimination from the preparation containing nanoparticles and their agglomerates proceeded with different intensities.

  18. A homogenization approach for the effective drained viscoelastic properties of 2D porous media and an application for cortical bone.

    PubMed

    Nguyen, Sy-Tuan; Vu, Mai-Ba; Vu, Minh-Ngoc; To, Quy-Dong

    2018-02-01

    Closed-form solutions for the effective rheological properties of a 2D viscoelastic drained porous medium made of a Generalized Maxwell viscoelastic matrix and pore inclusions are developed and applied for cortical bone. The in-plane (transverse) effective viscoelastic bulk and shear moduli of the Generalized Maxwell rheology of the homogenized medium are expressed as functions of the porosity and the viscoelastic properties of the solid phase. When deriving these functions, the classical inverse Laplace-Carson transformation technique is avoided, due to its complexity, by considering the short and long term approximations. The approximated results are validated against exact solutions obtained from the inverse Laplace-Carson transform for a simple configuration when the latter is available. An application to cortical bone, with the assumption of circular pores in the transverse plane, shows that the proposed approximation fits the experimental data very well. Copyright © 2017 Elsevier Ltd. All rights reserved.
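
    As a minimal sketch of the short- and long-term approximations mentioned above, the Python fragment below evaluates a Generalized Maxwell (Prony series) relaxation modulus and its glassy (t -> 0) and relaxed (t -> infinity) limits; the branch moduli and relaxation times are placeholders, and the homogenization step that introduces the porosity dependence is not reproduced here.

        import numpy as np

        def gen_maxwell_modulus(t, E_inf, branches):
            # Generalized Maxwell relaxation modulus: E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
            return E_inf + sum(E_i * np.exp(-t / tau_i) for E_i, tau_i in branches)

        branches = [(4.0, 1e2), (2.0, 1e4)]  # placeholder (E_i [GPa], tau_i [s]) pairs
        E_inf = 10.0                         # placeholder equilibrium modulus [GPa]

        E_short = E_inf + sum(E for E, _ in branches)  # short-term (glassy) limit, t -> 0
        E_long = E_inf                                 # long-term (relaxed) limit, t -> infinity
        E_mid = gen_maxwell_modulus(1e3, E_inf, branches)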

  19. Determination of hot carrier energy distributions from inversion of ultrafast pump-probe reflectivity measurements.

    PubMed

    Heilpern, Tal; Manjare, Manoj; Govorov, Alexander O; Wiederrecht, Gary P; Gray, Stephen K; Harutyunyan, Hayk

    2018-05-10

    Developing a fundamental understanding of ultrafast non-thermal processes in metallic nanosystems will lead to applications in photodetection, photochemistry and photonic circuitry. Typically, non-thermal and thermal carrier populations in plasmonic systems are inferred either by making assumptions about the functional form of the initial energy distribution or by using indirect sensors like localized plasmon frequency shifts. Here we directly determine non-thermal and thermal distributions and dynamics in thin films by applying a double inversion procedure to optical pump-probe data that relates the reflectivity changes around the Fermi energy to the changes in the dielectric function and in the single-electron energy band occupancies. When applied to normal-incidence measurements, our method uncovers the ultrafast excitation of a non-Fermi-Dirac distribution and its subsequent thermalization dynamics. Furthermore, when applied to the Kretschmann configuration, we show that the excitation of propagating plasmons leads to a broader energy distribution of electrons due to the enhanced Landau damping.

  20. On The Sfr-M* Main Sequence Archetypal Star-Formation History And Analytical Models

    NASA Astrophysics Data System (ADS)

    Ciesla, Laure; Elbaz, David; Fensch, Jeremy

    2017-06-01

    From the evolution of the main sequence (MS) we can build the star formation history (SFH) of MS galaxies, assuming that they follow this relation throughout their lives. We show that this SFH is not only a function of cosmic time but also involves the seed mass of the galaxy. We discuss the implications of this MS SFH for the stellar mass growth and for the entry into the passive region of the UVJ diagram while the galaxy is still forming stars. We test the ability of different analytical SFH forms found in the literature to probe the SFR of all types of galaxies. Using a sample of GOODS-South galaxies, we show that these SFHs artificially enhance or create an age gradient parallel to the MS. A simple model of an MS galaxy undergoing some fluctuations, such as those expected from compaction or variations in gas accretion, does not predict such a gradient, which we show is due to the SFH assumptions. We propose an improved analytical form, incorporating flexibility in the recent SFH, that we calibrate as a diagnostic to identify rapidly quenched galaxies in large photometric surveys.
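
    The kind of late-time flexibility proposed above can be pictured with a short Python sketch; the delayed-tau base form and the multiplicative factor applied after a pivot time are illustrative stand-ins for the calibrated parameterization of the paper.

        import numpy as np

        def delayed_tau_sfh(t, tau, norm=1.0):
            # A common analytical SFH form: SFR(t) proportional to t * exp(-t / tau).
            return norm * t * np.exp(-t / tau)

        def flexible_sfh(t, tau, t_flex, r, norm=1.0):
            # Add freedom in the recent SFH: rescale the SFR by a factor r after t_flex,
            # so rapidly quenched (r << 1) or bursting (r > 1) galaxies can be described.
            base = delayed_tau_sfh(t, tau, norm)
            return np.where(t < t_flex, base, r * base)

        t = np.linspace(0.01, 13.0, 200)  # cosmic time in Gyr (illustrative)
        sfr_quenched = flexible_sfh(t, tau=3.0, t_flex=12.0, r=0.05)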

  1. Energetics of the formation of Cu-Ag core–shell nanoparticles

    DOE PAGES

    Chandross, Michael

    2014-10-06

    Our work presents molecular dynamics and Monte Carlo simulations aimed at developing an understanding of the formation of core–shell Cu-Ag nanoparticles. The effects of surface and interfacial energies were considered and used to form a phenomenological model that calculates the energy gained upon the formation of a core–shell structure from two previously distinct, non-interacting nanoparticles. In most cases, the core–shell structure was found to be energetically favored. Specifically, the difference in energy as a function of the radii of the individual Cu and Ag particles was examined, with the assumption that a core–shell structure forms. In general, it was found that the energetic gain from forming such a structure increased with increasing size of the initial Ag particle. This result was attributed to the reduction in surface energy. Moreover, for two separate particles, both Cu and Ag contribute to the surface energy; however, for a core–shell structure, the only contribution to the surface energy is from the Ag shell and the Cu contribution is changed to a Cu–Ag interfacial energy, which is always smaller.
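
    The phenomenological energy balance summarized above can be written down in a few lines; the Python sketch below compares the surface energy of two separate spheres with that of a volume-conserving core-shell particle, and the surface and interfacial energy values are placeholders rather than those used in the study.

        import numpy as np

        def sphere_area(r):
            return 4.0 * np.pi * r ** 2

        def core_shell_energy_gain(r_cu, r_ag, gamma_cu, gamma_ag, gamma_int):
            # Two separate particles: Cu and Ag each contribute a free-surface term.
            e_separate = gamma_cu * sphere_area(r_cu) + gamma_ag * sphere_area(r_ag)
            # Core-shell particle: conserving the Ag volume gives an outer radius
            # R^3 = r_cu^3 + r_ag^3, and the Cu surface becomes a Cu-Ag interface.
            R = (r_cu ** 3 + r_ag ** 3) ** (1.0 / 3.0)
            e_core_shell = gamma_ag * sphere_area(R) + gamma_int * sphere_area(r_cu)
            return e_separate - e_core_shell  # > 0 means the core-shell structure is favored

        # Placeholder radii (nm) and energies (J/m^2); only the sign of the difference matters here.
        print(core_shell_energy_gain(r_cu=3.0, r_ag=5.0, gamma_cu=1.8, gamma_ag=1.2, gamma_int=0.3))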

  2. 78 FR 37245 - Submission for Review: OPM Form 1203-FX, Occupational Questionnaire

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ...The Automated Systems Management Branch, Office of Personnel Management (OPM) offers the general public and other Federal agencies the opportunity to comment on an existing information collection request (ICR) 3206-0040, Occupational Questionnaire, OPM Form 1203-FX. As required by the Paperwork Reduction Act of 1995, (Pub. L. 104-13, 44 U.S.C. chapter 35) as amended by the Clinger-Cohen Act (Pub. L. 104- 106), OPM is soliciting comments for this collection. The Office of Management and Budget is particularly interested in comments that: 1. Evaluate whether the proposed collection of information is necessary for the proper performance of the functions of OPM, including whether the information will have practical utility; 2. Evaluate the accuracy of OPM's estimate of the burden of the proposed collection of information, including the validity of the methodology and assumptions used; 3. Enhance the quality, utility, and clarity of the information to be collected; and 4. Minimize the burden of the collection of information on those who are to respond, including through the use of appropriate automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submissions of responses.

  3. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

    In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived via the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that captures the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. By using techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to exact values. Results obtained by continuation are compared to those computed with the discrete eigenvalue problem.

  4. Dynamics of an HIV-1 infection model with cell mediated immunity

    NASA Astrophysics Data System (ADS)

    Yu, Pei; Huang, Jianing; Jiang, Jiao

    2014-10-01

    In this paper, we study the dynamics of an improved mathematical model of HIV-1 infection with cell-mediated immunity. This new 5-dimensional model combines a basic 3-dimensional HIV-1 model with a 4-dimensional immune response model, and more realistically describes the dynamics between uninfected cells, infected cells, virus, CTL response cells and CTL effector cells. The 5-dimensional model may be reduced to the 4-dimensional model by applying a quasi-steady-state assumption to the virus variable. However, it is shown in this paper that the virus compartment needs to be retained in the model and that the quasi-steady-state assumption should be applied carefully, since it may miss some important dynamical behavior of the system. A detailed bifurcation analysis shows that the system has three equilibrium solutions, namely the infection-free equilibrium, the infectious equilibrium without CTL, and the infectious equilibrium with CTL, and that a series of bifurcations, including two transcritical bifurcations and one or two possible Hopf bifurcations, occur from these three equilibria as the basic reproduction number is varied. The mathematical methods applied in this paper include characteristic equations, the Routh-Hurwitz condition, the fluctuation lemma, Lyapunov functions and the computation of normal forms. Numerical simulations are also presented to demonstrate the applicability of the theoretical predictions.
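
    The quasi-steady-state reduction discussed above can be illustrated with the standard 3-dimensional target-cell/infected-cell/virus model (a Python sketch with illustrative parameter values, not the paper's 5-dimensional system); setting dV/dt to zero gives V = p*I/c, which is precisely the step the authors caution about.

        import numpy as np
        from scipy.integrate import solve_ivp

        def hiv_basic(t, y, lam, d, beta, delta, p, c):
            # Basic HIV-1 model: uninfected target cells T, infected cells I, free virus V.
            T, I, V = y
            dT = lam - d * T - beta * T * V
            dI = beta * T * V - delta * I
            dV = p * I - c * V
            return [dT, dI, dV]

        params = (1e4, 0.01, 2e-7, 0.5, 100.0, 5.0)  # illustrative (lam, d, beta, delta, p, c)
        sol = solve_ivp(hiv_basic, (0.0, 200.0), [1e6, 0.0, 1.0], args=params, rtol=1e-8)

        T, I, V = sol.y[:, -1]
        V_qss = params[4] * I / params[5]  # quasi-steady-state approximation V = p*I/c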

  5. Trace formulas for a class of non-Fredholm operators: A review

    NASA Astrophysics Data System (ADS)

    Carey, Alan; Gesztesy, Fritz; Grosse, Harald; Levitina, Galina; Potapov, Denis; Sukochev, Fedor; Zanin, Dmitriy

    2016-11-01

    Take a one-parameter family of self-adjoint Fredholm operators {A(t)}_{t∈ℝ} on a Hilbert space ℋ, joining endpoints A±. There is a long history of work on the question of whether the spectral flow along this path is given by the index of the operator D_A = (d/dt) + A acting in L²(ℝ; ℋ), where A denotes the multiplication operator (Af)(t) = A(t)f(t) for f ∈ dom(A). Most results are about the case where the operators A(·) have compact resolvent. In this article, we review what is known when these operators have some essential spectrum and describe some new results. Using the operators H₁ = D_A* D_A and H₂ = D_A D_A*, an abstract trace formula for Fredholm operators with essential spectrum was proved in [23], extending a result of Pushnitski [35], although still under strong hypotheses on A(·): tr_{L²(ℝ;ℋ)}((H₂ − zI)⁻¹ − (H₁ − zI)⁻¹) = (1/(2z)) tr_ℋ(g_z(A₊) − g_z(A₋)), where g_z(x) = x(x² − z)^(−1/2), x ∈ ℝ, z ∈ ℂ∖[0,∞). Associated to the pairs (H₂, H₁) and (A₊, A₋) are Krein spectral shift functions ξ(·; H₂, H₁) and ξ(·; A₊, A₋), respectively. From the trace formula, it was shown that there is a second, Pushnitski-type, formula: ξ(λ; H₂, H₁) = (1/π) ∫_{−√λ}^{√λ} ξ(ν; A₊, A₋) (λ − ν²)^(−1/2) dν for a.e. λ > 0. This can be employed to establish the desired equality, Fredholm index = ξ(0; A₊, A₋) = spectral flow. This equality was generalized to non-Fredholm operators in [14] in the form Witten index = [ξ_R(0; A₊, A₋) + ξ_L(0; A₊, A₋)]/2, replacing the Fredholm index on the left-hand side by the Witten index of D_A and ξ(0; A₊, A₋) on the right-hand side by an appropriate arithmetic mean (assuming 0 is a right and left Lebesgue point for ξ(·; A₊, A₋), denoted by ξ_R(0; A₊, A₋) and ξ_L(0; A₊, A₋), respectively). But this applies only under the restrictive assumption that the endpoint A₊ is a relatively trace class perturbation of A₋ (ruling out general differential operators). In addition to reviewing this previous work, we describe in this article some extensions using a (1 + 1)-dimensional setup, where A± are non-Fredholm differential operators. By a careful analysis we prove, for a class of examples, that the preceding trace formula still holds in this more general situation. Then we prove that the Pushnitski-type formula for spectral shift functions also holds, and this then gives the equality of spectral shift functions in the form ξ(λ; H₂, H₁) = ξ(ν; A₊, A₋) for a.e. λ > 0 and a.e. ν ∈ ℝ, for the (1 + 1)-dimensional model operator at hand. This shows that neither the relatively trace class perturbation assumption nor the Fredholm assumption is required if one works with spectral shift functions. The results support the view that the spectral shift function should be a replacement for the spectral flow in certain non-Fredholm situations and also point the way to the study of higher-dimensional cases. We discuss the connection with summability questions in Fredholm modules in an appendix.

  6. Slipping Anchor? Testing the Vignettes Approach to Identification and Correction of Reporting Heterogeneity

    PubMed Central

    d’Uva, Teresa Bago; Lindeboom, Maarten; O’Donnell, Owen; van Doorslaer, Eddy

    2011-01-01

    We propose tests of the two assumptions under which anchoring vignettes identify heterogeneity in reporting of categorical evaluations. Systematic variation in the perceived difference between any two vignette states is sufficient to reject vignette equivalence. Response consistency - the respondent uses the same response scale to evaluate the vignette and herself – is testable given sufficiently comprehensive objective indicators that independently identify response scales. Both assumptions are rejected for reporting of cognitive and physical functioning in a sample of older English individuals, although a weaker test resting on less stringent assumptions does not reject response consistency for cognition. PMID:22184479

  7. [The accuracy of rapid equilibrium assumption in steady-state enzyme kinetics is the function of equilibrium segment structure and properties].

    PubMed

    Vrzheshch, P V

    2015-01-01

    A quantitative evaluation of the accuracy of the rapid equilibrium assumption in steady-state enzyme kinetics was obtained for an arbitrary mechanism of an enzyme-catalyzed reaction. This evaluation depends only on the structure and properties of the equilibrium segment and does not depend on the structure and properties of the rest (the stationary part) of the kinetic scheme. The smaller the values of the edges leaving the equilibrium segment relative to the values of the edges within it, the higher the accuracy with which the intermediate concentrations and the reaction velocity are determined under the rapid equilibrium assumption.

  8. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  9. 41 CFR 60-3.9 - No assumption of validity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... general reputation of a test or other selection procedures, its author or its publisher, or casual reports... of validity based on a procedure's name or descriptive labels; all forms of promotional literature...

  10. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
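
    A generic Wald sequential probability ratio test takes the form sketched below (Python); the thresholds follow Wald's classical approximations, while the conjunction-specific likelihood ratio, which the paper shows reduces to a simple function of the current and prior collision-probability estimates, is only represented abstractly here.

        import math

        def wald_sprt(log_likelihood_ratios, alpha=0.001, beta=0.01):
            # Classical Wald thresholds: accept H1 if the ratio exceeds A, accept H0 below B.
            log_A = math.log((1.0 - beta) / alpha)
            log_B = math.log(beta / (1.0 - alpha))
            log_lambda = 0.0
            for llr in log_likelihood_ratios:  # one term per new tracking update
                log_lambda += llr
                if log_lambda >= log_A:
                    return "accept H1 (treat as a likely collision)"
                if log_lambda <= log_B:
                    return "accept H0 (dismiss the conjunction)"
            return "continue testing"

        # Each element stands in for a log-likelihood-ratio increment computed from the
        # relative state, its covariance and the collision-probability estimates.
        print(wald_sprt([0.8, 1.1, 0.9, 1.3, 1.5]))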

  11. The determination of some requirements for a helicopter flight research simulation facility

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.

    1977-01-01

    Important requirements were defined for a flight simulation facility to support Army helicopter development. In particular requirements associated with the visual and motion subsystems of the planned simulator were studied. The method used in the motion requirements study is presented together with the underlying assumptions and a description of the supporting data. Results are given in a form suitable for use in a preliminary design. Visual requirements associated with a television camera/model concept are related. The important parameters are described together with substantiating data and assumptions. Research recommendations are given.

  12. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions agents will not choose to do so, under some usual assumptions.

  13. New Statistical Model for Variability of Aerosol Optical Thickness: Theory and Application to MODIS Data over Ocean

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian

    2016-01-01

    A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, namely the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions having the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5° spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
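
    The generative idea (AOT as the exponent of an underlying Gaussian process) can be sketched in a few lines of Python; the exponential autocorrelation, grid spacing and log-mean/log-variance used here are illustrative placeholders, not the specific parameterization adopted in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        n, dx, corr_len = 512, 10.0, 100.0  # grid points, 10 km pixels, correlation length in km
        x = np.arange(n) * dx
        # Covariance of the underlying Gaussian process (illustrative exponential form).
        cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        g = rng.multivariate_normal(mean=np.zeros(n), cov=cov)

        tau = np.exp(-1.5 + 0.5 * g)  # lognormal AOT field (placeholder log-mean and log-std)

        def structure_function(field, lag):
            # D(r) = mean of (tau(x + r) - tau(x))^2, estimated from the sampled field.
            return np.mean((field[lag:] - field[:-lag]) ** 2)

        D = [structure_function(tau, k) for k in range(1, 50)]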

  14. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  15. Turbulence kinetic energy equation for dilute suspensions

    NASA Technical Reports Server (NTRS)

    Abou-Arab, T. W.; Roco, M. C.

    1989-01-01

    A multiphase turbulence closure model is presented which employs one transport equation, namely the turbulence kinetic energy equation. The proposed form of this equation is different from the earlier formulations in some aspects. The power spectrum of the carrier fluid is divided into two regions, which interact in different ways and at different rates with the suspended particles as a function of the particle-eddy size ratio and density ratio. The length scale is described algebraically. A mass/time averaging procedure for the momentum and kinetic energy equations is adopted. The resulting turbulence correlations are modeled under less restrictive assumptions compared with previous work. The closures for the momentum and kinetic energy equations are given. Comparisons of the predictions with experimental results on liquid-solid jet and gas-solid pipe flow show satisfactory agreement.

  16. Robust optimal control of material flows in demand-driven supply networks

    NASA Astrophysics Data System (ADS)

    Laumanns, Marco; Lefeber, Erjen

    2006-04-01

    We develop a model based on stochastic discrete-time controlled dynamical systems in order to derive optimal policies for controlling the material flow in supply networks. Each node in the network is described as a transducer such that the dynamics of the material and information flows within the entire network can be expressed by a system of first-order difference equations, where some inputs to the system act as external disturbances. We apply methods from constrained robust optimal control to compute the explicit control law as a function of the current state. For the numerical examples considered, these control laws correspond to certain classes of optimal ordering policies from inventory management while avoiding, however, any a priori assumptions about the general form of the policy.
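
    A minimal version of the setup described above (a single node with demand acting as an external disturbance) might look like the following Python sketch; the matrices and the order-up-to policy are illustrative, not the explicit control law computed in the paper.

        import numpy as np

        # One-node inventory dynamics: state x = [inventory, outstanding orders],
        # input u = order quantity, disturbance d = customer demand.
        A = np.array([[1.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])
        E = np.array([[-1.0],
                      [0.0]])

        def step(x, u, d):
            # First-order difference equation x_{k+1} = A x_k + B u_k + E d_k.
            return A @ x + B @ np.array([u]) + E @ np.array([d])

        def policy(x, target=20.0):
            # Illustrative order-up-to rule, i.e. a simple piecewise-affine state feedback,
            # the class of policies that explicit constrained robust control typically yields.
            return max(0.0, target - x[0] - x[1])

        x = np.array([10.0, 0.0])
        for d in [3.0, 7.0, 5.0, 9.0]:
            x = step(x, policy(x), d)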

  17. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  18. Multifield stochastic particle production: beyond a maximum entropy ansatz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi

    2017-09-01

    We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed form results (up to quadratures) for the asymptotic particle production rates for the N_f = 1 and N_f = 2 cases. We also present results for the general N_f > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self and cross couplings. We provide and justify a simple to use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.

  19. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness.

    PubMed

    Buchler, Norbou; Fitzhugh, Sean M; Marusich, Laura R; Ungvarsky, Diane M; Lebiere, Christian; Gonzalez, Cleotilde

    2016-01-01

    A common assumption in organizations is that information sharing improves situation awareness and ultimately organizational effectiveness. The sheer volume and rapid pace of information and communications received and readily accessible through computer networks, however, can overwhelm individuals, resulting in data overload from a combination of diverse data sources, multiple data formats, and large data volumes. The current conceptual framework of network enabled operations (NEO) posits that robust networking and information sharing act as a positive feedback loop resulting in greater situation awareness and mission effectiveness in military operations (Alberts and Garstka, 2004). We test this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi-echelon Mission Command staff (one Division and two sub-ordinate Brigades) and assessed the situational awareness of every individual. Results from our exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation awareness. It emerged that higher situation awareness was associated with a lower probability of out-ties, so that broadly sending many messages decreased the likelihood of attaining situation awareness. This challenges the hypothesis that increased information sharing improves situation awareness, at least for those doing the bulk of the sharing. In addition, we observed two trends that reflect a compartmentalizing of networked information sharing as email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between two individuals with dissimilar functions and levels of situation awareness; both those findings can be interpreted to reflect effects of homophily. Our results have major implications that challenge the current conceptual framework of NEO. In addition, the information sharing network was largely imbalanced and dominated by a few key individuals so that most individuals in the network have very few email connections, but a small number of individuals have very many connections. These results highlight several major growing pains for networked organizations and military organizations in particular.

  20. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    PubMed Central

    Buchler, Norbou; Fitzhugh, Sean M.; Marusich, Laura R.; Ungvarsky, Diane M.; Lebiere, Christian; Gonzalez, Cleotilde

    2016-01-01

    A common assumption in organizations is that information sharing improves situation awareness and ultimately organizational effectiveness. The sheer volume and rapid pace of information and communications received and readily accessible through computer networks, however, can overwhelm individuals, resulting in data overload from a combination of diverse data sources, multiple data formats, and large data volumes. The current conceptual framework of network enabled operations (NEO) posits that robust networking and information sharing act as a positive feedback loop resulting in greater situation awareness and mission effectiveness in military operations (Alberts and Garstka, 2004). We test this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi-echelon Mission Command staff (one Division and two sub-ordinate Brigades) and assessed the situational awareness of every individual. Results from our exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation awareness. It emerged that higher situation awareness was associated with a lower probability of out-ties, so that broadly sending many messages decreased the likelihood of attaining situation awareness. This challenges the hypothesis that increased information sharing improves situation awareness, at least for those doing the bulk of the sharing. In addition, we observed two trends that reflect a compartmentalizing of networked information sharing as email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between two individuals with dissimilar functions and levels of situation awareness; both those findings can be interpreted to reflect effects of homophily. Our results have major implications that challenge the current conceptual framework of NEO. In addition, the information sharing network was largely imbalanced and dominated by a few key individuals so that most individuals in the network have very few email connections, but a small number of individuals have very many connections. These results highlight several major growing pains for networked organizations and military organizations in particular. PMID:27445905

  1. Group Facilitation: Functions and Skills.

    ERIC Educational Resources Information Center

    Anderson, L. Frances; Robertson, Sharon E.

    1985-01-01

    Discusses a model based on a specific set of assumptions about causality and effectiveness in interactional groups. Discusses personal qualities of group facilitators and proposes five major functions and seven skill clusters central to effective group facilitation. (Author/BH)

  2. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
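
    The flavor of such error bounds can be illustrated with approximate value iteration on a toy discrete problem, where a bounded per-iteration error keeps the iterates in a neighborhood of the optimal value function; this Python sketch uses a generic discounted formulation and random data, not the undiscounted setting or the specific error condition analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        n_states, n_actions, gamma, eps = 5, 2, 0.95, 0.01
        P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
        R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

        def bellman(V):
            # Exact Bellman backup: max over a of R(s, a) + gamma * sum_s' P(s, a, s') V(s').
            return np.max(R + gamma * P @ V, axis=1)

        V_approx = np.zeros(n_states)
        for _ in range(500):
            # Approximate value iteration: the backup is corrupted by a bounded error,
            # mimicking function-approximation error in a critic network.
            V_approx = bellman(V_approx) + rng.uniform(-eps, eps, size=n_states)

        V_exact = np.zeros(n_states)
        for _ in range(500):
            V_exact = bellman(V_exact)

        print(np.max(np.abs(V_approx - V_exact)))  # stays within a neighborhood of order eps / (1 - gamma)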

  3. 26 CFR 1.417(a)(3)-1 - Required explanation of qualified joint and survivor annuity and qualified preretirement survivor...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... grouping rules of paragraph (c)(2)(iii) of this section. Separate charts are provided for ages 55, 60, and...) Simplified presentations permitted—(A) Grouping of certain optional forms. Two or more optional forms of... starting date, a reasonable assumption for the age of the participant's spouse, or, in the case of a...

  4. Viable inflationary evolution from Einstein frame loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Odintsov, S. D.; Oikonomou, V. K.

    2018-04-01

    In this work we construct a bottom-up reconstruction technique for loop quantum cosmology scalar-tensor theories, from the observational indices. Particularly, the reconstruction technique is based on fixing the functional form of the scalar-to-tensor ratio as a function of the e-foldings number. The aim of the technique is to realize viable inflationary scenarios, and the only assumption that must hold true in order for the reconstruction technique to work is that the dynamical evolution of the scalar field obeys the slow-roll conditions. We use two functional forms for the scalar-to-tensor ratio, one of which corresponds to a popular inflationary class of models, the α attractors. For the latter, we calculate the leading order behavior of the spectral index and we demonstrate that the resulting inflationary theory is viable and compatible with the latest Planck and BICEP2/Keck-Array data. In addition, we find the classical limit of the theory, and as we demonstrate, the loop quantum cosmology corrected theory and the classical theory are identical at leading order in the perturbative expansion quantified by the parameter ρ_c, which is the critical density of the quantum theory. Finally, by using the formalism of slow-roll scalar-tensor loop quantum cosmology, we investigate how several inflationary potentials can be realized by the quantum theory, and we calculate directly the slow-roll indices and the corresponding observational indices. In addition, the f(R) gravity frame picture is presented.
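
    For the alpha-attractor choice mentioned above, the leading-order slow-roll predictions can be evaluated directly; the Python sketch below uses the standard classical-limit expressions r = 12*alpha/N^2 and n_s = 1 - 2/N as stand-ins, with the loop-quantum-cosmology corrections of order ρ/ρ_c omitted.

        def alpha_attractor_observables(N, alpha):
            # Leading-order slow-roll predictions of alpha-attractor models
            # (classical limit; LQC corrections are neglected in this sketch).
            r = 12.0 * alpha / N ** 2
            n_s = 1.0 - 2.0 / N
            return n_s, r

        for N in (50, 60):
            n_s, r = alpha_attractor_observables(N, alpha=1.0)
            print(f"N = {N}: n_s = {n_s:.4f}, r = {r:.4f}")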

  5. Large-scale unassisted smoking cessation over 50 years: lessons from history for endgame planning in tobacco control.

    PubMed

    Chapman, Simon; Wakefield, Melanie A

    2013-05-01

    In the 50 years since the twentieth century's smoking epidemic began to decline from the beginning of the 1960s, hundreds of millions of smokers around the world have stopped smoking permanently. Overwhelmingly, most stopped without any formal assistance in the form of medication or professional assistance, including many millions of former heavy smokers. Nascent discussion about national and global tobacco endgame scenarios is dominated by an assumption that transitioning from cigarettes to alternative forms of potent, consumer-acceptable forms of nicotine will be essential to the success of endgames. This appears to uncritically assume (1) the hardening hypothesis: that as smoking prevalence moves toward and below 10%, the remaining smokers will be mostly deeply addicted, and will be largely unable to stop smoking unless they are able to move to other forms of 'clean' nicotine addiction such as e-cigarettes and more potent forms of nicotine replacement; and (2) an overly medicalised view of smoking cessation that sees unassisted cessation as both inefficient and inhumane. In this paper, we question these assumptions. We also note that some vanguard nations which continue to experience declining smoking prevalence have long banned smokeless tobacco and non-therapeutic forms of nicotine delivery. We argue that there are potentially risky consequences of unravelling such bans when history suggests that large-scale cessation is demonstrably possible.

  6. Large-scale unassisted smoking cessation over 50 years: lessons from history for endgame planning in tobacco control

    PubMed Central

    Chapman, Simon; Wakefield, Melanie A

    2013-01-01

    In the 50 years since the twentieth century's smoking epidemic began to decline from the beginning of the 1960s, hundreds of millions of smokers around the world have stopped smoking permanently. Overwhelmingly, most stopped without any formal assistance in the form of medication or professional assistance, including many millions of former heavy smokers. Nascent discussion about national and global tobacco endgame scenarios is dominated by an assumption that transitioning from cigarettes to alternative forms of potent, consumer-acceptable forms of nicotine will be essential to the success of endgames. This appears to uncritically assume (1) the hardening hypothesis: that as smoking prevalence moves toward and below 10%, the remaining smokers will be mostly deeply addicted, and will be largely unable to stop smoking unless they are able to move to other forms of ‘clean’ nicotine addiction such as e-cigarettes and more potent forms of nicotine replacement; and (2) an overly medicalised view of smoking cessation that sees unassisted cessation as both inefficient and inhumane. In this paper, we question these assumptions. We also note that some vanguard nations which continue to experience declining smoking prevalence have long banned smokeless tobacco and non-therapeutic forms of nicotine delivery. We argue that there are potentially risky consequences of unravelling such bans when history suggests that large-scale cessation is demonstrably possible. PMID:23591504

  7. The Space-Time Conservation Element and Solution Element Method: A New High-Resolution and Genuinely Multidimensional Paradigm for Solving Conservation Laws. 1; The Two Dimensional Time Marching Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen

    1998-01-01

    A new high-resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods and was built from the ground up with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore, a much simpler and more robust method can be developed by not using the above derived properties explicitly.

  8. Epistemological issues in the study of microbial life: alternative terran biospheres?

    PubMed

    Cleland, Carol E

    2007-12-01

    The assumption that all life on Earth today shares the same basic molecular architecture and biochemistry is part of the paradigm of modern biology. This paper argues that there is little theoretical or empirical support for this widely held assumption. Scientists know that life could have been at least modestly different at the molecular level and it is clear that alternative molecular building blocks for life were available on the early Earth. If the emergence of life is, like other natural phenomena, highly probable given the right chemical and physical conditions, then it seems likely that the early Earth hosted multiple origins of life, some of which produced chemical variations on life as we know it. While these points are often conceded, it is nevertheless maintained that any primitive alternatives to familiar life would have been eliminated long ago, either amalgamated into a single form of life through lateral gene transfer (LGT) or alternatively out-competed by our putatively more evolutionarily robust form of life. Besides, the argument continues, if such life forms still existed, we surely would have encountered telling signs of them by now. These arguments do not hold up well under close scrutiny. They reflect a host of assumptions that are grounded in our experience with large multicellular organisms and, most importantly, do not apply to microbial forms of life, which cannot be easily studied without the aid of sophisticated technologies. Significantly, the most powerful molecular biology techniques available, namely polymerase chain reaction (PCR) amplification of rRNA genes augmented by metagenomic analysis, could not detect such microbes if they existed. Given the profound philosophical and scientific importance that such a discovery would represent, a dedicated search for 'shadow microbes' (heretofore unrecognized 'alien' forms of terran microbial life) seems in order. The best place to start such a search is with puzzling (anomalous) phenomena, such as desert varnish, that resist classification as 'biological' or 'nonbiological'.

  9. Methylmercury is the predominant form of mercury in bird eggs: a synthesis

    USGS Publications Warehouse

    Ackerman, Joshua T.; Herzog, Mark P.; Schwarzbach, Steven E.

    2013-01-01

    Bird eggs are commonly used in mercury monitoring programs to assess methylmercury contamination and toxicity to birds. However, only 6% of >200 studies investigating mercury in bird eggs have actually measured methylmercury concentrations in eggs. Instead, studies typically measure total mercury in eggs (both organic and inorganic forms of mercury), with the explicit assumption that total mercury concentrations in eggs are a reliable proxy for methylmercury concentrations in eggs. This assumption is rarely tested, but has important implications for assessing risk of mercury to birds. We conducted a detailed assessment of this assumption by (1) collecting original data to examine the relationship between total and methylmercury in eggs of two species, and (2) reviewing the published literature on mercury concentrations in bird eggs to examine whether the percentage of total mercury in the methylmercury form differed among species. Within American avocets (Recurvirostra americana) and Forster's terns (Sterna forsteri), methylmercury concentrations were highly correlated (R2 = 0.99) with total mercury concentrations in individual eggs (range: 0.03–7.33 μg/g fww), and the regression slope (log scale) was not different from one (m = 0.992). The mean percentage of total mercury in the methylmercury form in eggs was 97% for American avocets (n = 30 eggs), 96% for Forster's terns (n = 30 eggs), and 96% among all 22 species of birds (n = 30 estimates of species means). The percentage of total mercury in the methylmercury form ranged from 63% to 116% among individual eggs and 82% to 111% among species means, but this variation was not related to total mercury concentrations in eggs, to foraging guild, or to a species' life-history strategy as characterized along the precocial-to-altricial spectrum. Our results support the use of total mercury concentrations to estimate methylmercury concentrations in bird eggs.

  10. A SYSTEMS ANALYSIS OF SCHOOL BOARD ACTION.

    ERIC Educational Resources Information Center

    SCRIBNER, JAY D.

    THE BASIC ASSUMPTION OF THE FUNCTIONAL-SYSTEMS THEORY IS THAT STRUCTURES FULFILL FUNCTIONS IN SYSTEMS AND THAT SUBSYSTEMS OPERATE SEPARATELY WITHIN ANY TYPE OF STRUCTURE. RELYING MAINLY ON GABRIEL ALMOND'S PARADIGM, THE AUTHOR ATTEMPTS TO DETERMINE THE USEFULNESS OF THE FUNCTIONAL-SYSTEMS THEORY IN CONDUCTING EMPIRICAL RESEARCH OF SCHOOL BOARDS.…

  11. How cognitive neuroscience could be more biological—and what it might learn from clinical neuropsychology

    PubMed Central

    Frisch, Stefan

    2014-01-01

    Three widespread assumptions of Cognitive-affective Neuroscience are discussed: first, mental functions are assumed to be localized in circumscribed brain areas which can be exactly determined, at least in principle (localizationism). Second, this assumption is associated with the more general claim that these functions (and dysfunctions, such as in neurological or mental diseases) are somehow generated inside the brain (internalism). Third, these functions are seen to be “biological” in the sense that they can be decomposed and finally explained on the basis of elementary biological causes (i.e., genetic, molecular, neurophysiological etc.), causes that can be identified by experimental methods as the gold standard (isolationism). Clinical neuropsychology is widely assumed to support these tenets. However, by making reference to the ideas of Kurt Goldstein (1878–1965), one of its most important founders, I argue that none of these assumptions is sufficiently supported. From the perspective of a clinical-neuropsychological practitioner, assessing and treating brain damage sequelae reveals a quite different picture of the brain as well as of us “brain carriers”, making the organism (or person) in its specific environment the crucial reference point. This conclusion can be further elaborated: all experimental and clinical research on humans presupposes the notion of a situated, reflecting, and interacting subject, which precedes all kinds of scientific decomposition, however useful. These implications support the core assumptions of the embodiment approach to brain and mind, and, as I argue, Goldstein and his clinical-neuropsychological observations are part of its very origin, for both theoretical and historical reasons. PMID:25100981

  12. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach of battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and, by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
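
    One simple way to picture the blending of pre-computed reduced-order models is output interpolation between neighboring setpoints; the Python sketch below blends two generic discrete-time state-space models linearly in state of charge only, as an illustration of the idea rather than the specific blending schemes developed in the dissertation.

        import numpy as np

        class ROM:
            """Discrete-time linear reduced-order model: x[k+1] = A x[k] + B u[k], y[k] = C x[k]."""
            def __init__(self, A, B, C, soc):
                self.A, self.B, self.C, self.soc = A, B, C, soc
                self.x = np.zeros(A.shape[0])

            def step(self, u):
                self.x = self.A @ self.x + self.B * u
                return float(self.C @ self.x)

        # Two placeholder models generated (offline) at 20% and 80% state of charge.
        rom_lo = ROM(np.array([[0.95]]), np.array([0.01]), np.array([1.0]), soc=0.2)
        rom_hi = ROM(np.array([[0.90]]), np.array([0.02]), np.array([1.0]), soc=0.8)

        def blended_output(u, soc):
            # Run both local models and blend their outputs according to how close the
            # current operating point is to each model's setpoint.
            w = np.clip((soc - rom_lo.soc) / (rom_hi.soc - rom_lo.soc), 0.0, 1.0)
            return (1.0 - w) * rom_lo.step(u) + w * rom_hi.step(u)

        y = blended_output(u=-1.0, soc=0.5)  # a 1 A discharge sample at 50% SOC (illustrative)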

  13. Asymmetrical effects of mesophyll conductance on fundamental photosynthetic parameters and their relationships estimated from leaf gas exchange measurements.

    PubMed

    Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus

    2014-04-01

    Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (g(m)) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite g(m) results in up to 75% underestimation for maximum carboxylation rate V(cmax), 60% for maximum electron transport rate J(max), and 40% for triose phosphate utilization rate T(u). V(cmax) is most sensitive, J(max) is less sensitive, and T(u) has the least sensitivity to the variation of g(m). Because of this asymmetrical effect of g(m), the ratios of J(max) to V(cmax), T(u) to V(cmax) and T(u) to J(max) are all overestimated. An infinite g(m) assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying g(m) for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with g(m) as input is developed to convert the parameters estimated under an assumption of infinite g(m) to proper values. This function will facilitate g(m) representation in global carbon cycle models. © 2013 John Wiley & Sons Ltd.
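
    The core of the correction is that the CO2 concentration at the carboxylation sites is Cc = Ci - A/g(m) rather than Ci itself; the Python sketch below applies this standard relation to a single Rubisco-limited A/Ci point, illustrating why a finite g(m) raises the apparent V(cmax) (the kinetic constants are generic textbook values and the paper's full conversion function is not reproduced here).

        def chloroplast_co2(A, Ci, gm):
            # Standard drawdown from intercellular to chloroplastic CO2: Cc = Ci - A / gm.
            # A in umol m-2 s-1, Ci in umol mol-1, gm in mol m-2 s-1.
            return Ci - A / gm

        def rubisco_limited_vcmax(A, C, Kc=270.0, Ko=165.0, O=210.0, gamma_star=37.0, Rd=1.0):
            # Invert the Rubisco-limited FvCB relation
            # A = Vcmax * (C - gamma_star) / (C + Kc * (1 + O / Ko)) - Rd
            # to estimate Vcmax from one point of an A/Ci (or A/Cc) curve.
            return (A + Rd) * (C + Kc * (1.0 + O / Ko)) / (C - gamma_star)

        A, Ci, gm = 20.0, 250.0, 0.2
        vcmax_infinite_gm = rubisco_limited_vcmax(A, Ci)                        # assumes Cc = Ci
        vcmax_finite_gm = rubisco_limited_vcmax(A, chloroplast_co2(A, Ci, gm))  # uses Cc < Ci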

  14. Functional annotation from the genome sequence of the giant panda.

    PubMed

    Huo, Tong; Zhang, Yinjie; Lin, Jianping

    2012-08-01

    The giant panda is one of the most critically endangered species due to the fragmentation and loss of its habitat. Studying the functions of proteins in this animal, especially specific trait-related proteins, is therefore necessary to protect the species. In this work, the functions of these proteins were investigated using the genome sequence of the giant panda. Data on 21,001 proteins and their functions were stored in the Giant Panda Protein Database, in which the proteins were divided into two groups: 20,179 proteins whose functions can be predicted by GeneScan formed the known-function group, whereas 822 proteins whose functions cannot be predicted by GeneScan comprised the unknown-function group. For the known-function group, we further classified the proteins by molecular function, biological process, cellular component, and tissue specificity. For the unknown-function group, we developed a strategy in which the proteins were filtered by cross-Blast to identify panda-specific proteins under the assumption that proteins related to the panda-specific traits in the unknown-function group exist. After this filtering procedure, we identified 32 proteins (2 of which are membrane proteins) specific to the giant panda genome as compared against the dog and horse genomes. Based on their amino acid sequences, these 32 proteins were further analyzed by functional classification using SVM-Prot, motif prediction using MyHits, and interacting protein prediction using the Database of Interacting Proteins. Nineteen proteins were predicted to be zinc-binding proteins, thus affecting the activities of nucleic acids. The 32 panda-specific proteins will be further investigated by structural and functional analysis.

  15. Building functional groups of marine benthic macroinvertebrates on the basis of general community assembly mechanisms

    NASA Astrophysics Data System (ADS)

    Alexandridis, Nikolaos; Bacher, Cédric; Desroy, Nicolas; Jean, Fred

    2017-03-01

    The accurate reproduction of the spatial and temporal dynamics of marine benthic biodiversity requires the development of mechanistic models, based on the processes that shape macroinvertebrate communities. The modelled entities should, accordingly, be able to adequately represent the many functional roles that are performed by benthic organisms. With this goal in mind, we applied the emergent group hypothesis (EGH), which assumes functional equivalence within and functional divergence between groups of species. The first step of the grouping involved the selection of 14 biological traits that describe the role of benthic macroinvertebrates in 7 important community assembly mechanisms. A matrix of trait values for the 240 species that occurred in the Rance estuary (Brittany, France) in 1995 formed the basis for a hierarchical classification that generated 20 functional groups, each with its own trait values. The functional groups were first evaluated based on their ability to represent observed patterns of biodiversity. The two main assumptions of the EGH were then tested, by assessing the preservation of niche attributes among the groups and the neutrality of functional differences within them. The generally positive results give us confidence in the ability of the grouping to recreate functional diversity in the Rance estuary. A first look at the emergent groups provides insights into the potential role of community assembly mechanisms in shaping biodiversity patterns. Our next steps include the derivation of general rules of interaction and their incorporation, along with the functional groups, into mechanistic models of benthic biodiversity.
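
    The grouping step described above lends itself to a standard hierarchical classification of a species-by-trait matrix. The sketch below is a hedged illustration with synthetic trait values; the dimensions (240 species, 14 traits, 20 groups) are taken from the abstract, but the distance metric, linkage method and data are assumptions, not the authors' actual settings.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    n_species, n_traits = 240, 14                  # dimensions quoted in the abstract
    traits = rng.random((n_species, n_traits))     # placeholder trait values

    # Mixed traits often call for a Gower-type distance; plain Euclidean is used here for brevity.
    tree = linkage(pdist(traits, metric="euclidean"), method="ward")
    groups = fcluster(tree, t=20, criterion="maxclust")   # cut the tree into 20 groups
    print(np.bincount(groups)[1:])                 # species count per functional group
    ```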

  16. Local linear discriminant analysis framework using sample neighbors.

    PubMed

    Fan, Zizhu; Xu, Yong; Zhang, David

    2011-07-01

    The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. The algorithms of LDA usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes are Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without needing to satisfy the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to different types of local data structure, our LLDA framework incorporates several different forms of linear feature extraction approaches, such as the classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample. They are suitable for learning large-scale databases especially when the input data dimensions are very high and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms can obtain good classification results.
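
    A hedged sketch of the "local" idea, with scikit-learn's standard LDA standing in for the paper's LLDA variants: fit a discriminant only on the nearest training neighbours of each test sample rather than on the whole training set. The data, labels and neighbourhood size are toy assumptions.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0.0, 1.0, (200, 10)), rng.normal(1.5, 1.0, (200, 10))])
    y_train = np.array([0] * 200 + [1] * 200)
    x_test = rng.normal(0.8, 1.0, (1, 10))

    nn = NearestNeighbors(n_neighbors=60).fit(X_train)
    idx = nn.kneighbors(x_test, return_distance=False)[0]   # local training subset
    local_lda = LinearDiscriminantAnalysis().fit(X_train[idx], y_train[idx])
    print(local_lda.predict(x_test))
    ```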

  17. Elaborative retrieval: Do semantic mediators improve memory?

    PubMed

    Lehman, Melissa; Karpicke, Jeffrey D

    2016-10-01

    The elaborative retrieval account of retrieval-based learning proposes that retrieval enhances retention because the retrieval process produces the generation of semantic mediators that link cues to target information. We tested 2 assumptions that form the basis of this account: that semantic mediators are more likely to be generated during retrieval than during restudy and that the generation of mediators facilitates later recall of targets. Although these assumptions are often discussed in the context of retrieval processes, we noted that there was little prior empirical evidence to support either assumption. We conducted a series of experiments to measure the generation of mediators during retrieval and restudy and to examine the effect of the generation of mediators on later target recall. Across 7 experiments, we found that the generation of mediators was not more likely during retrieval (and may be more likely during restudy), and that the activation of mediators was unrelated to subsequent free recall of targets and was negatively related to cued recall of targets. The results pose challenges for both assumptions of the elaborative retrieval account. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g. sliding-window) which can miss spontaneous changes due to blurring with local but unrelated changes. We discuss recent approaches to overcome this limitation including 1) a wavelet-space approach, essentially adapting the window to the underlying frequency content and 2) a sparse signal-representation which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such assumption as in brain-analysis. Results on several large resting-fMRI data sets highlight the potential of these approaches.

  19. 77 FR 75440 - Agency Information Collection Activities: Application for Naturalization, Form Number N-400...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-20

    ... validity of the methodology and assumptions used; (3) Enhance the quality, utility, and clarity of the... the inadmissibility grounds that were added by the Intelligence Reform and Terrorism Prevention Act of...

  20. Simplified analysis of a generalized bias test for fabrics with two families of inextensible fibres

    NASA Astrophysics Data System (ADS)

    Cuomo, M.; dell'Isola, F.; Greco, L.

    2016-06-01

    Two tests for woven fabrics with orthogonal fibres are examined using simplified kinematic assumptions. The aim is to analyse how different constitutive assumptions may affect the response of the specimen. The fibres are considered inextensible, and the kinematics of 2D continua with inextensible chords due to Rivlin is adopted. In addition to two forms of strain energy depending on the shear deformation, two forms of energy depending on the gradient of shear are also examined. It is shown that this energy can account for the bending of the fibres. In addition to the standard bias extension (BE) test, a modified test has been examined, in which the head of the specimen is rotated rather than translated. In this case more bending occurs, so that the results of the simulations carried out with the different energy models differ more than what has been found for the BE test.

  1. The Magnetar Model of the Superluminous Supernova GAIA16apd and the Explosion Jet Feedback Mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soker, Noam, E-mail: soker@physics.technion.ac.il

    Under the assumption that jets explode core collapse supernovae (CCSNe) in a negative jet feedback mechanism (JFM), this paper shows that rapidly rotating neutron stars are likely to be formed when the explosion is very energetic. Under the assumption that an accretion disk or an accretion belt around the just-formed neutron star launches jets and that the accreted gas spins up the just-formed neutron star, I derive a crude relation between the energy that is stored in the spinning neutron star and the explosion energy. This relation is (E_NS-spin/E_exp) ≈ E_exp/10^52 erg. It shows that within the frame of the JFM explosion model of CCSNe, spinning neutron stars, such as magnetars, might have significant energy in super-energetic explosions. The existence of magnetars, if confirmed, such as in the recent super-energetic supernova GAIA16apd, further supports the call for a paradigm shift from neutrino-driven to jet-driven CCSN mechanisms.
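
    A numerical reading of the quoted scaling, using the relation exactly as stated above with illustrative explosion energies: for a canonical 10^51 erg explosion the spin energy is only a tenth of the explosion energy, while for a 10^52 erg event the two become comparable.

    ```python
    # (E_NS-spin / E_exp) ≈ E_exp / 1e52 erg, evaluated for a few illustrative energies.
    for E_exp in (1e51, 3e51, 1e52):                    # erg
        E_spin = E_exp**2 / 1e52
        print(f"E_exp = {E_exp:.0e} erg -> E_spin ≈ {E_spin:.1e} erg "
              f"({E_spin / E_exp:.0%} of E_exp)")
    ```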

  2. Student Services: Programs and Functions. A Report on the Administration of Selected Student and Campus Services of the University of Illinois at Chicago Circle. Part 1 and 2.

    ERIC Educational Resources Information Center

    Bentz, Robert P.; And Others

    The commuter institute is one to which students commute. The two basic assumptions of this study are: (1) the Chicago Circle campus of the University of Illinois will remain a commuter institution during the decade ahead; and (2) the campus will increasingly serve a more heterogeneous student body. These assumptions have important implications for…

  3. ARE COASTAL WETLAND-LAKE LINKAGES IMPORTANT?

    EPA Science Inventory

    Because coastal wetlands typically comprise only a small percentage of the overall surface area in large lakes, an assumption has often been made that functional links between wetlands and the lake proper are of little significance. Recent investigations of functional linkages be...

  4. Does the Position of Response Options in Multiple-Choice Tests Matter?

    ERIC Educational Resources Information Center

    Hohensinn, Christine; Baghaei, Purya

    2017-01-01

    In large scale multiple-choice (MC) tests alternate forms of a test may be developed to prevent cheating by changing the order of items or by changing the position of the response options. The assumption is that since the content of the test forms are the same the order of items or the positions of the response options do not have any effect on…

  5. Merchantable height of trees in Oregon—a comparison of current logging practice and volume table specifications.

    Treesearch

    Don Minore; Donald R. Gedney

    1960-01-01

    A large proportion of present-day timber cruising is done by measuring or estimating three tree dimensions: diameter at breast height, form class, and merchantable height. Tree volumes are then determined from tables which equate volume to the varying combinations of height, d.b.h., and form class. Assumptions concerning merchantable height were made in constructing...

  6. Lexical Collocation and Topic Occurrence in Well-Written Editorials: A Study in Form.

    ERIC Educational Resources Information Center

    Addison, James C., Jr.

    To explore the concept of lexical collocation, or relationships between words, a study was conducted based on three assumptions: (1) that a text structure for a unit of discourse was analogous to that existing at the level of the sentence, (2) that such a text form could be discovered if a large enough sample of generically similar texts was…

  7. Efficacy of Error for the Correction of Initially Incorrect Assumptions and of Feedback for the Affirmation of Correct Responding: Learning in the Classroom

    ERIC Educational Resources Information Center

    Brosvic, Gary M.; Epstein, Michael L.; Cook, Michael J.; Dihoff, Roberta E.

    2005-01-01

    Participants completed 5 classroom examinations during which the timing of knowledge of results (no feedback: Scantron form; delayed feedback: end-of-test, 24 hour delay; immediate feedback: educator, response form) and iterative responding (1 response, up to 4 responses) were manipulated. At the end of the semester, each participant completed a…

  8. Use of cognitive behavior therapy for functional hypothalamic amenorrhea.

    PubMed

    Berga, Sarah L; Loucks, Tammy L

    2006-12-01

    Behaviors that chronically activate the hypothalamic-pituitary-adrenal (HPA) axis and/or suppress the hypothalamic-pituitary-thyroidal (HPT) axis disrupt the hypothalamic-pituitary-gonadal axis in women and men. Individuals with functional hypothalamic hypogonadism typically engage in a combination of behaviors that concomitantly heighten psychogenic stress and increase energy demand. Although it is not widely recognized clinically, functional forms of hypothalamic hypogonadism are more than an isolated disruption of gonadotropin-releasing hormone (GnRH) drive and reproductive compromise. Indeed, women with functional hypothalamic amenorrhea display a constellation of neuroendocrine aberrations that reflect allostatic adjustments to chronic stress. Given these considerations, we have suggested that complete neuroendocrine recovery would involve more than reproductive recovery. Hormone replacement strategies have limited benefit because they do not ameliorate allostatic endocrine adjustments, particularly the activation of the adrenal and the suppression of the thyroidal axes. Indeed, the rationale for the use of sex steroid replacement is based on the erroneous assumption that functional forms of hypothalamic hypogonadism represent only or primarily an alteration in the hypothalamic-pituitary-gonadal axis. Potential health consequences of functional hypothalamic amenorrhea, often termed stress-induced anovulation, may include an increased risk of cardiovascular disease, osteoporosis, depression, other psychiatric conditions, and dementia. Although fertility can be restored with exogenous administration of gonadotropins or pulsatile GnRH, fertility management alone will not permit recovery of the adrenal and thyroidal axes. Initiating pregnancy with exogenous means without reversing the hormonal milieu induced by chronic stress may increase the likelihood of poor obstetrical, fetal, or neonatal outcomes. In contrast, behavioral and psychological interventions that address problematic behaviors and attitudes, such as cognitive behavior therapy (CBT), have the potential to permit resumption of full ovarian function along with recovery of the adrenal, thyroidal, and other neuroendocrine aberrations. Full endocrine recovery potentially offers better individual, maternal, and child health.

  9. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either a dark matter (1) speed distribution F(v) in Earth's frame or (2) Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is (N − 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃₀^BF(v_min) (which is an integral of the speed distribution) with at most (N − 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃^BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃^BF(v_min, t) function (and a time-averaged η̃₀^BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

  10. Improving protein complex classification accuracy using amino acid composition profile.

    PubMed

    Huang, Chien-Hung; Chou, Szu-Yu; Ng, Ka-Lok

    2013-09-01

    Protein complex prediction approaches are based on the assumptions that complexes have dense protein-protein interactions and high functional similarity between their subunits. We investigated those assumptions by studying the subunits' interaction topology, sequence similarity and molecular function for human and yeast protein complexes. Inclusion of amino acids' physicochemical properties can provide better understanding of protein complex properties. Principal component analysis is carried out to determine the major features. Adopting amino acid composition profile information with the SVM classifier serves as an effective post-processing step for complexes classification. Improvement is based on primary sequence information only, which is easy to obtain. Copyright © 2013 Elsevier Ltd. All rights reserved.
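
    A minimal sketch of the post-processing step described above: represent each protein by its 20-dimensional amino acid composition profile and train an SVM on those profiles. The sequences, labels and kernel choice are toy assumptions, not the study's data.

    ```python
    from collections import Counter
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Fraction of each of the 20 amino acids in a sequence."""
        counts = Counter(seq)
        return [counts.get(a, 0) / len(seq) for a in AA]

    seqs = ["MKTAYIAKQR", "GGSGGSGGSA", "MKKLLPTAAA", "GGAGGAGGAG"]   # placeholder sequences
    labels = [1, 0, 1, 0]                                            # placeholder classes

    clf = SVC(kernel="rbf").fit([composition(s) for s in seqs], labels)
    print(clf.predict([composition("MKTAYIAKQQ")]))
    ```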

  11. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  12. On estimating probability of presence from use-availability or presence-background data.

    PubMed

    Phillips, Steven J; Elith, Jane

    2013-06-01

    A fundamental ecological modeling task is to estimate the probability that a species is present in (or uses) a site, conditional on environmental variables. For many species, available data consist of "presence" data (locations where the species [or evidence of it] has been observed), together with "background" data, a random sample of available environmental conditions. Recently published papers disagree on whether probability of presence is identifiable from such presence-background data alone. This paper aims to resolve the disagreement, demonstrating that additional information is required. We defined seven simulated species representing various simple shapes of response to environmental variables (constant, linear, convex, unimodal, S-shaped) and ran five logistic model-fitting methods using 1000 presence samples and 10 000 background samples; the simulations were repeated 100 times. The experiment revealed a stark contrast between two groups of methods: those based on a strong assumption that species' true probability of presence exactly matches a given parametric form had highly variable predictions and much larger RMS error than methods that take population prevalence (the fraction of sites in which the species is present) as an additional parameter. For six species, the former group grossly under- or overestimated probability of presence. The cause was not model structure or choice of link function, because all methods were logistic with linear and, where necessary, quadratic terms. Rather, the experiment demonstrates that an estimate of prevalence is not just helpful, but is necessary (except in special cases) for identifying probability of presence. We therefore advise against use of methods that rely on the strong assumption, due to Lele and Keim (recently advocated by Royle et al.) and Lancaster and Imbens. The methods are fragile, and their strong assumption is unlikely to be true in practice. We emphasize, however, that we are not arguing against standard statistical methods such as logistic regression, generalized linear models, and so forth, none of which requires the strong assumption. If probability of presence is required for a given application, there is no panacea for lack of data. Presence-background data must be augmented with an additional datum, e.g., species' prevalence, to reliably estimate absolute (rather than relative) probability of presence.

  13. Large earthquake rates from geologic, geodetic, and seismological perspectives

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2017-12-01

    Earthquake rate and recurrence information comes primarily from geology, geodesy, and seismology. Geology gives the longest temporal perspective, but it reveals only surface deformation, relatable to earthquakes only with many assumptions. Geodesy is also limited to surface observations, but it detects evidence of the processes leading to earthquakes, again subject to important assumptions. Seismology reveals actual earthquakes, but its history is too short to capture important properties of very large ones. Unfortunately, the ranges of these observation types barely overlap, so that integrating them into a consistent picture adequate to infer future prospects requires a great deal of trust. Perhaps the most important boundary is the temporal one at the beginning of the instrumental seismic era, about a century ago. We have virtually no seismological or geodetic information on large earthquakes before then, and little geological information after. Virtually all modern forecasts of large earthquakes assume some form of equivalence between tectonic and seismic moment rates as functions of location, time, and magnitude threshold. That assumption links geology, geodesy, and seismology, but it invokes a host of other assumptions and incurs very significant uncertainties. Questions include the temporal behavior of seismic and tectonic moment rates; the shape of the earthquake magnitude distribution; the upper magnitude limit; scaling between rupture length, width, and displacement; depth dependence of stress coupling; the value of crustal rigidity; and the relation between faults at depth and their surface fault traces, to name just a few. In this report I estimate the quantitative implications for large earthquake rates. Global studies like the GEAR1 project suggest that surface deformation from geology and geodesy best shows the geography of very large, rare earthquakes in the long term, while seismological observations of small earthquakes best forecast moderate earthquakes up to about magnitude 7. Regional forecasts for a few decades, like those in UCERF3, could be improved by calibrating tectonic moment rate to past seismicity rates. Century-long forecasts must be speculative. Estimates of maximum magnitude and rate of giant earthquakes over geologic time scales require more than science.
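
    A back-of-the-envelope example of the tectonic/seismic moment-rate equivalence invoked above (a hedged illustration: the Hanks-Kanamori conversion is standard, but the regional moment rate and the single-magnitude release are assumptions, not results from this abstract): if the accumulated tectonic moment were released entirely in earthquakes of one magnitude, the implied recurrence interval follows directly.

    ```python
    def seismic_moment_Nm(Mw):
        return 10 ** (1.5 * Mw + 9.1)        # Hanks-Kanamori moment-magnitude relation, N*m

    tectonic_rate = 1e19                      # N*m per year, illustrative regional value
    for Mw in (7.0, 7.5, 8.0):
        print(f"Mw {Mw}: recurrence ≈ {seismic_moment_Nm(Mw) / tectonic_rate:.0f} yr")
    ```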

  14. Neutron stars and millisecond pulsars from accretion-induced collapse in globular clusters

    NASA Technical Reports Server (NTRS)

    Bailyn, Charles D.; Grindlay, Jonathan E.

    1990-01-01

    This paper examines the limits on the number of millisecond pulsars which could be formed in globular clusters by the generally accepted scenario (in which a neutron star is created by the supernova of an initially massive star and subsequently captures a companion to form a low-mass X-ray binary which eventually becomes a millisecond pulsar). It is found that, while the number of observed low-mass X-ray binaries can be adequately explained in this way, the reasonable assumption that the pulsar luminosity function in clusters extends below the current observational limits down to the luminosity of the faintest millisecond pulsars in the field suggests a cluster population of millisecond pulsars which is substantially larger than the standard model can produce. Alleviating this problem by postulating much shorter lifetimes for the X-ray binaries requires massive star populations sufficiently large that the mass loss resulting from their evolution would be likely to unbind the cluster. It is argued that neutron star formation in globular clusters by accretion-induced collapse of white dwarfs may resolve the discrepancy in birthrates.

  15. Self-consistent nonlocal feedback theory for electrocatalytic swimmers with heterogeneous surface chemical kinetics

    NASA Astrophysics Data System (ADS)

    Nourhani, Amir; Crespi, Vincent H.; Lammert, Paul E.

    2015-06-01

    We present a self-consistent nonlocal feedback theory for the phoretic propulsion mechanisms of electrocatalytic micromotors or nanomotors. These swimmers, such as bimetallic platinum and gold rods catalyzing decomposition of hydrogen peroxide in aqueous solution, have received considerable theoretical attention. In contrast, the heterogeneous electrochemical processes with nonlocal feedback that are the actual "engines" of such motors are relatively neglected. We present a flexible approach to these processes using bias potential as a control parameter field and a locally-open-circuit reference state, carried through in detail for a spherical motor. While the phenomenological flavor makes meaningful contact with experiment easier, required inputs can also conceivably come from, e.g., Frumkin-Butler-Volmer kinetics. Previously obtained results are recovered in the weak-heterogeneity limit and improved small-basis approximations tailored to structural heterogeneity are presented. Under the assumption of weak inhomogeneity, a scaling form is deduced for motor speed as a function of fuel concentration and swimmer size. We argue that this form should be robust and demonstrate a good fit to experimental data.

  16. Simple robust control laws for robot manipulators. Part 2: Adaptive case

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.; Wen, J. T.

    1987-01-01

    A new class of asymptotically stable adaptive control laws is introduced for application to the robotic manipulator. Unlike most applications of adaptive control theory to robotic manipulators, this analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and utilizes a parameterization based on physical (time-invariant) quantities. This approach is made possible by using energy-like Lyapunov functions which retain the nonlinear character and structure of the dynamics, rather than simple quadratic forms which are ubiquitous to the adaptive control literature, and which have bound the theory tightly to linear systems with unknown parameters. It is a unique feature of these results that the adaptive forms arise by straightforward certainty equivalence adaptation of their nonadaptive counterparts found in the companion to this paper (i.e., by replacing unknown quantities by their estimates) and that this simple approach leads to asymptotically stable closed-loop adaptive systems. Furthermore, it is emphasized that this approach does not require convergence of the parameter estimates (i.e., via persistent excitation), invertibility of the mass matrix estimate, or measurement of the joint accelerations.

  17. Random matrix models, double-time Painlevé equations, and wireless relaying

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Haq, Nazmus S.; McKay, Matthew R.

    2013-06-01

    This paper gives an in-depth study of a multiple-antenna wireless communication scenario in which a weak signal received at an intermediate relay station is amplified and then forwarded to the final destination. The key quantity determining system performance is the statistical properties of the signal-to-noise ratio (SNR) γ at the destination. Under certain assumptions on the encoding structure, recent work has characterized the SNR distribution through its moment generating function, in terms of a certain Hankel determinant generated via a deformed Laguerre weight. Here, we employ two different methods to describe the Hankel determinant. First, we make use of ladder operators satisfied by orthogonal polynomials to give an exact characterization in terms of a "double-time" Painlevé differential equation, which reduces to Painlevé V under certain limits. Second, we employ Dyson's Coulomb fluid method to derive a closed form approximation for the Hankel determinant. The two characterizations are used to derive closed-form expressions for the cumulants of γ, and to compute performance quantities of engineering interest.

  18. Criminal Rehabilitation Through Medical Intervention: Moral Liability and the Right to Bodily Integrity.

    PubMed

    Douglas, Thomas

    2014-06-01

    Criminal offenders are sometimes required, by the institutions of criminal justice, to undergo medical interventions intended to promote rehabilitation. Ethical debate regarding this practice has largely proceeded on the assumption that medical interventions may only permissibly be administered to criminal offenders with their consent. In this article I challenge this assumption by suggesting that committing a crime might render one morally liable to certain forms of medical intervention. I then consider whether it is possible to respond persuasively to this challenge by invoking the right to bodily integrity. I argue that it is not.

  19. Dental Education: Trends and Assumptions for the 21st Century

    PubMed Central

    Sinkford, Jeanne C.

    1987-01-01

    Dental educational institutions, as components of university systems, must develop strategic plans for program development, resource allocation, evaluation, and continued financial support. This dynamic process will be accomplished in a competitive academic arena where program excellence and program relevance are key issues in the game of survival. This article focuses on issues and trends that form the basis for planning assumptions and initiatives into the next decade and into the 21st century. This is our challenge, this is our mission if we are to be catalysts for change in the future. PMID:3560255

  20. The Two Brains and the Education Process.

    ERIC Educational Resources Information Center

    Shook, Ronald

    The human brain is lateralized, different functions being housed in each hemisphere. Several assumptions which are mistakenly considered fact by researchers include: (1) the left hemisphere is for rational functions, while the right is for intuitive functions; (2) the hemispheres do not interact as well with each other as they should; (3) the use…

  1. Similarity of Turbulent Energy Scale Budget Equation of a Round Turbulent Jet

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamed; Lavoie, Philippe; Pollard, Andrew

    2014-11-01

    A novel extension to the similarity-based form of the transport equation for the second-order velocity structure function ⟨(δq)²⟩ along the jet centreline (see Danaila et al., 2004) has been obtained. This new self-similar equation has the desirable benefit of requiring less extensive measurements to calculate the inhomogeneous (decay and production) terms of the transport equation. According to this equation, the normalized third-order structure function can be uniquely determined when the normalized second-order structure function, the power-law exponent and the decay rate constants are available. In addition, on the basis of the current similarity analysis, the similarity assumptions in combination with power-law decay of the mean velocity (U ~ (x − x0)^−1) are strong enough to imply power-law decay of the fluctuations (~ (x − x0)^m). The similarity solutions are then tested against new experimental data, which were taken along the centreline of a round jet at ReD = 50,000. For the present set of initial conditions, the fluctuations exhibit a power-law behaviour with m = −1.83. This work was supported by grants from NSERC (Canada).

  2. Narayanaswamy's 1971 aging theory and material time

    NASA Astrophysics Data System (ADS)

    Dyre, Jeppe C.

    2015-09-01

    The Bochkov-Kuzovlev nonlinear fluctuation-dissipation theorem is used to derive Narayanaswamy's phenomenological theory of physical aging, in which this highly nonlinear phenomenon is described by a linear material-time convolution integral. A characteristic property of the Narayanaswamy aging description is material-time translational invariance, which is here taken as the basic assumption of the derivation. It is shown that only one possible definition of the material time obeys this invariance, namely, the square of the distance travelled from a configuration of the system far back in time. The paper concludes with suggestions for computer simulations that test for consequences of material-time translational invariance. One of these is the "unique-triangles property" according to which any three points on the system's path form a triangle such that two side lengths determine the third; this is equivalent to the well-known triangular relation for time-autocorrelation functions of aging spin glasses [L. F. Cugliandolo and J. Kurchan, J. Phys. A: Math. Gen. 27, 5749 (1994)]. The unique-triangles property implies a simple geometric interpretation of out-of-equilibrium time-autocorrelation functions, which extends to aging a previously proposed framework for such functions in equilibrium [J. C. Dyre, e-print arXiv:cond-mat/9712222 (1997)].

  3. Dimensional analysis yields the general second-order differential equation underlying many natural phenomena: the mathematical properties of a phenomenon's data plot then specify a unique differential equation for it.

    PubMed

    Kepner, Gordon R

    2014-08-27

    This study uses dimensional analysis to derive the general second-order differential equation that underlies numerous physical and natural phenomena described by common mathematical functions. It eschews assumptions about empirical constants and mechanisms. It relies only on the data plot's mathematical properties to provide the conditions and constraints needed to specify a second-order differential equation that is free of empirical constants for each phenomenon. A practical example of each function is analyzed using the general form of the underlying differential equation and the observable unique mathematical properties of each data plot, including boundary conditions. This yields a differential equation that describes the relationship among the physical variables governing the phenomenon's behavior. Complex phenomena such as the Standard Normal Distribution, the Logistic Growth Function, and Hill Ligand binding, which are characterized by data plots of distinctly different sigmoidal character, are readily analyzed by this approach. It provides an alternative, simple, unifying basis for analyzing each of these varied phenomena from a common perspective that ties them together and offers new insights into the appropriate empirical constants for describing each phenomenon.
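
    Two standard examples in the spirit of the paper, checked symbolically (these particular constant-free ODE forms are textbook identities used here for illustration, not the paper's general derivation): the standard normal density satisfies y'' = (x² − 1)y and the standard logistic function satisfies y'' = y'(1 − 2y).

    ```python
    import sympy as sp

    x = sp.symbols("x")
    gauss = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)       # standard normal density
    logistic = 1 / (1 + sp.exp(-x))                      # standard logistic function

    print(sp.simplify(sp.diff(gauss, x, 2) - (x**2 - 1) * gauss))                            # 0
    print(sp.simplify(sp.diff(logistic, x, 2) - sp.diff(logistic, x) * (1 - 2 * logistic)))  # 0
    ```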

  4. Impact angle constrained three-dimensional integrated guidance and control for STT missile in the presence of input saturation.

    PubMed

    Wang, Sen; Wang, Weihong; Xiong, Shaofeng

    2016-09-01

    Considering a class of skid-to-turn (STT) missiles with a fixed target and constrained terminal impact angles, a novel three-dimensional (3D) integrated guidance and control (IGC) scheme is proposed in this paper. Based on the Coriolis theorem, the fully nonlinear IGC model is established in three-dimensional space without the assumption that the missile flies heading to the target at the initial time. For this strict-feedback multi-variable system, a dynamic surface control algorithm is implemented in combination with an extended state observer (ESO) to complete the preliminary design. Then, in order to deal with input constraints, a hyperbolic tangent function is introduced to approximate the saturation function, and an auxiliary system including a Nussbaum function is established to compensate for the approximation error. The stability of the closed-loop system is proven based on Lyapunov theory. Numerical simulation results show that the proposed integrated guidance and control algorithm can ensure the accuracy of target interception with initial alignment angle deviation, and the input saturation is suppressed with smooth deflection curves. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
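
    A minimal sketch of the smoothing idea mentioned above: the hard saturation clip(u, −u_max, u_max) is approximated by the differentiable function u_max·tanh(u/u_max). The limit u_max and the test grid are illustrative assumptions.

    ```python
    import numpy as np

    u_max = 1.0
    u = np.linspace(-3.0, 3.0, 601)
    hard = np.clip(u, -u_max, u_max)                 # ideal saturation
    smooth = u_max * np.tanh(u / u_max)              # differentiable approximation
    print(f"max |sat(u) - tanh approx| on grid: {np.max(np.abs(hard - smooth)):.3f}")
    ```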

  5. STAR CLUSTER FORMATION AND DESTRUCTION IN THE MERGING GALAXY NGC 3256

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulia, A. J.; Chandar, R.; Whitmore, B. C.

    2016-07-20

    We use the Advanced Camera for Surveys on the Hubble Space Telescope to study the rich population of young massive star clusters in the main body of NGC 3256, a merging pair of galaxies with a high star formation rate (SFR) and SFR per unit area (Σ_SFR). These clusters have luminosity and mass functions that follow power laws, dN/dL ∝ L^α with α = -2.23 ± 0.07, and dN/dM ∝ M^β with β = -1.86 ± 0.34 for τ < 10 Myr clusters, similar to those found in more quiescent galaxies. The age distribution can be described by dN/dτ ∝ τ^γ, with γ ≈ -0.67 ± 0.08 for clusters younger than about a few hundred million years, with no obvious dependence on cluster mass. This is consistent with a picture where ~80% of the clusters are disrupted each decade in time. We investigate the claim that galaxies with high Σ_SFR form clusters more efficiently than quiescent systems by determining the fraction of stars in bound clusters (Γ) and the CMF/SFR statistic (CMF is the cluster mass function) for NGC 3256 and comparing the results with those for other galaxies. We find that the CMF/SFR statistic for NGC 3256 agrees well with that found for galaxies with Σ_SFR and SFRs that are lower by 1-3 orders of magnitude, but that estimates for Γ are only robust when the same sets of assumptions are applied. Currently, Γ values available in the literature have used different sets of assumptions, making it more difficult to compare the results between galaxies.

  6. Modeling FtsZ ring formation in the bacterial cell-anisotropic aggregation via mutual interactions of polymer rods.

    PubMed

    Fischer-Friedrich, Elisabeth; Gov, Nir

    2011-04-01

    The cytoskeletal protein FtsZ polymerizes into a ring structure (Z ring) at the inner cytoplasmic membrane that marks the future division site and scaffolds the division machinery in many bacterial species. FtsZ is known to polymerize in the presence of GTP into single-stranded protofilaments. In vivo, FtsZ polymers become associated with the cytoplasmic membrane via interaction with the membrane-binding proteins FtsA and ZipA. The FtsZ ring structure is highly dynamic, constantly undergoing polymerization and depolymerization and exchange with the cytoplasmic pool. In this theoretical study, we consider a scenario of Z ring self-organization via self-enhanced attachment of FtsZ polymers due to end-to-end interactions and lateral interactions of FtsZ polymers on the membrane. With the assumption of exclusively circumferential polymer orientations, we derive coarse-grained equations for the dynamics of the pool of cytoplasmic and membrane-bound FtsZ. To capture the stochastic effects expected in the system due to low particle numbers, we simulate our computational model using a Gillespie-type algorithm. We obtain ring- and arc-shaped aggregations of FtsZ polymers on the membrane as a function of monomer numbers in the cell. In particular, our model predicts the number of FtsZ rings forming in the cell as a function of cell geometry and FtsZ concentration. We also calculate the time of FtsZ ring localization to the midplane in the presence of Min oscillations. Finally, we demonstrate that the assumptions and results of our model are confirmed by 3D reconstructions of fluorescently labeled FtsZ structures in E. coli that we obtained.
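
    A minimal Gillespie-type sketch of the kind of stochastic attachment/detachment dynamics invoked above: monomers attach to the membrane at a rate that grows with the membrane-bound pool (self-enhanced attachment) and detach at a constant per-monomer rate. The rates, pool size and simulated time are illustrative assumptions, not the paper's model parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N_total, k_on, k_fb, k_off = 2000, 0.02, 2e-4, 0.05   # toy parameters
    n_mem, t, t_end = 0, 0.0, 100.0

    while t < t_end:
        r_attach = (k_on + k_fb * n_mem) * (N_total - n_mem)   # self-enhanced attachment
        r_detach = k_off * n_mem
        r_total = r_attach + r_detach
        t += rng.exponential(1.0 / r_total)                    # time to next event
        if rng.random() < r_attach / r_total:
            n_mem += 1                                         # attachment event
        else:
            n_mem -= 1                                         # detachment event

    print(f"membrane-bound FtsZ monomers at t = {t_end}: {n_mem} of {N_total}")
    ```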

  7. Star Cluster Formation and Destruction in the Merging Galaxy NGC 3256

    NASA Astrophysics Data System (ADS)

    Mulia, A. J.; Chandar, R.; Whitmore, B. C.

    2016-07-01

    We use the Advanced Camera for Surveys on the Hubble Space Telescope to study the rich population of young massive star clusters in the main body of NGC 3256, a merging pair of galaxies with a high star formation rate (SFR) and SFR per unit area (ΣSFR). These clusters have luminosity and mass functions that follow power laws, dN/dL ∝ L^α with α = -2.23 ± 0.07, and dN/dM ∝ M^β with β = -1.86 ± 0.34 for τ < 10 Myr clusters, similar to those found in more quiescent galaxies. The age distribution can be described by dN/dτ ∝ τ^γ, with γ ≈ -0.67 ± 0.08 for clusters younger than about a few hundred million years, with no obvious dependence on cluster mass. This is consistent with a picture where ~80% of the clusters are disrupted each decade in time. We investigate the claim that galaxies with high ΣSFR form clusters more efficiently than quiescent systems by determining the fraction of stars in bound clusters (Γ) and the CMF/SFR statistic (CMF is the cluster mass function) for NGC 3256 and comparing the results with those for other galaxies. We find that the CMF/SFR statistic for NGC 3256 agrees well with that found for galaxies with ΣSFR and SFRs that are lower by 1-3 orders of magnitude, but that estimates for Γ are only robust when the same sets of assumptions are applied. Currently, Γ values available in the literature have used different sets of assumptions, making it more difficult to compare the results between galaxies.

  8. Modeling the energetic and exergetic self-sustainability of societies with different structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sciubba, E.

    1995-06-01

    The paper examines global energy and exergy flows in various models of organized human societies: from primitive tribal organizations to theocratic/aristocratic societies, to the present industrial (and post-industrial) society, to possible future highly robotized or centrally controlled social organizations. The analysis focuses on the very general chain of technological processes connected to the extraction, conversion, distribution and final use of the real energetic content of natural resources (i.e., their exergy); the biological food chain is also considered, albeit in a very simplified and humankind-centred sense. It is argued that, to sustain this chain of processes, it is necessary to use a substantial portion of the final-use energy flow, and to employ a large portion of the total work force sustained by this end-use energy. It is shown that if these quantities can be related to the total exergy flow rate (from the source) and to the total available work force, then this functional relationship takes different forms in different types of society. The procedure is very general: each type of societal organization is reduced to a simple model for which energy and exergy flow diagrams are calculated, under certain well-defined assumptions, which constrain both the exchanges among the functional groups which constitute the model and the exchanges with the environment. The results can be quantified using some assumptions/projections about energy consumption levels for different stages of technological development which are available in the literature; the procedure is applied to some models of primitive and pre-industrial societies, to the present industrial/post-industrial society, and to a hypothetical model of a future, high-technology society.

  9. Forest treatment opportunities for Kansas 1982-1991.

    Treesearch

    W. Brad Smith; W.J. Moyer

    1984-01-01

    Reviews treatment opportunities for timber stands in Kansas from 1982 to 1991. Under the assumptions and management guides specified, 45% of Kansas' commercial forest land would benefit from timber harvest or some other form of treatment during the decade.

  10. 78 FR 17220 - Agency Information Collection Activities: Application for Naturalization, Form N-400; Revision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ... validity of the methodology and assumptions used; (3) Enhance the quality, utility, and clarity of the... the inadmissibility grounds that were added by the Intelligence Reform and Terrorism Prevention Act of...

  11. Sustainability Frontiers

    ERIC Educational Resources Information Center

    Selby, David

    2010-01-01

    This article introduces Sustainability Frontiers, a newly formed, international, not-for-profit alliance of sustainability and global educators dedicated to challenging and laying bare the assumptions, exposing the blind spots, and transgressing the boundaries of mainstream understandings of sustainability-related education. Among the orthodoxies…

  12. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  13. Observation of radiation damage induced by single-ion hits at the heavy ion microbeam system

    NASA Astrophysics Data System (ADS)

    Kamiya, Tomihiro; Sakai, Takuro; Hirao, Toshio; Oikawa, Masakazu

    2001-07-01

    A single-ion hit system combined with the JAERI heavy ion microbeam system can be applied to observe individual phenomena induced by interactions between high-energy ions and a semiconductor device, using a technique to measure the pulse height of transient current (TC) signals. The reduction of the TC pulse height for a Si PIN photodiode was measured under irradiation of 15 MeV Ni ions onto various micron-sized areas in the diode. The data containing the damage effect of these irradiations were analyzed by least-squares fitting using a Weibull distribution function. Changes of the scale and shape parameters as functions of the width of the irradiation areas led us to the assumption that charge collection in a diode has a micron-level lateral extent larger than the 1 μm spatial resolution of the microbeam. Numerical simulations of these measurements were made with a simplified two-dimensional model based on this assumption using a Monte Carlo method. Calculated data reproducing the pulse-height reductions by single-ion irradiations were analyzed using the same function as that for the measurement. The result of this analysis, which shows the same trend in the change of parameters as the measurements, seems to support our assumption.
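
    A hedged sketch of the fitting step described above: draw (here, synthetic) pulse heights and recover the Weibull scale and shape parameters. For brevity this uses scipy's maximum-likelihood fit rather than the least-squares fit quoted in the abstract; all values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    pulse_heights = weibull_min.rvs(c=2.5, scale=1.0, size=400, random_state=rng)  # stand-in data

    shape, loc, scale = weibull_min.fit(pulse_heights, floc=0)   # fix location at zero
    print(f"fitted shape = {shape:.2f}, scale = {scale:.2f}")
    ```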

  14. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
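
    A hedged sketch of a polynomial mapping function of the kind described above: fit a cubic 3D polynomial that maps (distorted) reconstructed coordinates back to known calibration coordinates by least squares. The polynomial degree and the synthetic distortion are assumptions made only for illustration.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_terms(xyz, degree=3):
        """Monomials x^i y^j z^k with i + j + k <= degree, evaluated row-wise."""
        x, y, z = xyz.T
        cols = [np.ones_like(x)]
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement((x, y, z), d):
                cols.append(np.prod(combo, axis=0))
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    true_pts = rng.uniform(-1.0, 1.0, (500, 3))          # known calibration dot positions
    measured = true_pts + 0.05 * true_pts**3             # synthetic thin-lens/distortion error

    coeffs, *_ = np.linalg.lstsq(poly_terms(measured), true_pts, rcond=None)
    corrected = poly_terms(measured) @ coeffs
    print(f"max residual after mapping: {np.max(np.abs(corrected - true_pts)):.2e}")
    ```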

  15. Development of the Complex General Linear Model in the Fourier Domain: Application to fMRI Multiple Input-Output Evoked Responses for Single Subjects

    PubMed Central

    Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281

  16. Development of the complex general linear model in the Fourier domain: application to fMRI multiple input-output evoked responses for single subjects.

    PubMed

    Rio, Daniel E; Rawlings, Robert R; Woltz, Lawrence A; Gilman, Jodi; Hommer, Daniel W

    2013-01-01

    A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function.
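
    A hedged sketch of a voxel-wise nonparametric transfer-function estimate of the kind described above: divide the cross-spectrum of the stimulus input and the BOLD response by the stimulus power spectrum. The signals here are synthetic single-input stand-ins; the actual method handles multivariate inputs and the full statistical machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    stimulus = (rng.random(n) < 0.1).astype(float)               # sparse event train (toy input)
    hrf = np.exp(-np.arange(20) / 4.0)                           # toy haemodynamic impulse response
    bold = np.convolve(stimulus, hrf)[:n] + 0.1 * rng.normal(size=n)

    S = np.fft.rfft(stimulus)
    Y = np.fft.rfft(bold)
    H_hat = (Y * np.conj(S)) / (np.abs(S) ** 2 + 1e-12)          # transfer-function estimate
    print(np.round(np.abs(H_hat[:5]), 2))                        # low-frequency gain estimates
    ```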

  17. Review of Jones-Wilkins-Lee equation of state

    NASA Astrophysics Data System (ADS)

    Baudin, G.; Serradeill, R.

    The JWL EOS is widely used in different forms (two or three terms) according to the level of accuracy in the pressure-volume domain that applications need. The foundations of the relationship chosen to represent the reference curve, the Chapman-Jouguet (CJ) isentrope, can be found by assuming that the detonation-product (DP) expansion isentrope issued from the CJ point is very nearly coincident with the Crussard curve in the pressure-material velocity plane. Its mathematical expression, using an appropriate relationship between shock velocity and material velocity, leads to the exponential terms of the JWL EOS. This validates the pressure-volume relationship chosen to represent the reference curves for DP. Nevertheless, the assumption of constant Grüneisen coefficient and heat capacity in the thermal part of the EOS remains the most restrictive assumption. A new derivation of the JWL EOS is proposed, using a less restrictive assumption for the Grüneisen coefficient suggested by W.C. Davis to represent both large expansions and near-CJ states.
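
    A hedged numerical sketch of the three-term form discussed above, written as the reference (CJ) isentrope p_s(V) = A·exp(−R1·V) + B·exp(−R2·V) + C·V^−(ω+1) with V the relative volume. The coefficients below are commonly quoted handbook values for TNT and are used here purely as an illustration.

    ```python
    import numpy as np

    A, B, C = 371.2, 3.231, 1.045        # GPa, commonly quoted TNT values (illustrative)
    R1, R2, omega = 4.15, 0.95, 0.30

    def jwl_isentrope(V):
        """JWL reference isentrope pressure (GPa) as a function of relative volume V."""
        return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** -(omega + 1.0)

    for V in (1.0, 2.0, 4.0, 7.0):
        print(f"V = {V:.1f} -> p_s ≈ {jwl_isentrope(V):.3f} GPa")
    ```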

  18. [Modality specific systems of representation and processing of information. Superfluous images, useful representations, necessary evil or inevitable consequences of optimal stimulus processing].

    PubMed

    Zimmer, H D

    1993-01-01

    This paper discusses what underlies the assumption of modality-specific processing systems and representations. Starting from the information-processing approach, relevant aspects of mental representations and their physiological realizations are discussed. Three different forms of modality-specific systems are then distinguished: as stimulus-specific processing, as specific informational formats, and as modular subsystems. In parallel, three kinds of analogue systems are differentiated: as holding an analogue relation, as having a specific informational format, and as a set of specific processing constraints. These different aspects of the assumption of modality-specific systems are demonstrated using the example of visual and spatial information processing. It is concluded that postulating information-specific systems is not a superfluous assumption but a necessary one, and more likely still an inevitable consequence of the optimization of stimulus processing.

  19. Markov chain Monte Carlo estimation of quantum states

    NASA Astrophysics Data System (ADS)

    Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman

    2009-03-01

    We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
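
    A generic Metropolis-Hastings sketch of the Markov chain Monte Carlo idea used above: the chain of accepted samples approximates the posterior, from which marginals and uncertainties of any derived quantity follow. The Gaussian toy likelihood stands in for the actual tomographic likelihood, so everything below is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.3, 1.0, size=100)                 # toy measurements

    def log_post(theta):                                  # flat prior + Gaussian likelihood
        return -0.5 * np.sum((data - theta) ** 2)

    chain, theta = [], 0.0
    for _ in range(20000):
        proposal = theta + 0.1 * rng.normal()
        if np.log(rng.random()) < log_post(proposal) - log_post(theta):
            theta = proposal                              # accept the proposed move
        chain.append(theta)

    samples = np.array(chain[5000:])                      # discard burn-in
    print(f"marginal mean = {samples.mean():.3f}, std = {samples.std():.3f}")
    ```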

  20. On Impedance Spectroscopy of Supercapacitors

    NASA Astrophysics Data System (ADS)

    Uchaikin, V. V.; Sibatov, R. T.; Ambrozevich, A. S.

    2016-10-01

    Supercapacitors are often characterized by responses measured by methods of impedance spectroscopy. In the frequency domain these responses have the form of power-law functions or their linear combinations. The inverse Fourier transform leads to relaxation equations with integro-differential operators of fractional order under the assumption that the frequency response is independent of the working voltage. To compare the long-term relaxation kinetics predicted by these equations with the observed kinetics, the charging-discharging of supercapacitors (with nominal capacitances of 0.22, 0.47, and 1.0 F) has been studied by registering the current response to a step voltage signal. It is established that the reaction of the devices under study to variations of the charging regime disagrees with the model of a homogeneous linear response. It is demonstrated that relaxation is well described by a fractional stretched exponential.
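    Schematically (parameter values are not given in the abstract, so the symbols below are generic), a power-law frequency response and the stretched-exponential relaxation it leads to can be written as

    $$Z(\omega) \propto (i\omega)^{-\alpha}, \qquad I(t) \propto \exp\!\left[-(t/\tau)^{\beta}\right], \qquad 0<\alpha,\beta<1,$$

    the first form giving, after inverse Fourier transformation, relaxation equations with fractional-order integro-differential operators.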

  1. Minimisation of the LCOE for the hybrid power supply system with the lead-acid battery

    NASA Astrophysics Data System (ADS)

    Kasprzyk, Leszek; Tomczewski, Andrzej; Bednarek, Karol; Bugała, Artur

    2017-10-01

    The paper presents a methodology for minimising the unit cost of energy generated in a hybrid system with a lead-acid battery, used to power a load with a known daily load curve. For this purpose, an objective function in the form of the LCOE and the genetic algorithm method were used. Simulation tests for three types of load with set daily load characteristics were performed. Taking advantage of the legal regulations applicable in Poland regarding energy storage in the power system, the optimal structure of a prosumer solar-wind system including the lead-acid battery, meeting the condition of maximum rated power, was established. It was assumed that the whole solar energy supplied to the load would be generated in the optimised system.
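    For reference, the objective function named here, the levelised cost of electricity, is conventionally defined as the ratio of discounted lifetime costs to discounted lifetime energy output (generic notation, not taken from the paper):

    $$\mathrm{LCOE} = \frac{\sum_{t=0}^{N} (I_t + O_t + F_t)/(1+r)^t}{\sum_{t=0}^{N} E_t/(1+r)^t},$$

    with $I_t$ investment outlays, $O_t$ operation and maintenance, $F_t$ fuel costs (zero for a solar-wind-battery system), $E_t$ the energy delivered in year $t$, $r$ the discount rate, and $N$ the system lifetime; the genetic algorithm searches over the system structure to minimise this ratio.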

  2. Convergence of Spectral Discretizations of the Vlasov--Poisson System

    DOE PAGES

    Manzini, G.; Funaro, D.; Delzanno, G. L.

    2017-09-26

    Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty type term that can be applied also in the Hermite case. As a matter of fact, stability properties of the approximated scheme descend from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but results can be generalized to multidimensional domains, obtained as Cartesian product, in both space and velocity. The error estimates show the spectral convergence under suitable regularity assumptions on the exact solution.

  3. Competing risks regression for clustered data

    PubMed Central

    Zhou, Bingqing; Fine, Jason; Latouche, Aurelien; Labopin, Myriam

    2012-01-01

    A population average regression model is proposed to assess the marginal effects of covariates on the cumulative incidence function when there is dependence across individuals within a cluster in the competing risks setting. This method extends the Fine–Gray proportional hazards model for the subdistribution to situations where individuals within a cluster may be correlated due to unobserved shared factors. Estimators of the regression parameters in the marginal model are developed under an independence working assumption, with the correlation across individuals within a cluster left completely unspecified. The estimators are consistent and asymptotically normal, and variance estimation may be achieved without specifying the form of the dependence across individuals. A simulation study shows that the inferential procedures perform well with realistic sample sizes. The practical utility of the methods is illustrated with data from the European Bone Marrow Transplant Registry. PMID:22045910
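    For context, the Fine–Gray model that this work extends links covariates to the cumulative incidence of the event of interest through a proportional subdistribution hazard (generic notation):

    $$\lambda_1(t \mid Z) = \lambda_{10}(t)\exp(\beta^{\top} Z), \qquad F_1(t \mid Z) = 1 - \exp\!\left\{-\int_0^t \lambda_1(s \mid Z)\,ds\right\},$$

    so that $\beta$ acts directly on the cumulative incidence scale; the clustered-data extension estimates $\beta$ under the independence working assumption and corrects the variance for within-cluster correlation.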

  4. Methodological Issues in Examining Measurement Equivalence in Patient Reported Outcomes Measures: Methods Overview to the Two-Part Series, “Measurement Equivalence of the Patient Reported Outcomes Measurement Information System® (PROMIS®) Short Forms”

    PubMed Central

    Teresi, Jeanne A.; Jones, Richard N.

    2017-01-01

    The purpose of this article is to introduce the methods used and challenges confronted by the authors of this two-part series of articles describing the results of analyses of measurement equivalence of the short form scales from the Patient Reported Outcomes Measurement Information System® (PROMIS®). Qualitative and quantitative approaches used to examine differential item functioning (DIF) are reviewed briefly. Qualitative methods focused on generation of DIF hypotheses. The basic quantitative approaches used all rely on a latent variable model, and examine parameters either derived directly from item response theory (IRT) or from structural equation models (SEM). A key methods focus of these articles is to describe state-of-the art approaches to examination of measurement equivalence in eight domains: physical health, pain, fatigue, sleep, depression, anxiety, cognition, and social function. These articles represent the first time that DIF has been examined systematically in the PROMIS short form measures, particularly among ethnically diverse groups. This is also the first set of analyses to examine the performance of PROMIS short forms in patients with cancer. Latent variable model state-of-the-art methods for examining measurement equivalence are introduced briefly in this paper to orient readers to the approaches adopted in this set of papers. Several methodological challenges underlying (DIF-free) anchor item selection and model assumption violations are presented as a backdrop for the articles in this two-part series on measurement equivalence of PROMIS measures. PMID:28983448

  5. Attractor cosmology from nonminimally coupled gravity

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.

    2018-03-01

    By using a bottom-up reconstruction technique for nonminimally coupled scalar-tensor theories, we realize the Einstein frame attractor cosmologies in the Ω(ϕ)-Jordan frame. For our approach, what is needed for the reconstruction method to work is the functional form of the nonminimal coupling Ω(ϕ) and of the scalar-to-tensor ratio, together with the assumption of slow-roll inflation in the Ω(ϕ)-Jordan frame. By appropriately choosing the scalar-to-tensor ratio, we demonstrate that the observational indices of the attractor cosmologies can be realized directly in the Ω(ϕ)-Jordan frame. We investigate the special conditions that are required to hold for this realization to occur, and we provide the analytic form of the potential in the Ω(ϕ)-Jordan frame. Also, by performing a conformal transformation, we find the corresponding Einstein frame canonical scalar-tensor theory, and we calculate in detail the corresponding observational indices. The result indicates that although the spectral index of the primordial curvature perturbations is the same in the Jordan and Einstein frames, at leading order in the e-foldings number, the scalar-to-tensor ratio differs. We discuss the possible reasons behind this discrepancy, and we argue that the difference is due to an approximation we performed on the functional form of the potential in the Einstein frame in order to obtain analytical results, and also due to the difference in the definition of the e-foldings number in the two frames, which is also pointed out in the related literature. Finally, we find the F(R) gravity corresponding to the Einstein frame canonical scalar-tensor theory.

  6. Social factors in space station interiors

    NASA Technical Reports Server (NTRS)

    Cranz, Galen; Eichold, Alice; Hottes, Klaus; Jones, Kevin; Weinstein, Linda

    1987-01-01

    Using the example of the chair, which is often written into space station planning but which serves no non-cultural function in zero gravity, difficulties in overcoming cultural assumptions are discussed. An experimental approach is called for which would allow designers to separate cultural assumptions from logistic, social and psychological necessities. Simulations, systematic doubt and monitored brainstorming are recommended as part of basic research so that the designer will approach the problems of space module design with a complete program.

  7. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
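    As a generic illustration of the difference between the first and third interval types (this is not UCODE_2014 code; the function names and toy numbers are ours), a linear confidence interval uses only a point estimate and its standard error, while an MCMC credible interval is read off the quantiles of the pooled chains:

    ```python
    import numpy as np
    from scipy import stats

    def linear_confidence_interval(estimate, standard_error, level=0.95):
        """Linear (Gaussian) confidence interval: estimate +/- z * SE."""
        z = stats.norm.ppf(0.5 + level / 2)
        return estimate - z * standard_error, estimate + z * standard_error

    def mcmc_credible_interval(chain_samples, level=0.95):
        """Equal-tailed Bayesian credible interval from pooled MCMC samples."""
        lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
        return tuple(np.quantile(chain_samples, [lo, hi]))

    # Toy comparison for a single parameter; the samples stand in for DREAM output.
    rng = np.random.default_rng(1)
    samples = rng.normal(loc=2.0, scale=0.3, size=50_000)
    print(linear_confidence_interval(2.0, 0.3))
    print(mcmc_credible_interval(samples))
    ```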

  8. Do causal concentration-response functions exist? A critical review of associational and causal relations between fine particulate matter and mortality.

    PubMed

    Cox, Louis Anthony Tony

    2017-08-01

    Concentration-response (C-R) functions relating concentrations of pollutants in ambient air to mortality risks or other adverse health effects provide the basis for many public health risk assessments, benefits estimates for clean air regulations, and recommendations for revisions to existing air quality standards. The assumption that C-R functions relating levels of exposure and levels of response estimated from historical data usefully predict how future changes in concentrations would change risks has seldom been carefully tested. This paper critically reviews literature on C-R functions for fine particulate matter (PM2.5) and mortality risks. We find that most of them describe historical associations rather than valid causal models for predicting effects of interventions that change concentrations. The few papers that explicitly attempt to model causality rely on unverified modeling assumptions, casting doubt on their predictions about effects of interventions. A large literature on modern causal inference algorithms for observational data has been little used in C-R modeling. Applying these methods to publicly available data from Boston and the South Coast Air Quality Management District around Los Angeles shows that C-R functions estimated for one do not hold for the other. Changes in month-specific PM2.5 concentrations from one year to the next do not help to predict corresponding changes in average elderly mortality rates in either location. Thus, the assumption that estimated C-R relations predict effects of pollution-reducing interventions may not be true. Better causal modeling methods are needed to better predict how reducing air pollution would affect public health.

  9. The new diatom training set from the Polish Baltic coast and diatom-based transfer functions as a tool for understanding past changes in the southern Baltic coastal lakes

    NASA Astrophysics Data System (ADS)

    Lutyńska, Monika; Szpikowska, Grażyna; Woszczyk, Michał; Suchińska, Anita; Burchardt, Lubomira; Messyasz, Beata

    2014-05-01

    The transfer function method has been developed as a useful tool for reconstructing past environmental changes. It is based on the assumption that modern species, whose ecological requirements are known, can be used for quantitative reconstructions of past changes. The aim of the study was to gather training sets and to build diatom-based transfer functions that can be used to reconstruct changes in trophic state and salinity in the coastal lakes on the Polish Baltic coast. In previous years several attempts were made to reconstruct these parameters in lagoonal waters on the Baltic coasts of Germany, Denmark, Finland, the Netherlands, Sweden and Norway, but so far no diatom training set or transfer function has been built for the Polish coastal lakes. We sample diatoms from 12 lakes located along the Polish Baltic coast. At the same time we monitor the physical-chemical conditions in the lakes, including lake water chemical composition (chlorides, phosphorus and sulphur), pH, salinity, conductivity, temperature and dissolved oxygen. We collect samples from the lakes as well as from the Baltic Sea and analyse the whole phytoplankton composition, with special focus on diatoms. The results of the analysis show seasonal changes in the chemical and physical water properties. The diatom assemblage composition and species frequency also changed significantly. This study is a contribution to the projects NN 306 064 640, financed by the National Science Centre, Poland, and the Virtual Institute ICLEA (Integrated Climate and Landscape Evolution Analysis), funded by the Helmholtz Association.

  10. Novel Semi-Parametric Algorithm for Interference-Immune Tunable Absorption Spectroscopy Gas Sensing

    PubMed Central

    Michelucci, Umberto; Venturini, Francesca

    2017-01-01

    One of the most common limits to gas sensor performance is the presence of unwanted interference fringes arising, for example, from multiple reflections between surfaces in the optical path. Additionally, since the amplitude and the frequency of these interferences depend on the distance and alignment of the optical elements, they are affected by temperature changes and mechanical disturbances, giving rise to a drift of the signal. In this work, we present a novel semi-parametric algorithm that allows the extraction of a signal, like the spectroscopic absorption line of a gas molecule, from a background containing arbitrary disturbances, without having to make any assumption on the functional form of these disturbances. The algorithm is applied first to simulated data and then to oxygen absorption measurements in the presence of strong fringes. To the best of the authors' knowledge, the algorithm enables unprecedented accuracy, particularly if the fringes have a free spectral range and amplitude comparable to those of the signal to be detected. The described method presents the advantage of being based purely on post processing, and of being extremely straightforward to implement if the functional form of the Fourier transform of the signal is known. Therefore, it has the potential to enable interference-immune absorption spectroscopy. Finally, its relevance goes beyond absorption spectroscopy for gas sensing, since it can be applied to any kind of spectroscopic data. PMID:28991161

  11. Disconcordance in Statistical Models of Bisphenol A and Chronic Disease Outcomes in NHANES 2003-08

    PubMed Central

    Casey, Martin F.; Neidell, Matthew

    2013-01-01

    Background: Bisphenol A (BPA), a high production chemical commonly found in plastics, has drawn great attention from researchers due to the substance's potential toxicity. Using data from three National Health and Nutrition Examination Survey (NHANES) cycles, we explored the consistency and robustness of BPA's reported effects on coronary heart disease and diabetes. Methods and Findings: We report the use of three different statistical models in the analysis of BPA: (1) logistic regression, (2) log-linear regression, and (3) dose-response logistic regression. In each variation, confounders were added in six blocks to account for demographics, urinary creatinine, source of BPA exposure, healthy behaviours, and phthalate exposure. Results were sensitive to the variations in functional form of our statistical models, but no single model yielded consistent results across NHANES cycles. Reported ORs were also found to be sensitive to inclusion/exclusion criteria. Further, observed effects, which were most pronounced in NHANES 2003-04, could not be explained away by confounding. Conclusions: Limitations in the NHANES data and a poor understanding of the mode of action of BPA have made it difficult to develop informative statistical models. Given the sensitivity of effect estimates to functional form, researchers should report results using multiple specifications with different assumptions about BPA measurement, thus allowing for the identification of potential discrepancies in the data. PMID:24223205

  12. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    PubMed

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. Closing, the article derives some meta-desiderata for future research on individuals' "personality".

  13. Upon Accounting for the Impact of Isoenzyme Loss, Gene Deletion Costs Anticorrelate with Their Evolutionary Rates.

    PubMed

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel

    2017-01-01

    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  14. Extended screened exchange functional derived from transcorrelated density functional theory.

    PubMed

    Umezawa, Naoto

    2017-09-14

    We propose a new formulation of the correlation energy functional derived from the transcorrelated method in use in density functional theory (TC-DFT). An effective Hamiltonian, H_TC, is introduced by a similarity transformation of a many-body Hamiltonian, H, with respect to a complex function F: H_TC = (1/F) H F. It is proved that an expectation value of H_TC for a normalized single Slater determinant, D_n, corresponds to the total energy: E[n] = ⟨Ψ_n|H|Ψ_n⟩/⟨Ψ_n|Ψ_n⟩ = ⟨D_n|H_TC|D_n⟩, under two assumptions: (1) the electron density n(r) associated with a trial wave function Ψ_n = D_n F is v-representable and (2) Ψ_n and D_n give rise to the same electron density n(r). This formulation therefore provides an alternative expression of the total energy that is useful for the development of novel correlation energy functionals. By substituting a specific function for F, we successfully derived a model correlation energy functional which resembles the functional form of the screened exchange method. The proposed functional, named the extended screened exchange (ESX) functional, is described within two-body integrals and is parametrized to reproduce a numerically exact correlation energy of the homogeneous electron gas. The ESX functional does not contain any ingredients of (semi-)local functionals and thus is totally free from self-interactions. The computational cost for solving the self-consistent-field equation is comparable to that of the Hartree-Fock method. We apply the ESX functional to electronic structure calculations for solid silicon, the H⁻ ion, and small atoms. The results demonstrate that the TC-DFT formulation is promising for the systematic improvement of the correlation energy functional.

  15. 20 CFR 404.1694 - Final accounting by the State.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 20, Employees' Benefits; Part 404, Disability Insurance (1950- ); Determinations of Disability; Assumption of Disability Determination Function. Section 404.1694 addresses final accounting by the State, including disputes concerning final accounting issues which cannot be resolved between the State and us.

  16. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
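    As a concrete special case (illustrative, not the paper's algorithm or notation), taking the Euclidean mirror map and an L1 regularizer reduces online composite mirror descent to a proximal online gradient step with soft-thresholding and a polynomially decaying step size:

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1 (the sparsity-inducing regularizer)."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def online_composite_mirror_descent(stream, dim, lam=0.01, eta0=1.0, power=0.5):
        """Euclidean-mirror-map special case: proximal online gradient descent.

        `stream` yields (x_t, y_t) pairs; the loss is squared error, the regularizer
        is lam * ||w||_1, and the step size decays as eta0 / t**power.
        """
        w = np.zeros(dim)
        for t, (x, y) in enumerate(stream, start=1):
            eta = eta0 / t**power                          # polynomially decaying step size
            grad = (w @ x - y) * x                         # gradient of the squared loss
            w = soft_threshold(w - eta * grad, eta * lam)  # composite (prox) update
        return w

    # Toy usage with a synthetic sparse regression stream.
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
    data = ((x, x @ w_true + 0.01 * rng.standard_normal())
            for x in rng.standard_normal((2000, 5)))
    w_hat = online_composite_mirror_descent(data, dim=5)
    ```

    More general mirror maps replace the Euclidean distance in the update by a Bregman distance, which is the quantity the paper's error decomposition is built around.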

  17. Tests for the extraction of Boer-Mulders functions

    NASA Astrophysics Data System (ADS)

    Christova, Ekaterina; Leader, Elliot; Stoilov, Michail

    2017-12-01

    At present, the Boer-Mulders (BM) functions are extracted from asymmetry data using the simplifying assumption of their proportionality to the Sivers functions for each quark flavour. Here we present two independent tests for this assumption. We subject COMPASS data on semi-inclusive deep inelastic scattering on the 〈cos ϕh 〉, 〈cos 2ϕh 〉 and Sivers asymmetries to these tests. Our analysis shows that the tests are satisfied with the available data if the proportionality constant is the same for all quark flavours, which does not correspond to the flavour dependence used in existing analyses. This suggests that the published information on the BM functions may be unreliable. The 〈cos ϕh 〉, 〈cos 2ϕh 〉 asymmetries receive contributions also from the, in principle, calculable Cahn effect. We succeed in extracting the Cahn contributions from experiment (we believe for the first time) and compare with their calculated values, with interesting implications.

  18. Edemagenic gain and interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A

    2008-02-01

    Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of the microvascular filtration coefficient (K_f), effective lymphatic resistance (R_L), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R_L, and K_f, and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.

  19. Constraints on Black Hole Spin in a Sample of Broad Iron Line AGN

    NASA Technical Reports Server (NTRS)

    Brenneman, Laura W.; Reynolds, Christopher S.

    2008-01-01

    We present a uniform X-ray spectral analysis of nine type-1 active galactic nuclei (AGN) that have been previously found to harbor relativistically broadened iron emission lines. We show that the need for relativistic effects in the spectrum is robust even when one includes continuum "reflection" from the accretion disk. We then proceed to model these relativistic effects in order to constrain the spin of the supermassive black holes in these AGN. Our principal assumption, supported by recent simulations of geometrically-thin accretion disks, is that no iron line emission (or any associated X-ray reflection features) can originate from the disk within the innermost stable circular orbit. Under this assumption, which tends to lead to constraints in the form of lower limits on the spin parameter, we obtain non-trivial spin constraints on five AGN. The spin parameters of these sources range from moderate (a ≈ 0.6) to high (a > 0.96). Our results allow, for the first time, an observational constraint on the spin distribution function of local supermassive black holes. Parameterizing this as a power law in the dimensionless spin parameter, f(a) ∝ |a|^ζ, we present the probability distribution for ζ implied by our results. Our results suggest 90% and 95% confidence limits of ζ > -0.09 and ζ > -0.3, respectively.

  20. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    NASA Astrophysics Data System (ADS)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies found a few anomalies in asset pricing modelling which captured the presence of risk instability. The dynamic model is proposed to offer a better model. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can also be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of a time-varying asset pricing model using the B-spline, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart's 4-factor model.
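    Schematically (generic notation, not the authors'), each factor loading is expanded in a fixed B-spline basis and estimated by least squares, e.g. for the single-factor CAPM specification

    $$r_t = \alpha + \beta(t)\, r_{m,t} + \varepsilon_t, \qquad \beta(t) = \sum_{k=1}^{K} c_k B_k(t),$$

    where the $B_k$ are B-spline basis functions over the sample period and the coefficients $c_k$ are estimated jointly with $\alpha$; the same expansion applies to each loading in the Fama-French and Carhart specifications.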

  1. Donders is dead: cortical traveling waves and the limits of mental chronometry in cognitive neuroscience.

    PubMed

    Alexander, David M; Trengove, Chris; van Leeuwen, Cees

    2015-11-01

    An assumption nearly all researchers in cognitive neuroscience tacitly adhere to is that of space-time separability. Historically, it forms the basis of Donders' difference method, and to date, it underwrites all difference imaging and trial-averaging of cortical activity, including the customary techniques for analyzing fMRI and EEG/MEG data. We describe the assumption and how it licenses common methods in cognitive neuroscience; in particular, we show how it plays out in signal differencing and averaging, and how it misleads us into seeing the brain as a set of static activity sources. In fact, rather than being static, the domains of cortical activity change from moment to moment: Recent research has suggested the importance of traveling waves of activation in the cortex. Traveling waves have been described at a range of different spatial scales in the cortex; they explain a large proportion of the variance in phase measurements of EEG, MEG and ECoG, and are important for understanding cortical function. Critically, traveling waves are not space-time separable. Their prominence suggests that the correct frame of reference for analyzing cortical activity is the dynamical trajectory of the system, rather than the time and space coordinates of measurements. We illustrate what the failure of space-time separability implies for cortical activation, and what consequences this should have for cognitive neuroscience.

  2. Mathematical treatment of isotopologue and isotopomer speciation and fractionation in biochemical kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggi, F.M.; Riley, W.J.

    2009-11-01

    We present a mathematical treatment of the kinetic equations that describe isotopologue and isotopomer speciation and fractionation during enzyme-catalyzed biochemical reactions. These equations, presented here with the name GEBIK (general equations for biochemical isotope kinetics) and GEBIF (general equations for biochemical isotope fractionation), take into account microbial biomass and enzyme dynamics, reaction stoichiometry, isotope substitution number, and isotope location within each isotopologue and isotopomer. In addition to solving the complete GEBIK and GEBIF, we also present and discuss two approximations to the full solutions under the assumption of biomass-free and enzyme steady-state, and under the quasi-steady-state assumption as applied to the complexation rate. The complete and approximate approaches are applied to observations of biological denitrification in soils. Our analysis highlights that the full GEBIK and GEBIF provide a more accurate description of concentrations and isotopic compositions of substrates and products throughout the reaction than do the approximate forms. We demonstrate that the isotopic effects of a biochemical reaction depend, in the most general case, on substrate and complex concentrations and, therefore, the fractionation factor is a function of time. We also demonstrate that inverse isotopic effects can occur for values of the fractionation factor smaller than 1, and that reactions that do not discriminate isotopes do not necessarily imply a fractionation factor equal to 1.

  3. Estimation of the incubation period of influenza A (H1N1-2009) among imported cases: addressing censoring using outbreak data at the origin of importation.

    PubMed

    Nishiura, Hiroshi; Inaba, Hisashi

    2011-03-07

    Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved, in Hawaii, from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.
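    One way to write the likelihood contribution described here (our notation; the paper's exact formulation may differ) is to mix the incubation-period density $f$ over the exposure-time density $f_E$ during a stay of length $L$, conditioning on no onset before departure via the survival function $S$:

    $$g(t) = \frac{\int_0^{L} f_E(e)\, f(t+e)\, de}{\int_0^{L} f_E(e)\, S(e)\, de}, \qquad t > 0,$$

    where $t$ is the time from departure to onset and $e$ the time from infection to departure; taking $f_E$ uniform or exponential corresponds to the two modeling assumptions compared in the study.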

  4. Projecting treatment opportunities for current Minnesota forest conditions.

    Treesearch

    W. Brad Smith; Pamela J. Jakes

    1981-01-01

    Reviews opportunities for treatment of timber stands in Minnesota for the decade of 1977-1986. Under the assumptions and management guides specified, 27% of Minnesota's commercial forest land would require timber harvest or some other form of treatment during the decade.

  5. Mental health nursing and the problematic of supervision as a confessional act.

    PubMed

    Banks, D; Clifton, A V; Purdy, M J; Crawshaw, P

    2013-09-01

    Mental health nurses frequently draw on self-disclosure practices within their working relationships. These 'confessional' acts can in turn be predicated on traditional assumptions of moral authority exercised by more senior colleagues. More broadly, attention has been drawn to the increasing significance of 'technologies of the self' inside neo-liberal regimes of governance. Through various forms of self-disclosure people are obliged 'to speak the truth about themselves'. By publicly declaring themselves as 'fit for purpose' nurses are required to be reflexive, self-monitoring individuals, capable of constructing their own identities and biographies, and guided by expert knowledges. In this way, risk becomes a form of governance, as the individuals continually find themselves balancing risks and opportunities. Foucault's insights into the importance of 'care of the self' and 'surveillance of the self' to systems of social order and governance, such as mental health services, are significant in identifying nursing as a potential form of confessional practice. 'Reflective practice' and 'clinical supervision' are therefore 'technologies', functioning as 'modes of surveillance', and as 'confessional practices'. So 'clinical supervision' may be understood as part of a process of 'governance' that does not necessarily empower nurses, but can act to guide, correct and modify ways in which they conduct themselves. © 2012 John Wiley & Sons Ltd.

  6. Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations

    NASA Technical Reports Server (NTRS)

    Mantz, A.; Allen, S. W.

    2011-01-01

    Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.

  7. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    NASA Astrophysics Data System (ADS)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions---independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification test that allows for choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of the oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for the possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility displays a significant negative effect on innovation. A key point of this analysis lies in the fact that we impose no functional forms for technologies and the methods employed keep technical assumptions to a minimum.

  8. An Analysis of the Correspondence between Imagined Interaction Attributes and Functions

    ERIC Educational Resources Information Center

    Bodie, Graham D.; Honeycutt, James M.; Vickery, Andrea J.

    2013-01-01

    Imagined interaction (II) theory has been productive for communication and social cognition scholarship. There is, however, a yet untested assumption within II theory that the 8 attributes are related to all 6 functions and that II functions can be compared and contrasted in terms of II attributes. In addition, there is little research exploring…

  9. Automating Partial Period Bond Valuation with Excel's Day Counting Functions

    ERIC Educational Resources Information Center

    Vicknair, David; Spruell, James

    2009-01-01

    An Excel model for calculating the actual price of bonds under a 30 day/month, 360 day/year day counting assumption by nesting the DAYS360 function within the PV function is developed. When programmed into an Excel spreadsheet, the model can accommodate annual and semiannual payment bonds sold on or between interest dates using six fundamental…
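    A language-neutral sketch of the same calculation (shown in Python for self-containment; the function names are ours, and the example dates are hypothetical) pairs a 30/360 US day count, analogous to Excel's DAYS360, with present-value discounting over the fractional coupon period:

    ```python
    from datetime import date

    def days360_us(d1: date, d2: date) -> int:
        """30/360 US day count, analogous to Excel's DAYS360 with the default method."""
        day1, day2 = d1.day, d2.day
        if day1 == 31:
            day1 = 30
        if day2 == 31 and day1 == 30:
            day2 = 30
        return (d2.year - d1.year) * 360 + (d2.month - d1.month) * 30 + (day2 - day1)

    def bond_dirty_price(settle, next_coupon, coupon, yld, n_remaining, freq=2, face=100.0):
        """Invoice (dirty) price per 100 face of a bond sold between coupon dates, 30/360 US.

        `n_remaining` counts coupon payments from the next coupon date to maturity inclusive.
        """
        c = face * coupon / freq
        y = yld / freq
        # Fraction of a period remaining until the next coupon, per the 30/360 count.
        w = days360_us(settle, next_coupon) / (360 / freq)
        # Value of all remaining cash flows at the next coupon date ...
        value_at_next = sum(c / (1 + y) ** k for k in range(n_remaining)) + face / (1 + y) ** (n_remaining - 1)
        # ... discounted back over the remaining fraction of the current period.
        return value_at_next / (1 + y) ** w

    price = bond_dirty_price(date(2024, 2, 15), date(2024, 7, 1),
                             coupon=0.05, yld=0.06, n_remaining=10)
    ```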

  10. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  11. Spread of Epidemic on Complex Networks Under Voluntary Vaccination Mechanism

    NASA Astrophysics Data System (ADS)

    Xue, Shengjun; Ruan, Feng; Yin, Chuanyang; Zhang, Haifeng; Wang, Binghong

    Under the assumption that the decision to vaccinate is a voluntary behavior, in this paper we use two forms of risk functions to characterize how susceptible individuals estimate the perceived risk of infection. One is the uniform case, where each susceptible individual estimates the perceived risk of infection based only on the density of infection at each time step, so the risk function is a function of the density of infection alone; the other is the preferential case, where each susceptible individual estimates the perceived risk of infection based not only on the density of infection but also on its own activities/immediate neighbors (in network terminology, the activity or the number of immediate neighbors is the degree of the node), so the risk function is a function of the density of infection and the degree of the individual. By investigating these two ways of estimating the risk of infection for susceptible individuals on complex networks, we find that, for the preferential case, the spread of the epidemic can be effectively controlled, whereas for the uniform case the voluntary vaccination mechanism is almost ineffective in controlling the spread of the epidemic on networks. Furthermore, given the temporality of some vaccines, the epidemic waves for the two cases also differ. Therefore, our work shows that the way of estimating the perceived risk of infection determines the decision on vaccination, and thereby the success or failure of the control strategy.

  12. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
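    Schematically, the presumed-PDF averaging being assessed replaces the instantaneous source term by its mean over an assumed mixture-fraction distribution; with the two-moment β-PDF (generic notation),

    $$\overline{\dot\omega} = \int_0^1 \dot\omega(Z)\, P(Z)\, dZ, \qquad P(Z) = \frac{Z^{a-1}(1-Z)^{b-1}}{B(a,b)}, \quad a = \tilde Z\gamma, \;\; b = (1-\tilde Z)\gamma, \;\; \gamma = \frac{\tilde Z(1-\tilde Z)}{\widetilde{Z''^2}} - 1,$$

    which is why two-moment presumptions can track unimodal PDFs but degrade once the actual distribution becomes multimodal, as reported above.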

  13. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, ΔCp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that ΔCp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, ΔSf, is an estimate of ΔCp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, ΔCp was not negligible and was closer to ΔSf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
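    For reference, the full ideal-solubility expression under evaluation, and its two common simplifications, can be written (mole-fraction form, standard notation) as

    $$\ln x_{\mathrm{ideal}} = -\frac{\Delta H_f}{R}\left(\frac{1}{T}-\frac{1}{T_m}\right) + \frac{\Delta C_p}{R}\left[\frac{T_m-T}{T}-\ln\frac{T_m}{T}\right];$$

    setting $\Delta C_p = 0$ drops the bracketed term, while setting $\Delta C_p = \Delta S_f = \Delta H_f/T_m$ collapses the expression to $\ln x_{\mathrm{ideal}} = -(\Delta S_f/R)\ln(T_m/T)$. The measurements reported here indicate that $\Delta C_p$ lies closer to $\Delta S_f$ than to zero, but that neither substitution reproduces the full equation accurately.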

  14. The friable sponge model of a cometary nucleus

    NASA Technical Reports Server (NTRS)

    Horanyi, M.; Gombosi, T. I.; Korosmezey, A.; Kecskemety, K.; Szego, K.; Cravens, T. E.; Nagy, A. F.

    1984-01-01

    The mantle/core model of cometary nuclei, first suggested by Whipple and subsequently developed by Mendis and Brin, is modified and extended. New terms are added to the heat conduction equation for the mantle, which is solved in order to obtain the temperature distribution in the mantle and the gas production rate as a function of mantle thickness and heliocentric distance. These results are then combined with some specific assumptions about the mantle structure (the friable sponge model) in order to make predictions for the variation of gas production rate and mantle thickness as functions of heliocentric distance for different comets. A solution of the time-dependent heat conduction equation is presented in order to check some of the assumptions.

  15. A general model for the absorption of ultrasound by biological tissues and experimental verification.

    PubMed

    Jongen, H A; Thijssen, J M; van den Aarssen, M; Verhoef, W A

    1986-02-01

    In this paper, a closed-form expression is derived for the absorption of ultrasound by biological tissues. In this expression, the viscothermal and viscoelastic theories of relaxation processes are combined. Three relaxation time distribution functions are introduced, and it is assumed that each of these distributions can be described by an identical and simple hyperbolic function. Several simplifying assumptions had to be made to enable the experimental verification of the derived closed-form expression of the absorption coefficient. The simplified expression leaves two degrees of freedom and it was fitted to the experimental data obtained from homogenized beef liver. The model produced a considerably better fit to the data than other, more pragmatic models for the absorption coefficient as a function of frequency that could be found in the literature. Scattering in beef liver was estimated indirectly from the difference between attenuation in in vitro liver tissue as compared to absorption in a homogenate. The frequency dependence of the scattering coefficient could be described by a power law with a power of the order of 2. A comparable figure was found in direct backscattering measurements, performed at our laboratory with the same liver samples [Van den Aarssen et al., J. Acoust. Soc. Am. (to be published)]. A model for scattering recently proposed by Sehgal and Greenleaf [Ultrason. Imag. 6, 60-80 (1984)] was fitted to the scattering data as well. This latter model enabled the estimation of a maximum scatterer distance, which appeared to be of the order of 25 micron.

  16. Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.

    PubMed

    Yang, Delian; Wang, Qiang

    2015-09-28

    We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.
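    The coarse-graining performance measure referred to here is the relative entropy between the original and coarse-grained ensembles, in the general form introduced by Shell (quoted for orientation; the present work evaluates it through PRISM theory rather than simulation):

    $$S_{\mathrm{rel}} = \sum_{i} p_{\mathrm{AA}}(i)\,\ln\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}(M(i))} + \langle S_{\mathrm{map}}\rangle_{\mathrm{AA}},$$

    where $M$ maps each fine-grained configuration $i$ to a CG configuration and $S_{\mathrm{map}}$ accounts for the degeneracy of that mapping; minimising $S_{\mathrm{rel}}$ over the parameters of an analytic pair-potential form selects the best CG potential within that form.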

  17. The level crossing rates and associated statistical properties of a random frequency response function

    NASA Astrophysics Data System (ADS)

    Langley, Robin S.

    2018-03-01

    This work is concerned with the statistical properties of the frequency response function of the energy of a random system. Earlier studies have considered the statistical distribution of the function at a single frequency, or alternatively the statistics of a band-average of the function. In contrast the present analysis considers the statistical fluctuations over a frequency band, and results are obtained for the mean rate at which the function crosses a specified level (or equivalently, the average number of times the level is crossed within the band). Results are also obtained for the probability of crossing a specified level at least once, the mean rate of occurrence of peaks, and the mean trough-to-peak height. The analysis is based on the assumption that the natural frequencies and mode shapes of the system have statistical properties that are governed by the Gaussian Orthogonal Ensemble (GOE), and the validity of this assumption is demonstrated by comparison with numerical simulations for a random plate. The work has application to the assessment of the performance of dynamic systems that are sensitive to random imperfections.
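
    A minimal sketch of the band-wise statistic in question: counting the upward crossings of a specified level by a sampled frequency response. The synthetic response below is a simple sum of damped modal peaks with random natural frequencies and amplitudes, not a GOE-based model, and all parameter values are illustrative assumptions.

      import numpy as np

      def upcrossing_count(values, level):
          # number of upward crossings of `level` by the sampled curve
          below = values[:-1] < level
          above = values[1:] >= level
          return int(np.sum(below & above))

      rng = np.random.default_rng(0)
      freqs = np.linspace(100.0, 200.0, 2001)        # frequency band, Hz
      f_n = rng.uniform(100.0, 200.0, size=30)       # random natural frequencies
      amp = rng.normal(size=30) ** 2                 # random modal amplitudes
      zeta = 0.01                                    # damping ratio
      response = np.zeros_like(freqs)
      for fn, a in zip(f_n, amp):
          response += a / np.sqrt((fn**2 - freqs**2) ** 2 + (2 * zeta * fn * freqs) ** 2)

      level = 2.0 * np.mean(response)
      n_cross = upcrossing_count(response, level)
      print("up-crossings of the level within the band:", n_cross)
      print("mean crossing rate:", n_cross / (freqs[-1] - freqs[0]), "per Hz")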

  18. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Deep Borehole Field Test Requirements and Controlled Assumptions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest

    2015-07-01

This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  20. Relating color working memory and color perception.

    PubMed

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Neuroscience, quantum indeterminism and the Cartesian soul.

    PubMed

    Clarke, Peter G H

    2014-02-01

Quantum indeterminism is frequently invoked as a solution to the problem of how a disembodied soul might interact with the brain (as Descartes proposed), and is sometimes invoked in theories of libertarian free will even when they do not involve dualistic assumptions. Taking as an example the Eccles-Beck model of interaction between self (or soul) and brain at the level of synaptic exocytosis, I here evaluate the plausibility of these approaches. I conclude that Heisenbergian uncertainty is too small to affect synaptic function, and that amplification by chaos or by other means does not provide a solution to this problem. Furthermore, even if Heisenbergian effects did modify brain functioning, the changes would be swamped by those due to thermal noise. Cells and neural circuits have powerful noise-resistance mechanisms that provide adequate protection against thermal noise and must therefore be more than sufficient to buffer against Heisenbergian effects. Other forms of quantum indeterminism must be considered, because these can be much greater than Heisenbergian uncertainty, but these have not so far been shown to play a role in the brain. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Criticality of the electron-nucleus cusp condition to local effective potential-energy theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan Xiaoyin; Sahni, Viraht; Graduate School of the City University of New York, 360 Fifth Avenue, New York, New York 10016

    2003-01-01

Local (multiplicative) effective potential-energy theories of electronic structure comprise the transformation of the Schroedinger equation for interacting Fermi systems to model noninteracting Fermi or Bose systems whereby the equivalent density and energy are obtained. By employing the integrated form of the Kato electron-nucleus cusp condition, we prove that the effective electron-interaction potential energy of these model fermions or bosons is finite at a nucleus. The proof is general and valid for an arbitrary system, whether it be atomic, molecular, or solid state, and for arbitrary state and symmetry. This then provides justification for all prior work in the literature based on the assumption of finiteness of this potential energy at a nucleus. We further demonstrate the criticality of the electron-nucleus cusp condition to such theories by an example of the hydrogen molecule. We show thereby that both model system effective electron-interaction potential energies, as determined from densities derived from accurate wave functions, will be singular at the nucleus unless the wave function satisfies the electron-nucleus cusp condition.

  3. Assessment of Morphological and Functional Changes in Organs of Rats after Intramuscular Introduction of Iron Nanoparticles and Their Agglomerates

    PubMed Central

    Sizova, Elena; Miroshnikov, Sergey; Yausheva, Elena; Polyakova, Valentina

    2015-01-01

The research was performed on male Wistar rats, based on the assumption that new microelement preparations containing metal nanoparticles and their agglomerates have potential. Morphological and functional changes in tissues at the injection site and the dynamics of chemical element metabolism (25 indicators) in the body were assessed after repeated intramuscular injections (seven in total) with a preparation containing an agglomerate of iron nanoparticles. As a result, an iron depot was formed in myosymplasts at the injection sites. The quantity of muscle fibers showing positive Perls' stain increased with the number of injections. However, the concentration of most chemical elements, including iron, significantly decreased in the skeletal muscle system as a whole (injection sites not included), subsequently returning to the control level after the sixth and seventh injections. Among the studied organs (liver, kidneys, and spleen), Caspase-3 expression was revealed only in the spleen, and the expression depended directly on the number of injections. Processes of iron elimination from the preparation containing nanoparticles and their agglomerates had different intensities. PMID:25789310

  4. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

Collision-free optimal motion and trajectory planning for robotic manipulators are solved by the method of the sequential gradient restoration algorithm. Numerical examples of a two-degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and the obstacle-avoidance scheme. The obstacle is deliberately placed midway along, or even further inward of, the previous obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle-avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators with any number of DOFs and any performance indices, as well as to mobile robots. Since this method generates the optimum solution based on the Pontryagin extremum principle, rather than on assumptions, the results provide a benchmark against which other optimization techniques can be measured.

  5. Structure of cold nuclear matter at subnuclear densities by quantum molecular dynamics

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Sato, Katsuhiko; Yasuoka, Kenji; Ebisuzaki, Toshikazu

    2003-09-01

Structure of cold nuclear matter at subnuclear densities for the proton fractions x=0.5, 0.3, and 0.1 is investigated by quantum molecular dynamics (QMD) simulations. We demonstrate that the phases with slablike and rodlike nuclei, etc., can be formed dynamically from hot uniform nuclear matter without any assumptions on nuclear shape, and also systematically analyze the structure of cold matter using two-point correlation functions and Minkowski functionals. In our simulations, we also observe intermediate phases, which have complicated nuclear shapes. It is found that these phases can be characterized as those with negative Euler characteristic. Our result implies the existence of these kinds of phases in addition to the simple “pasta” phases in neutron star crusts and supernova inner cores. In addition, we investigate the properties of the effective QMD interaction used in the present work to examine the validity of our results. The resultant energy per nucleon ɛn of pure neutron matter, the proton chemical potential μp(0) in pure neutron matter, and the nuclear surface tension Esurf are generally reasonable in comparison with other nuclear interactions.

  6. Ab Initio Calculation of XAFS Debye-Waller Factors for Crystalline Materials

    NASA Astrophysics Data System (ADS)

    Dimakis, Nicholas

    2007-02-01

A direct and accurate technique for calculating the thermal X-ray absorption fine structure (XAFS) Debye-Waller factors (DWF) for materials of crystalline structure is presented. Using Density Functional Theory (DFT) under the hybrid X3LYP functional, a library of MnO spin-optimized clusters is built and their phonon spectrum properties are calculated; these properties, in the form of normal mode eigenfrequencies and eigenvectors, are in turn used for the calculation of the single and multiple scattering XAFS DWF. DWF obtained via this technique are temperature-dependent expressions and can be used to substantially reduce the number of fitting parameters when experimental spectra are fitted with a hypothetical structure, without any ad hoc assumptions. Due to the high computational demand, a hybrid approach mixing the DFT-calculated DWF with the correlated Debye model for inner and outer shells, respectively, is presented. DFT-obtained DWFs are compared with corresponding values from experimental XAFS spectra on manganosite. The effects of cluster size and of the spin parameter on the DFT-calculated DWFs are discussed.

  7. Stochastic effects in a thermochemical system with Newtonian heat exchange.

    PubMed

    Nowakowski, B; Lemarchand, A

    2001-12-01

    We develop a mesoscopic description of stochastic effects in the Newtonian heat exchange between a diluted gas system and a thermostat. We explicitly study the homogeneous Semenov model involving a thermochemical reaction and neglecting consumption of reactants. The master equation includes a transition rate for the thermal transfer process, which is derived on the basis of the statistics for inelastic collisions between gas particles and walls of the thermostat. The main assumption is that the perturbation of the Maxwellian particle velocity distribution can be neglected. The transition function for the thermal process admits a continuous spectrum of temperature changes, and consequently, the master equation has a complicated integro-differential form. We perform Monte Carlo simulations based on this equation to study the stochastic effects in the Semenov system in the explosive regime. The dispersion of ignition times is calculated as a function of system size. For sufficiently small systems, the probability distribution of temperature displays transient bimodality during the ignition period. The results of the stochastic description are successfully compared with those of direct simulations of microscopic particle dynamics.

  8. 20 CFR 416.1094 - Final accounting by the State.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

20 CFR 416.1094, Final accounting by the State (Employees' Benefits; Aged, Blind, and Disabled; Determinations of Disability; Assumption of Disability Determination Function), addresses final accounting by the State, including disputes concerning final accounting issues which cannot be resolved between the State and us...

  9. Quantifying Multi-variables in Urban Watershed Adaptation: Challenges and Opportunities

    EPA Science Inventory

Climate change and rapid socioeconomic developments are considered to be the principal variables affecting the evolution of an urban watershed and the forms and sustainability of its built environment. In the traditional approach, we are accustomed to the assumption of a stationary cli...

  10. The Bangalore Procedural Syllabus.

    ERIC Educational Resources Information Center

    Brumfit, Christopher

    1984-01-01

    Discusses the content, advantages, and disadvantages of a syllabus designed to teach English as a second language in a number of primary school classes in South India. The syllabus is based on the assumption that "form is best learned when the learner's attention is on meaning." (SED)

  11. Degradation in finite-harmonic subcarrier demodulation

    NASA Technical Reports Server (NTRS)

    Feria, Y.; Townes, S.; Pham, T.

    1995-01-01

    Previous estimates on the degradations due to a subcarrier loop assume a square-wave subcarrier. This article provides a closed-form expression for the degradations due to the subcarrier loop when a finite number of harmonics are used to demodulate the subcarrier, as in the case of the buffered telemetry demodulator. We compared the degradations using a square wave and using finite harmonics in the subcarrier demodulation and found that, for a low loop signal-to-noise ratio, using finite harmonics leads to a lower degradation. The analysis is under the assumption that the phase noise in the subcarrier (SC) loop has a Tikhonov distribution. This assumption is valid for first-order loops.
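
    To make the comparison above concrete, here is a minimal sketch of a finite-harmonic (odd-harmonic Fourier) approximation of a square-wave subcarrier versus the ideal square wave; the subcarrier frequency and sample counts are arbitrary assumptions, not values from the article.

      import numpy as np

      def finite_harmonic_subcarrier(t, f_sc, n_harmonics):
          # odd-harmonic Fourier approximation of a unit square-wave subcarrier:
          # (4/pi) * sum over odd k of sin(2*pi*k*f_sc*t) / k
          s = np.zeros_like(t)
          for k in range(1, 2 * n_harmonics, 2):
              s += np.sin(2 * np.pi * k * f_sc * t) / k
          return (4.0 / np.pi) * s

      t = np.linspace(0.0, 1e-3, 20000)
      f_sc = 25e3                                     # 25 kHz subcarrier (assumed)
      square = np.sign(np.sin(2 * np.pi * f_sc * t))
      approx = finite_harmonic_subcarrier(t, f_sc, n_harmonics=4)
      print("RMS difference, 4 harmonics vs. square wave:",
            np.sqrt(np.mean((approx - square) ** 2)))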

  12. Spectral Generation from the Ames Mars GCM for the Study of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, David R.; Kahre, Melinda A.; Wolff, Michael J.; Haberle, Robert; Hollingsworth, Jeffery L.

    2017-10-01

    Studies of martian clouds come from two distinct groups of researchers: those modeling the martian system from first principles and those observing Mars from ground-based and orbital platforms. The model-view begins with global circulation models (GCMs) or mesoscale models to track a multitude of state variables over a prescribed set of spatial and temporal resolutions. The state variables can then be processed into distinct maps of derived product variables, such as integrated optical depth of aerosol (e.g., water ice cloud, dust) or column integrated water vapor for comparison to observational results. The observer view begins, typically, with spectral images or imaging spectra, calibrated to some form of absolute units then run through some form of radiative transfer model to also produce distinct maps of derived product variables. Both groups of researchers work to adjust model parameters and assumptions until some level of agreement in derived product variables is achieved. While this system appears to work well, it is in some sense only an implicit confirmation of the model assumptions that attribute to the work from both sides. We have begun a project of testing the NASA Ames Mars GCM and key aerosol model assumptions more directly by taking the model output and creating synthetic TES-spectra from them for comparison to actual raw-reduced TES spectra. We will present some preliminary generated GCM spectra and TES comparisons.

  13. Motivation and Job Satisfaction for Middle Level Career Army Officers

    DTIC Science & Technology

    1975-06-06

Improves performance, and performance ultimately leads to reward in the form of need satisfaction. The individual's perception of this assumption and the... Colin O. Halvorson, CPT, USA; U.S. Army Command and General Staff College, Fort Leavenworth, Kansas.

  14. Aggregating Political Dimensions: Of the Feasibility of Political Indicators

    ERIC Educational Resources Information Center

    Sanin, Francisco Gutierrez; Buitrago, Diana; Gonzalez, Andrea

    2013-01-01

    Political indicators are widely used in academic writing and decision making, but remain controversial. This paper discusses the problems related to the aggregation functions they use. Almost always, political indicators are aggregated by weighted averages or summations. The use of such functions is based on untenable assumptions (existence of…

  15. The SFR-M∗ main sequence archetypal star-formation history and analytical models

    NASA Astrophysics Data System (ADS)

    Ciesla, L.; Elbaz, D.; Fensch, J.

    2017-12-01

The star-formation history (SFH) of galaxies is a key assumption in deriving their physical properties and can lead to strong biases. In this work, we derive the SFH of main sequence (MS) galaxies and show how the peak SFH of a galaxy depends on its seed mass at, for example, z = 5. This seed mass reflects the galaxy's underlying dark matter (DM) halo environment. We show that, following the MS, galaxies undergo a drastic slow down of their stellar mass growth after reaching the peak of their SFH. According to abundance matching, these masses correspond to hot and massive DM halos whose state could result in less efficient gas inflows onto the galaxies and thus could be the origin of the limited stellar mass growth. As a result, we show that galaxies, still on the MS, can enter the passive region of the UVJ diagram while still forming stars. The best fit to the MS SFH is provided by a right skew peak function for which we provide parameters depending on the seed mass of the galaxy. The ability of the classical analytical SFHs to retrieve the star-formation rate (SFR) of galaxies from spectral energy distribution (SED) fitting is studied. Due to mathematical limitations, the exponentially declining and delayed SFHs struggle to model high SFRs, which starts to be problematic at z > 2. The exponentially rising and log-normal SFHs exhibit the opposite behavior, with the ability to reach very high SFRs and thus model starburst galaxies, but they are not able to model low values such as those expected at low redshift for massive galaxies. By simulating galaxy SEDs from the MS SFH, we show that these four analytical forms recover the SFR of MS galaxies with an error dependent on the model and the redshift. They are, however, sensitive enough to probe small variations of SFR within the MS, with an error ranging from 5 to 40% depending on the SFH assumption and redshift; but all four fail to recover the SFR of rapidly quenched galaxies. However, these SFHs lead to an artificial gradient of age, parallel to the MS, which is not exhibited by the simulated sample. This gradient is also produced on real data, as we show using a sample of real galaxies with redshifts between 1.5 and 2.5. Here, we propose an SFH composed of a delayed form to model the bulk of the stellar population, with the addition of flexibility in the recent SFH. This SFH provides very good estimates of the SFR of MS, starburst, and rapidly quenched galaxies at all redshifts. Furthermore, when used on the real sample, the age gradient disappears, which shows its dependence on the SFH assumption made to perform the SED fitting.
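
    To make the SFH forms discussed above concrete, here is a small sketch of a delayed SFH with a simple "recent flexibility": after a time t_flex the SFR is scaled by a constant factor r. This mirrors the idea of the proposed SFH, but the exact parametrization and normalization used by the authors are not reproduced, and all parameter values are illustrative assumptions.

      import numpy as np

      def delayed_sfh(t, tau):
          # delayed SFH: SFR(t) proportional to t * exp(-t / tau)
          return t * np.exp(-t / tau)

      def delayed_sfh_with_flexibility(t, tau, t_flex, r):
          # keep the delayed form up to t_flex, then apply a constant factor r
          # (r < 1 mimics rapid quenching, r > 1 a recent burst)
          base = delayed_sfh(t, tau)
          return np.where(t < t_flex, base, r * delayed_sfh(t_flex, tau))

      t = np.linspace(0.0, 13.0, 1301)   # Gyr since the onset of star formation (assumed grid)
      quenched = delayed_sfh_with_flexibility(t, tau=3.0, t_flex=12.0, r=0.05)
      burst = delayed_sfh_with_flexibility(t, tau=3.0, t_flex=12.0, r=5.0)
      print("SFR just before / after t_flex (quenched case):", quenched[1199], quenched[1201])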

  16. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: The capability of different misfit functionals to image wave speed anomalies and source distribution. Possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
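
    As a minimal illustration of the correlation-Green-function assumption questioned above, the sketch below cross-correlates two synthetic noise records in the frequency domain; the delayed-copy example is purely illustrative and omits all of the preprocessing applied to real ambient-noise data.

      import numpy as np

      def noise_cross_correlation(u1, u2):
          # circular cross-correlation via the frequency domain; under idealized
          # source assumptions this estimates a scaled version of the
          # inter-station Green function
          n = len(u1)
          U1 = np.fft.rfft(u1)
          U2 = np.fft.rfft(u2)
          cc = np.fft.irfft(np.conj(U1) * U2, n=n)
          return np.fft.fftshift(cc)            # zero lag at the centre

      rng = np.random.default_rng(1)
      u1 = rng.standard_normal(4096)
      u2 = np.roll(u1, 50) + 0.5 * rng.standard_normal(4096)   # delayed copy + noise
      cc = noise_cross_correlation(u1, u2)
      print("lag of correlation peak (samples):", np.argmax(cc) - len(cc) // 2)   # about +50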

  17. A lattice Boltzmann model for the Burgers-Fisher equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations in different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, vector sigma(j) has been used. And in order to overcome the drawbacks of "error rebound," a new assumption of additional distribution is presented, where two additional terms, in first order and second order separately, are used. Comparisons with the results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to exact solutions. The model under new assumption gives better results than that with second order assumption. (c) 2010 American Institute of Physics.
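
    For readers unfamiliar with the target equation, the sketch below integrates the 1D Burgers-Fisher equation u_t + alpha*u*u_x = u_xx + beta*u*(1 - u) with a plain explicit finite-difference scheme. This is only a simple baseline against which a lattice Boltzmann implementation could be checked, not the paper's model, and all parameters and initial conditions are arbitrary assumptions.

      import numpy as np

      alpha, beta = 1.0, 1.0
      nx, nt = 200, 20000
      dx, dt = 0.05, 1e-4                          # explicit scheme: dt well below dx**2 / 2
      x = np.arange(nx) * dx
      u = 0.5 * np.exp(-((x - 5.0) ** 2))          # smooth localized initial profile

      for _ in range(nt):
          ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)             # periodic in x
          uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
          u = u + dt * (uxx + beta * u * (1.0 - u) - alpha * u * ux)

      print("max(u) after integration:", float(u.max()))   # the Fisher term drives u toward 1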

  18. Recent advances in statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Heron, K. H.

    1992-01-01

Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary part of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  19. Bartnik’s splitting conjecture and Lorentzian Busemann function

    NASA Astrophysics Data System (ADS)

    Amini, Roya; Sharifzadeh, Mehdi; Bahrampour, Yousof

    2018-05-01

    In 1988 Bartnik posed the splitting conjecture about the cosmological space-time. This conjecture has been proved by several people, with different approaches and by using some additional assumptions such as ‘S-ray condition’ and ‘level set condition’. It is known that the ‘S-ray condition’ yields the ‘level set condition’. We have proved that the two are indeed equivalent, by giving a different proof under the assumption of the ‘level set condition’. In addition, we have shown several properties of the cosmological space-time, under the presence of the ‘level set condition’. Finally we have provided a proof of the conjecture under a different assumption on the cosmological space-time. But we first prove some results without the timelike convergence condition which help us to state our proofs.

  20. Constructing inquiry: One school's journey to develop an inquiry-based school for teachers and students

    NASA Astrophysics Data System (ADS)

    Sisk-Hilton, Stephanie Lee

    This study examines the two way relationship between an inquiry-based professional development model and teacher enactors. The two year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact. The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.

  1. Assessing women's sexuality after cancer therapy: checking assumptions with the focus group technique.

    PubMed

    Bruner, D W; Boyd, C P

    1999-12-01

Cancer and cancer therapies impair sexual health in a multitude of ways. The promotion of sexual health is therefore vital for preserving quality of life and is an integral part of total or holistic cancer management. Nursing, to provide holistic care, requires research that is meaningful to patients as well as to the profession in order to develop educational and interventional studies that promote sexual health and coping. To obtain meaningful research data, instruments that are reliable, valid, and pertinent to patients' needs are required. Several sexual functioning instruments were reviewed for this study and found to be lacking in either a conceptual foundation or psychometric validation. Without a defined conceptual framework, the authors of the instruments must have made certain assumptions regarding what women undergoing cancer therapy experience and what they perceive as important. To check these assumptions before assessing women's sexuality after cancer therapies in a larger study, a pilot study was designed to compare, using the focus group technique, what women experience and perceive as important regarding their sexuality with what is assessed in several currently available research instruments. Based on the focus group findings, current sexual functioning questionnaires may be lacking in pertinent areas of concern for women treated for breast or gynecologic malignancies. Better conceptual foundations may help future questionnaire design. Self-regulation theory may provide an acceptable conceptual framework from which to develop a sexual functioning questionnaire.

  2. Robustness Regions for Dichotomous Decisions.

    ERIC Educational Resources Information Center

    Vijn, Pieter; Molenaar, Ivo W.

    1981-01-01

    In the case of dichotomous decisions, the total set of all assumptions/specifications for which the decision would have been the same is the robustness region. Inspection of this (data-dependent) region is a form of sensitivity analysis which may lead to improved decision making. (Author/BW)

  3. A Manifesto for Instructional Technology: Hyperpedagogy.

    ERIC Educational Resources Information Center

    Dwight, Jim; Garrison, Jim

    2003-01-01

    Calls for digital technology in education to embrace forms of pedagogy appropriate for hypertext, challenging western metaphysics and relying on the philosophy of John Dewey to propose an alternative. The paper reviews dominant models of curriculum, especially Ralph Tyler's, revealing their concealed metaphysical assumptions; shows that the…

  4. A Reflection on Reflection.

    ERIC Educational Resources Information Center

    Smith, Pat

    2002-01-01

    Reflects on the articles in this themed issue on reflective practice. Notes that these teacher/authors have been influenced by prior learning, past experience, feelings, attitudes, values, the school constraints on the learning environment, and their own assumptions about teaching. Describes how teachers have formed a learning community to…

  5. 78 FR 54954 - Reports, Forms, and Record Keeping Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    ... assumptions used; (iii) How to enhance the quality, utility, and clarity of the information to be collected... information: Title: National Automotive Sampling System (NASS) Law Enforcement Information Type of Request.... NHTSA's National Automotive Sampling System (NASS) collects crash data on a nationally representative...

  6. A porous flow model for the geometrical form of volcanoes - Critical comments

    NASA Technical Reports Server (NTRS)

    Wadge, G.; Francis, P.

    1982-01-01

    A critical evaluation is presented of the assumptions on which the mathematical model for the geometrical form of a volcano arising from the flow of magma in a porous medium of Lacey et al. (1981) is based. The lack of evidence for an equipotential surface or its equivalent in volcanoes prior to eruption is pointed out, and the preference of volcanic eruptions for low ground is attributed to the local stress field produced by topographic loading rather than a rising magma table. Other difficulties with the model involve the neglect of the surface flow of lava under gravity away from the vent, and the use of the Dupuit approximation for unconfined flow and the assumption of essentially horizontal magma flow. Comparisons of model predictions with the shapes of actual volcanoes reveal the model not to fit lava shield volcanoes, for which the cone represents the solidification of small lava flows, and to provide a poor fit to composite central volcanoes.

  7. Liposomogenic UV Absorbers are Water-Resistant on Pig Skin-A Model Study With Relevance for Sunscreens.

    PubMed

    Herzog, Bernd; Hüglin, Dietmar; Luther, Helmut

    2017-02-01

An important property of sunscreens is their water resistance after application to human skin. This work tested the hypothesis that UV absorber molecules able to form liposomes, so-called liposomogenic UV absorbers, show better water resistance on a pig skin model than UV-absorbing molecules lacking this ability. The underlying assumption is that molecules which can form liposomes are able to integrate into the stratum corneum lipids of the skin. Three different liposomogenic UV absorbers were synthesized and their behavior investigated, leading to confirmation of the hypothesis. With one of the liposomogenic UV absorbers, it was possible to show the integration of the UV absorber molecules into the bilayers of another liposome consisting of phosphatidylcholine, supporting the assumption that liposomogenic UV absorbers exhibit improved water resistance because they integrate into the skin lipids. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  8. Geometrically nonlinear continuum thermomechanics with surface energies coupled to diffusion

    NASA Astrophysics Data System (ADS)

    McBride, A. T.; Javili, A.; Steinmann, P.; Bargmann, S.

    2011-10-01

    Surfaces can have a significant influence on the overall response of a continuum body but are often neglected or accounted for in an ad hoc manner. This work is concerned with a nonlinear continuum thermomechanics formulation which accounts for surface structures and includes the effects of diffusion and viscoelasticity. The formulation is presented within a thermodynamically consistent framework and elucidates the nature of the coupling between the various fields, and the surface and the bulk. Conservation principles are used to determine the form of the constitutive relations and the evolution equations. Restrictions on the jump in the temperature and the chemical potential between the surface and the bulk are not a priori assumptions, rather they arise from the reduced dissipation inequality on the surface and are shown to be satisfiable without imposing the standard assumptions of thermal and chemical slavery. The nature of the constitutive relations is made clear via an example wherein the form of the Helmholtz energy is explicitly given.

  9. Characterization of external potential for field emission resonances and its applications on nanometer-scale measurements

    NASA Astrophysics Data System (ADS)

    Lu, Shin-Ming; Chan, Wen-Yuan; Su, Wei-Bin; Pai, Woei Wu; Liu, Hsiang-Lin; Chang, Chia-Seng

    2018-04-01

    The form of the external potential (FEP) for generating field emission resonance (FER) in a scanning tunneling microscopy (STM) junction is usually assumed to be triangular. We demonstrate that this assumption can be examined using a plot that can characterize FEP. The plot is FER energies versus the corresponding distances between the tip and sample. Through this energy–distance relationship, we discover that the FEP is nearly triangular for a blunt STM tip. However, the assumption of a triangular potential form is invalid for a sharp tip. The disparity becomes more severe as the tip is sharper. We demonstrate that the energy–distance plot can be exploited to determine the barrier width in field emission and estimate the effective sharpness of an STM tip. Because FERs were observed on Pb islands grown on the Cu(111) surface in this study, determination of the tip sharpness enabled the derivation of the subtle expansion deformation of Pb islands due to electrostatic force in the STM junction.

  10. The Assumption of a Reliable Instrument and Other Pitfalls to Avoid When Considering the Reliability of Data

    PubMed Central

    Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.

    2012-01-01

    The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107
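
    The attenuation issue behind pitfall (a) can be made concrete with the classical correction for attenuation, which gives the true-score correlation implied by an observed correlation and the reliabilities of the two measures; the article's point is that this attenuation should not be assumed to operate identically in every setting. The numbers below are invented for illustration.

      def disattenuated_correlation(r_xy, r_xx, r_yy):
          # correlation between true scores implied by an observed correlation
          # r_xy and the reliabilities r_xx, r_yy of the two measures
          return r_xy / (r_xx * r_yy) ** 0.5

      # illustrative: an observed r of .40 with reliabilities of .70 and .80
      print(round(disattenuated_correlation(0.40, 0.70, 0.80), 3))   # 0.535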

  11. A Molecular Dynamics Simulation of the Turbulent Couette Minimal Flow Unit

    NASA Astrophysics Data System (ADS)

    Smith, Edward

    2016-11-01

    What happens to turbulent motions below the Kolmogorov length scale? In order to explore this question, a 300 million molecule Molecular Dynamics (MD) simulation is presented for the minimal Couette channel in which turbulence can be sustained. The regeneration cycle and turbulent statistics show excellent agreement to continuum based computational fluid dynamics (CFD) at Re=400. As MD requires only Newton's laws and a form of inter-molecular potential, it captures a much greater range of phenomena without requiring the assumptions of Newton's law of viscosity, thermodynamic equilibrium, fluid isotropy or the limitation of grid resolution. The fundamental nature of MD means it is uniquely placed to explore the nature of turbulent transport. A number of unique insights from MD are presented, including energy budgets, sub-grid turbulent energy spectra, probability density functions, Lagrangian statistics and fluid wall interactions. EPSRC Post Doctoral Prize Fellowship.

  12. The electrostatics of a dusty plasma

    NASA Technical Reports Server (NTRS)

    Whipple, E. C.; Mendis, D. A.; Northrop, T. G.

    1986-01-01

The potential distribution in a plasma containing dust grains was derived for cases where the Debye length can be larger or smaller than the average intergrain spacing. Three models were treated for the grain-plasma system, with the assumption that the system of dust and plasma is charge-neutral: a permeable grain model, an impermeable grain model, and a capacitor model that does not require the nearest-neighbor approximation of the other two models. A gauge-invariant form of Poisson's equation was used, linearized about the average potential in the system. The charging currents to a grain are functions of the difference between the grain potential and this average potential. Expressions were obtained for the equilibrium potential of the grain and for the gauge-invariant capacitance between the grain and the plasma. The charge on a grain is determined by the product of this capacitance and the grain-plasma potential difference.
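
    As a rough numerical illustration of the capacitance picture in the last sentence, the sketch below uses the familiar shielded-sphere capacitance C = 4*pi*eps0*a*(1 + a/lambda_D), which is a textbook stand-in for, not the paper's, gauge-invariant capacitance; the grain size, potentials, and Debye length are assumed values.

      import math

      EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

      def grain_charge(a, phi_grain, phi_plasma, debye_length):
          # Q = C * (phi_grain - phi_plasma), with a shielded-sphere capacitance
          cap = 4.0 * math.pi * EPS0 * a * (1.0 + a / debye_length)
          return cap * (phi_grain - phi_plasma)

      # illustrative: a 1-micron grain at -2 V relative to the plasma, Debye length 1 mm
      q = grain_charge(1e-6, -2.0, 0.0, 1e-3)
      print(q, "C  ~", q / -1.602176634e-19, "electron charges")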

  13. Finite-temperature phase transitions of third and higher order in gauge theories at large N

    DOE PAGES

    Nishimura, Hiromichi; Pisarski, Robert D.; Skokov, Vladimir V.

    2018-02-15

We study phase transitions in SU(∞) gauge theories at nonzero temperature using matrix models. Our basic assumption is that the effective potential is dominated by double trace terms for the Polyakov loops. As a function of the various parameters, related to terms linear, quadratic, and quartic in the Polyakov loop, the phase diagram exhibits a universal structure. In a large region of this parameter space, there is a continuous phase transition whose order is larger than second. This is a generalization of the phase transition of Gross, Witten, and Wadia (GWW). Depending upon the detailed form of the matrix model, the eigenvalue density and the behavior of the specific heat near the transition differ drastically. Here, we speculate that in the pure gauge theory, although the deconfining transition is thermodynamically of first order, it can nevertheless be conformally symmetric at infinite N.

  14. Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.

    2011-01-01

    This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in literature and we empirically show that it significantly improves on the performances of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to the existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748
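
    A bare-bones sketch of the locally weighted combination idea: the learned, discriminatively regularized weights of the paper are replaced here by caller-supplied weight maps, and the array shapes and values are purely illustrative.

      import numpy as np

      def combine_segmenters(prob_maps, weight_maps):
          # prob_maps, weight_maps: arrays of shape (n_segmenters, H, W)
          # normalize the spatial weights per pixel, fuse, then threshold
          w = weight_maps / weight_maps.sum(axis=0, keepdims=True)
          fused = (w * prob_maps).sum(axis=0)
          return (fused >= 0.5).astype(np.uint8)

      rng = np.random.default_rng(0)
      probs = rng.random((3, 4, 4))          # three weak segmenters on a toy 4x4 image
      weights = rng.random((3, 4, 4)) + 0.1  # strictly positive local weights
      print(combine_segmenters(probs, weights))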

  15. Material Perception.

    PubMed

    Fleming, Roland W

    2017-09-15

    Under typical viewing conditions, human observers effortlessly recognize materials and infer their physical, functional, and multisensory properties at a glance. Without touching materials, we can usually tell whether they would feel hard or soft, rough or smooth, wet or dry. We have vivid visual intuitions about how deformable materials like liquids or textiles respond to external forces and how surfaces like chrome, wax, or leather change appearance when formed into different shapes or viewed under different lighting. These achievements are impressive because the retinal image results from complex optical interactions between lighting, shape, and material, which cannot easily be disentangled. Here I argue that because of the diversity, mutability, and complexity of materials, they pose enormous challenges to vision science: What is material appearance, and how do we measure it? How are material properties estimated and represented? Resolving these questions causes us to scrutinize the basic assumptions of mid-level vision.

  16. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
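
    A minimal sketch of aggregating interval-valued criterion scores with a weighted operator; the endpoint-wise mean below ignores the probability distribution that the article attaches to each interval, and all scores and weights are invented.

      def weighted_interval_mean(intervals, weights):
          # endpoint-wise weighted mean of interval numbers [lower, upper]
          total = sum(weights)
          lo = sum(w * a for (a, _), w in zip(intervals, weights)) / total
          hi = sum(w * b for (_, b), w in zip(intervals, weights)) / total
          return (lo, hi)

      # three criterion scores for one candidate site, as intervals (illustrative)
      scores = [(0.6, 0.8), (0.4, 0.7), (0.7, 0.9)]
      weights = [0.5, 0.3, 0.2]
      print(weighted_interval_mean(scores, weights))     # (0.56, 0.79)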

  17. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

This article considers the number of defective units in an arrival order to be a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our study, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages, and we let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.
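
    As background for the minimax step, here is a sketch of the distribution-free bound that such procedures typically rely on: the worst-case expected shortage over all lead-time demand distributions with a given mean and standard deviation. The numbers are illustrative, and the article's full model with defective units and a controllable backorder rate is not reproduced.

      import math

      def worst_case_expected_shortage(mu, sigma, r):
          # sup over all demand distributions with mean mu and std sigma of
          # E[(X - r)^+]  <=  ( sqrt(sigma**2 + (r - mu)**2) - (r - mu) ) / 2
          return (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu)) / 2.0

      # illustrative lead-time demand: mean 100 units, std 20, reorder point 130
      print(worst_case_expected_shortage(100.0, 20.0, 130.0))   # about 3.03 units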

  18. First measurement of the muon neutrino charged current quasielastic double differential cross section

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Anderson, C. E.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Grange, J.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Linden, S. K.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Mauger, C.; McGary, V. T.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Mousseau, J.; Nelson, R. H.; Nienaber, P.; Nowak, J. A.; Osmanov, B.; Ouedraogo, S.; Patterson, R. B.; Pavlovic, Z.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Soderberg, M.; Sorel, M.; Spentzouris, P.; Spitz, J.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R. G.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.; MiniBooNE Collaboration

    2010-05-01

    A high-statistics sample of charged-current muon neutrino scattering events collected with the MiniBooNE experiment is analyzed to extract the first measurement of the double differential cross section ((d2σ)/(dTμdcos⁡θμ)) for charged-current quasielastic (CCQE) scattering on carbon. This result features minimal model dependence and provides the most complete information on this process to date. With the assumption of CCQE scattering, the absolute cross section as a function of neutrino energy (σ[Eν]) and the single differential cross section ((dσ)/(dQ2)) are extracted to facilitate comparison with previous measurements. These quantities may be used to characterize an effective axial-vector form factor of the nucleon and to improve the modeling of low-energy neutrino interactions on nuclear targets. The results are relevant for experiments searching for neutrino oscillations.

  19. Direct measurement of weakly nonequilibrium system entropy is consistent with Gibbs–Shannon form

    PubMed Central

    2017-01-01

    Stochastic thermodynamics extends classical thermodynamics to small systems in contact with one or more heat baths. It can account for the effects of thermal fluctuations and describe systems far from thermodynamic equilibrium. A basic assumption is that the expression for Shannon entropy is the appropriate description for the entropy of a nonequilibrium system in such a setting. Here we measure experimentally this function in a system that is in local but not global equilibrium. Our system is a micron-scale colloidal particle in water, in a virtual double-well potential created by a feedback trap. We measure the work to erase a fraction of a bit of information and show that it is bounded by the Shannon entropy for a two-state system. Further, by measuring directly the reversibility of slow protocols, we can distinguish unambiguously between protocols that can and cannot reach the expected thermodynamic bounds. PMID:29073017
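
    The bound being tested can be stated in a few lines: the minimum work to change the occupation probabilities of a two-state system is k_B*T times the drop in Shannon entropy, which for full erasure reduces to the Landauer limit k_B*T*ln 2. The sketch below simply evaluates that bound (it is not the feedback-trap experiment), with an assumed temperature of 300 K.

      import math

      K_B = 1.380649e-23     # Boltzmann constant, J/K

      def shannon_entropy(p):
          # entropy (in nats) of a two-state system with probabilities p and 1-p
          return -sum(x * math.log(x) for x in (p, 1.0 - p) if x > 0.0)

      def min_erasure_work(p_initial, p_final, temperature):
          # W >= k_B * T * (S_initial - S_final)
          return K_B * temperature * (shannon_entropy(p_initial) - shannon_entropy(p_final))

      print(min_erasure_work(0.5, 1.0, 300.0))   # full bit: ~2.87e-21 J (Landauer limit)
      print(min_erasure_work(0.5, 0.8, 300.0))   # erasing a fraction of a bit costs less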

  20. Shapes and features of the primordial bispectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Jinn-Ouk; Palma, Gonzalo A.; Sypsas, Spyros, E-mail: jinn-ouk.gong@apctp.org, E-mail: gpalmaquilod@ing.uchile.cl, E-mail: s.sypsas@gmail.com

If time-dependent disruptions from slow-roll occur during inflation, the correlation functions of the primordial curvature perturbation should have scale-dependent features, a case which is marginally supported by the cosmic microwave background (CMB) data. We offer a new approach to analyze the appearance of such features in the primordial bispectrum that yields new consistency relations and justifies the search for oscillating patterns modulated by orthogonal and local templates. Under the assumption of sharp features, we find that the cubic couplings of the curvature perturbation can be expressed in terms of the bispectrum in two specific momentum configurations, for example local and equilateral. This allows us to derive consistency relations among different bispectrum shapes, which in principle could be tested in future CMB surveys. Furthermore, based on the form of the consistency relations, we construct new two-parameter templates for features that include all the known shapes.

  1. Market penetration of energy supply technologies

    NASA Astrophysics Data System (ADS)

    Condap, R. J.

    1980-03-01

    Techniques to incorporate the concepts of profit-induced growth and risk aversion into policy-oriented optimization models of the domestic energy sector are examined. After reviewing the pertinent market penetration literature, simple mathematical programs in which the introduction of new energy technologies is constrained primarily by the reinvestment of profits are formulated. The main results involve the convergence behavior of technology production levels under various assumptions about the form of the energy demand function. Next, profitability growth constraints are embedded in a full-scale model of U.S. energy-economy interactions. A rapidly convergent algorithm is developed to utilize optimal shadow prices in the computation of profitability for individual technologies. Allowance is made for additional policy variables such as government funding and taxation. The result is an optimal deployment schedule for current and future energy technologies which is consistent with the sector's ability to finance capacity expansion.

  2. A size-structured model of bacterial growth and reproduction.

    PubMed

    Ellermeyer, S F; Pilyugin, S S

    2012-01-01

    We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.

  3. Finite-temperature phase transitions of third and higher order in gauge theories at large N

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiromichi; Pisarski, Robert D.; Skokov, Vladimir V.

We study phase transitions in SU(∞) gauge theories at nonzero temperature using matrix models. Our basic assumption is that the effective potential is dominated by double trace terms for the Polyakov loops. As a function of the various parameters, related to terms linear, quadratic, and quartic in the Polyakov loop, the phase diagram exhibits a universal structure. In a large region of this parameter space, there is a continuous phase transition whose order is larger than second. This is a generalization of the phase transition of Gross, Witten, and Wadia (GWW). Depending upon the detailed form of the matrix model, the eigenvalue density and the behavior of the specific heat near the transition differ drastically. Here, we speculate that in the pure gauge theory, although the deconfining transition is thermodynamically of first order, it can nevertheless be conformally symmetric at infinite N.

  4. A B-B-G-K-Y framework for fluid turbulence

    NASA Technical Reports Server (NTRS)

    Montgomery, D.

    1975-01-01

    A kinetic theory for fluid turbulence is developed from the Liouville equation and the associated BBGKY hierarchy. Real and imaginary parts of Fourier coefficients of fluid variables play the roles of particles. Closure is achieved by the assumption of negligible five-coefficient correlation functions, and probability distributions of Fourier coefficients are the basic variables of the theory. An additional approximation leads to a closed-moment description similar to the so-called eddy-damped Markovian approximation. A kinetic equation is derived for which conservation laws and an H-theorem can be rigorously established, the H-theorem implying relaxation to the absolute equilibrium of Kraichnan. The equation can be cast in the Fokker-Planck form, and relaxation times estimated from its friction and diffusion coefficients. An undetermined parameter in the theory is the free decay time for triplet correlations. Some attention is given to the inclusion of viscous damping and external driving forces.
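
    As background for the closure described above, a representative (assumed, schematic) Fokker-Planck form for a single-coefficient distribution f(y,t), with friction coefficient A and diffusion coefficient B, together with the associated H-functional, would read:

        \[
        \frac{\partial f}{\partial t} = \frac{\partial}{\partial y}\bigl[A(y)\,f\bigr] + \frac{1}{2}\,\frac{\partial^{2}}{\partial y^{2}}\bigl[B(y)\,f\bigr],
        \qquad
        H(t) = \int f \ln f \, dy, \quad \frac{dH}{dt} \le 0,
        \]

    with relaxation times estimated, as the abstract states, from the friction and diffusion coefficients. This is a generic statement of the structure, not the paper's specific equations.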

  5. Hydration of nonelectrolytes in binary aqueous solutions

    NASA Astrophysics Data System (ADS)

    Rudakov, A. M.; Sergievskii, V. V.

    2010-10-01

    Literature data on the thermodynamic properties of binary aqueous solutions of nonelectrolytes that show negative deviations from Raoult's law due largely to the contribution of the hydration of the solute are briefly surveyed. Attention is focused on simulating the thermodynamic properties of solutions using equations of the cluster model. It is shown that the model is based on the assumption that there exists a distribution of stoichiometric hydrates over hydration numbers. In terms of the theory of ideal associated solutions, the equations for activity coefficients, osmotic coefficients, vapor pressure, and excess thermodynamic functions (volume, Gibbs energy, enthalpy, entropy) are obtained in analytical form. Basic parameters in the equations are the hydration numbers of the nonelectrolyte (the mathematical expectation of the distribution of hydrates) and the dispersions of the distribution. It is concluded that the model equations adequately describe the thermodynamic properties of a wide range of nonelectrolytes partly or completely soluble in water.

  6. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
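
    As a minimal, non-hierarchical illustration of the kind of allometric relationship being compared, the sketch below estimates a single power-law scaling exponent by ordinary least squares on log-transformed data. The hierarchical Bayesian machinery of the paper (multiple simultaneous relationships, species-level variability) is not reproduced; the variable names and simulated data are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated plant data following an assumed power law: height = a * mass**b
        mass = rng.lognormal(mean=3.0, sigma=1.0, size=200)
        height = 5.0 * mass ** 0.37 * rng.lognormal(0.0, 0.1, size=200)

        # Fit log(height) = log(a) + b * log(mass) by ordinary least squares
        X = np.column_stack([np.ones_like(mass), np.log(mass)])
        (log_a, b), *_ = np.linalg.lstsq(X, np.log(height), rcond=None)
        print(f"intercept log(a) = {log_a:.3f}, scaling exponent b = {b:.3f}")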

  7. An uncertainty analysis of the flood-stage upstream from a bridge.

    PubMed

    Sowiński, M

    2006-01-01

    The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The essential uncertainty analysis of the flood-stage upstream from a bridge starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables, including a specification of their statistics (means and variances). Next, the problem of correlated variables is discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
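
    The Latin hypercube step described above can be sketched compactly. The following is a minimal generic implementation (one stratum per sample in each dimension, strata randomly permuted so variables are paired independently); it is not the UNCSAM package's algorithm, and the marginal distributions are assumptions chosen for illustration.

        import numpy as np
        from scipy import stats

        def latin_hypercube(n_samples, marginals, seed=None):
            """Draw an LHS sample and map it through the given marginal distributions."""
            rng = np.random.default_rng(seed)
            cols = []
            for m in marginals:
                # one uniform draw inside each of the n equal-probability strata
                strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
                cols.append(m.ppf(rng.permutation(strata)))
            return np.column_stack(cols)

        # Example: two basic variables, e.g. a roughness coefficient and a discharge (assumed)
        sample = latin_hypercube(100, [stats.norm(0.035, 0.005),
                                       stats.lognorm(0.3, scale=50.0)])
        print(sample.shape)  # (100, 2)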

  8. Ultrafast Method for the Analysis of Fluorescence Lifetime Imaging Microscopy Data Based on the Laguerre Expansion Technique

    PubMed Central

    Jo, Javier A.; Fang, Qiyin; Marcu, Laura

    2007-01-01

    We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
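
    A minimal sketch of the Laguerre-expansion idea (not the authors' implementation): the intensity decay is represented as a linear combination of Laguerre functions, the basis is convolved with the instrument response, and the expansion coefficients follow from linear least squares, so no functional form of the decay needs to be assumed. The decay parameter alpha, the basis order, and the simulated data below are assumptions.

        import numpy as np
        from numpy.polynomial import laguerre

        def laguerre_basis(t, order, alpha):
            """Laguerre functions phi_j(t) = sqrt(2*alpha) * exp(-alpha*t) * L_j(2*alpha*t)."""
            cols = []
            for j in range(order):
                c = np.zeros(j + 1)
                c[j] = 1.0
                cols.append(np.sqrt(2 * alpha) * np.exp(-alpha * t) * laguerre.lagval(2 * alpha * t, c))
            return np.column_stack(cols)

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 25.0, 256)                     # time axis in ns (assumed)
        irf = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2)         # instrument response (assumed Gaussian)
        measured = np.convolve(irf, np.exp(-t / 3.5))[: t.size] + rng.normal(0, 0.02, t.size)

        # Convolve each basis function with the IRF, then solve for the expansion coefficients
        B = laguerre_basis(t, order=6, alpha=0.4)
        BC = np.column_stack([np.convolve(irf, B[:, j])[: t.size] for j in range(B.shape[1])])
        coeffs, *_ = np.linalg.lstsq(BC, measured, rcond=None)
        decay_estimate = B @ coeffs                         # deconvolved intensity decay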

  9. A two-equation model for heat transport in wall turbulent shear flows

    NASA Astrophysics Data System (ADS)

    Nagano, Y.; Kim, C.

    1988-08-01

    A new proposal for closing the energy equation is presented at the two-equation level of turbulence modeling. The eddy diffusivity concept is used in modeling. However, just as the eddy viscosity is determined from solutions of the k and epsilon equations, so the eddy diffusivity for heat is given as a function of the temperature variance and the dissipation rate of temperature fluctuations, together with k and epsilon. Thus, the proposed model does not require any questionable assumptions for the 'turbulent Prandtl number'. Modeled forms of the equations are developed to account for the physical effects of molecular Prandtl number and near-wall turbulence. The model is tested by application to a flat-plate boundary layer, the thermal entrance region of a pipe, and the turbulent heat transfer in fluids over a wide range of the Prandtl number. Agreement with experiment is generally very satisfactory.
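
    For orientation, the closure described above replaces a prescribed turbulent Prandtl number with an eddy diffusivity built from both velocity-field and thermal-field time scales. A representative, schematic form (the exact constants, damping functions, and time-scale definition differ between formulations and are assumptions here) is:

        \[
        -\overline{u_j'\theta'} = \alpha_t \,\frac{\partial \overline{\Theta}}{\partial x_j},
        \qquad
        \alpha_t = C_{\lambda}\, f_{\lambda}\, k \left(\frac{k}{\varepsilon}\,\frac{\overline{\theta'^2}}{\varepsilon_{\theta}}\right)^{1/2},
        \]

    where k and ε are the turbulence kinetic energy and its dissipation rate, and \overline{θ'^2} and ε_θ are the temperature variance and its dissipation rate.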

  10. A verified design of a fault-tolerant clock synchronization circuit: Preliminary investigations

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1992-01-01

    Schneider demonstrates that many fault tolerant clock synchronization algorithms can be represented as refinements of a single proven correct paradigm. Shankar provides mechanical proof that Schneider's schema achieves Byzantine fault tolerant clock synchronization provided that 11 constraints are satisfied. Some of the constraints are assumptions about physical properties of the system and cannot be established formally. Proofs are given that the fault tolerant midpoint convergence function satisfies three of the constraints. A hardware design is presented, implementing the fault tolerant midpoint function, which is shown to satisfy the remaining constraints. The synchronization circuit will recover completely from transient faults provided the maximum fault assumption is not violated. The initialization protocol for the circuit also provides a recovery mechanism from total system failure caused by correlated transient faults.

  11. Boltzmann equations for a binary one-dimensional ideal gas.

    PubMed

    Boozer, A D

    2011-09-01

    We consider a time-reversal invariant dynamical model of a binary ideal gas of N molecules in one spatial dimension. By making time-asymmetric assumptions about the behavior of the gas, we derive Boltzmann and anti-Boltzmann equations that describe the evolution of the single-molecule velocity distribution functions for an ensemble of such systems. We show that for a special class of initial states of the ensemble one can obtain an exact expression for the N-molecule velocity distribution function, and we use this expression to rigorously prove that the time-asymmetric assumptions needed to derive the Boltzmann and anti-Boltzmann equations hold in the limit of large N. Our results clarify some subtle issues regarding the origin of the time asymmetry of Boltzmann's H theorem.

  12. Analysis of functional importance of binding sites in the Drosophila gap gene network model.

    PubMed

    Kozlov, Konstantin; Gursky, Vitaly V; Kulakovskiy, Ivan V; Dymova, Arina; Samsonova, Maria

    2015-01-01

    The statistical thermodynamics based approach provides a promising framework for construction of the genotype-phenotype map in many biological systems. Among important aspects of a good model connecting the DNA sequence information with that of a molecular phenotype (gene expression) is the selection of regulatory interactions and relevant transcription factor binding sites. As the model may predict different levels of the functional importance of specific binding sites in different genomic and regulatory contexts, it is essential to formulate and study such models under different modeling assumptions. We elaborate a two-layer model for the Drosophila gap gene network and include in the model a combined set of transcription factor binding sites and a concentration-dependent regulatory interaction between the gap genes hunchback and Kruppel. We show that the new variants of the model are more consistent in terms of gene expression predictions for various genetic constructs in comparison to previous work. We quantify the functional importance of binding sites by calculating their impact on gene expression in the model and calculate how these impacts correlate across all sites under different modeling assumptions. The assumption about the dual interaction between hb and Kr leads to the most consistent modeling results, but, on the other hand, may obscure the existence of indirect interactions between binding sites in regulatory regions of distinct genes. The analysis confirms the previously formulated regulation concept of many weak binding sites working in concert. The model predicts a more or less uniform distribution of functionally important binding sites over the sets of experimentally characterized regulatory modules and other open chromatin domains.

  13. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag

  14. The mismeasure of machine: Synthetic biology and the trouble with engineering metaphors.

    PubMed

    Boudry, Maarten; Pigliucci, Massimo

    2013-12-01

    The scientific study of living organisms is permeated by machine and design metaphors. Genes are thought of as the "blueprint" of an organism, organisms are "reverse engineered" to discover their functionality, and living cells are compared to biochemical factories, complete with assembly lines, transport systems, messenger circuits, etc. Although the notion of design is indispensable to think about adaptations, and engineering analogies have considerable heuristic value (e.g., optimality assumptions), we argue they are limited in several important respects. In particular, the analogy with human-made machines falters when we move down to the level of molecular biology and genetics. Living organisms are far more messy and less transparent than human-made machines. Notoriously, evolution is an opportunistic tinkerer, blindly stumbling on "designs" that no sensible engineer would come up with. Despite impressive technological innovation, the prospect of artificially designing new life forms from scratch has proven more difficult than the superficial analogy with "programming" the right "software" would suggest. The idea of applying straightforward engineering approaches to living systems and their genomes-isolating functional components, designing new parts from scratch, recombining and assembling them into novel life forms-pushes the analogy with human artifacts beyond its limits. In the absence of a one-to-one correspondence between genotype and phenotype, there is no straightforward way to implement novel biological functions and design new life forms. Both the developmental complexity of gene expression and the multifarious interactions of genes and environments are serious obstacles for "engineering" a particular phenotype. The problem of reverse-engineering a desired phenotype to its genetic "instructions" is probably intractable for any but the most simple phenotypes. Recent developments in the field of bio-engineering and synthetic biology reflect these limitations. Instead of genetically engineering a desired trait from scratch, as the machine/engineering metaphor promises, researchers are making greater strides by co-opting natural selection to "search" for a suitable genotype, or by borrowing and recombining genetic material from extant life forms. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
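
    For context, the PBR algorithm discussed above is conventionally written as the product of a minimum abundance estimate, half the maximum net recruitment rate, and a recovery factor. This is the standard published form, reproduced here only as background (the symbols follow common usage rather than this paper):

        \[
        \mathrm{PBR} = N_{\min} \times \tfrac{1}{2} R_{\max} \times F_r,
        \]

    where N_min is a conservative (lower-percentile) population estimate, R_max the maximum per capita growth rate, and F_r a recovery factor typically between 0.1 and 1. The implicit assumptions criticized in the abstract enter through the density-dependence and population-trajectory conditions under which this product is actually protective.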

  16. A simple exposure-time theory for all time-nonlocal transport formulations and beyond.

    NASA Astrophysics Data System (ADS)

    Ginn, T. R.; Schreyer, L. G.

    2016-12-01

    Anomalous transport, or better put anomalous non-transport, of solutes, flowing water, suspended colloids, bacteria, etc. has been the subject of intense analyses, with multiple formulations appearing in the scientific literature from hydrology to geomorphology to chemical engineering to environmental microbiology to mathematical physics. Primary focus has recently been on time-nonlocal mass conservation formulations such as multirate mass transfer, fractional-time advection-dispersion, continuous-time random walks, and dual porosity modeling approaches that employ a convolution with a memory function to reflect respective conceptual models of delays in transport. These approaches are effective or "proxy" ones that do not always distinguish transport from immobilization delays, are generally without connection to measurable physicochemical properties, and involve variously fractional calculus, inverse Laplace or Fourier transformations, and/or complex stochastic notions including assumptions of stationarity or ergodicity at the observation scale. Here we show a much simpler approach to time-nonlocal (non-)transport that is free of all these things, and is based on expressing the memory function in terms of a rate of mobilization of immobilized mass that is a function of the contiguous time immobilized. Our approach treats mass transfer completely independently from the transport process, and it allows specification of actual immobilization mechanisms or delays. To our surprise we found that for all practical purposes any memory function can be expressed this way, including all of those associated with the multi-rate mass transfer approaches, original powerlaw, different truncated powerlaws, fractional-derivative, etc. More intriguing is the fact that the exposure-time approach can be used to construct heretofore unseen memory functions, e.g., forms that generate oscillating tails of breakthrough curves such as may occur in sediment transport, forms for delay-differential equations, and so on. Because the exposure-time approach is both simple and localized, it provides a promising platform for launching forays into non-Markovian and/or nonlinear processes and into upscaling age-dependent multicomponent reaction systems.
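
    To make the contrast concrete, a generic time-nonlocal (mobile-immobile) mass balance with memory function g can be written as below; the exposure-time idea replaces the abstract g with a mobilization rate that depends on the contiguous time a parcel has spent immobile. The notation is an assumed, illustrative choice, not the authors'.

        \[
        \frac{\partial c_m}{\partial t}
        + \beta\,\frac{\partial}{\partial t}\int_0^{t} g(t-\tau)\, c_m(x,\tau)\, d\tau
        = -\, v\,\frac{\partial c_m}{\partial x} + D\,\frac{\partial^{2} c_m}{\partial x^{2}},
        \]

    where c_m is the mobile concentration, β a capacity ratio, and g(t) the memory function encoding the distribution of immobilization delays (power law, truncated power law, exponential mixtures, and so on).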

  17. Time-varying trends of global vegetation activity

    NASA Astrophysics Data System (ADS)

    Pan, N.; Feng, X.; Fu, B.

    2016-12-01

    Vegetation plays an important role in regulating the energy change, water cycle and biochemical cycle in terrestrial ecosystems. Monitoring the dynamics of vegetation activity and understanding their driving factors have been an important issue in global change research. Normalized Difference Vegetation Index (NDVI), an indicator of vegetation activity, has been widely used in investigating vegetation changes at regional and global scales. Most studies utilized linear regression or piecewise linear regression approaches to obtain an averaged changing rate over a certain time span, with an implicit assumption that the trend didn't change over time during that period. However, no evidence shows that this assumption is right for the non-linear and non-stationary NDVI time series. In this study, we adopted the multidimensional ensemble empirical mode decomposition (MEEMD) method to extract the time-varying trends of NDVI from original signals without any a priori assumption of their functional form. Our results show that vegetation trends are spatially and temporally non-uniform during 1982-2013. Most vegetated area exhibited greening trends in the 1980s. Nevertheless, the area with greening trends decreased over time since the early 1990s, and the greening trends have stalled or even reversed in many places. Regions with browning trends were mainly located in southern low latitudes in the 1980s, whose area decreased before the middle 1990s and then increased at an accelerated rate. The greening-to-browning reversals were widespread across all continents except Oceania (43% of the vegetated areas), most of which happened after the middle 1990s. In contrast, the browning-to-greening reversals occurred in smaller area and earlier time. The area with monotonic greening and browning trends accounted for 33% and 5% of the vegetated area, respectively. By performing partial correlation analyses between NDVI and climatic elements (temperature, precipitation and cloud cover) and analyzing the MEEMD-extracted trends of these climatic elements, we discussed possible driving factors of the time-varying trends of NDVI in several specific regions where trend reversals occurred.

  18. Ideas and perspectives: how coupled is the vegetation to the boundary layer?

    NASA Astrophysics Data System (ADS)

    De Kauwe, Martin G.; Medlyn, Belinda E.; Knauer, Jürgen; Williams, Christopher A.

    2017-10-01

    Understanding the sensitivity of transpiration to stomatal conductance is critical to simulating the water cycle. This sensitivity is a function of the degree of coupling between the vegetation and the atmosphere and is commonly expressed by the decoupling factor. The degree of coupling assumed by models varies considerably and has previously been shown to be a major cause of model disagreement when simulating changes in transpiration in response to elevated CO2. The degree of coupling also offers us insight into how different vegetation types control transpiration fluxes, which is fundamental to our understanding of land-atmosphere interactions. To explore this issue, we combined an extensive literature summary from 41 studies with estimates of the decoupling coefficient estimated from FLUXNET data. We found some notable departures from the values previously reported in single-site studies. There was large variability in estimated decoupling coefficients (range 0.05-0.51) for evergreen needleleaf forests. This is a result that was broadly supported by our literature review but contrasts with the early literature which suggests that evergreen needleleaf forests are generally well coupled. Estimates from FLUXNET indicated that evergreen broadleaved forests were the most tightly coupled, differing from our literature review and instead suggesting that it was evergreen needleleaf forests. We also found that the assumption that grasses would be strongly decoupled (due to vegetation stature) was only true for high precipitation sites. These results were robust to assumptions about aerodynamic conductance and, to a lesser extent, energy balance closure. Thus, these data form a benchmarking metric against which to test model assumptions about coupling. Our results identify a clear need to improve the quantification of the processes involved in scaling from the leaf to the whole ecosystem. Progress could be made with targeted measurement campaigns at flux sites and greater site characteristic information across the FLUXNET network.
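
    For reference, the decoupling coefficient referred to above is commonly written in the Jarvis-McNaughton form, shown here only as background (the exact expression and corrections applied to the FLUXNET estimates are not given in the abstract):

        \[
        \Omega = \frac{1 + \epsilon}{1 + \epsilon + g_a/g_s}, \qquad \epsilon = s/\gamma,
        \]

    where g_a and g_s are the aerodynamic and surface (stomatal) conductances, s the slope of the saturation vapour pressure curve, and γ the psychrometric constant; Ω near 0 indicates tight coupling to the atmosphere and Ω near 1 strong decoupling.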

  19. Khokhlov Zabolotskaya Kuznetsov type equation: nonlinear acoustics in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Kostin, Ilya; Panasenko, Grigory

    2006-04-01

    The KZK type equation introduced in this Note differs from the traditional form of the KZK model known in acoustics by the assumptions on the nonlinear term. For this modified form, a global existence and uniqueness result is established for the case of non-constant coefficients. Afterwards the asymptotic behaviour of the solution of the KZK type equation with rapidly oscillating coefficients is studied. To cite this article: I. Kostin, G. Panasenko, C. R. Mecanique 334 (2006).

  20. Cabin Environment Physics Risk Model

    NASA Technical Reports Server (NTRS)

    Mattenberger, Christopher J.; Mathias, Donovan Leigh

    2014-01-01

    This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft's risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.

  1. Welcome to PKTI.

    ERIC Educational Resources Information Center

    Whitmore, Kathryn F.; Norton-Meier, Lori A.

    2000-01-01

    Forms part of a themed issue describing "Parent-Kid-Teacher Investigators," a program in which parents, children, and teachers gather regularly to use language and literacy for action research projects. Explains the philosophy, assumptions, and intentions behind the program, and research that supports it. Offers action examples of how each belief…

  2. 14 CFR 171.265 - Glide path performance requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... impressed on the microwave carrier of the radiated glide slope signal in the form of a unique summation of... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave... assumption that the aircraft is heading directly toward the facility. (a) The glide slope antenna system must...

  3. 14 CFR 171.265 - Glide path performance requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... impressed on the microwave carrier of the radiated glide slope signal in the form of a unique summation of... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave... assumption that the aircraft is heading directly toward the facility. (a) The glide slope antenna system must...

  4. 14 CFR 171.265 - Glide path performance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... impressed on the microwave carrier of the radiated glide slope signal in the form of a unique summation of... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave... assumption that the aircraft is heading directly toward the facility. (a) The glide slope antenna system must...

  5. 14 CFR 171.265 - Glide path performance requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... impressed on the microwave carrier of the radiated glide slope signal in the form of a unique summation of... TRANSPORTATION (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave... assumption that the aircraft is heading directly toward the facility. (a) The glide slope antenna system must...

  6. Developing a Conceptual Framework for Student Learning during International Community Engagement

    ERIC Educational Resources Information Center

    Pink, Matthew A.; Taouk, Youssef; Guinea, Stephen; Bunch, Katie; Flowers, Karen; Nightingale, Karen

    2016-01-01

    University-community engagement often involves students engaging with people who experience multiple forms of disadvantage or marginalization. This is particularly true when universities work with communities in developing nations. Participation in these projects can be challenging for students. Assumptions about themselves, their professional…

  7. Alternatives for discounting in the analysis of noninferiority trials.

    PubMed

    Snapinn, Steven M

    2004-05-01

    Determining the efficacy of an experimental therapy relative to placebo on the basis of an active-control noninferiority trial requires reference to historical placebo-controlled trials. The validity of the resulting comparison depends on two key assumptions: assay sensitivity and constancy. Since the truth of these assumptions cannot be verified, it seems logical to raise the standard of evidence required to declare efficacy; this concept is referred to as discounting. It is not often recognized that two common design and analysis approaches, setting a noninferiority margin and requiring preservation of a fraction of the standard therapy's effect, are forms of discounting. The noninferiority margin is a particularly poor approach, since its degree of discounting depends on an irrelevant factor. Preservation of effect is more reasonable, but it addresses only the constancy assumption, not the issue of assay sensitivity. Gaining consensus on the most appropriate approach to the design and analysis of noninferiority trials will require a common understanding of the concept of discounting.
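
    To see why both approaches amount to discounting, it helps to write the two null hypotheses side by side (standard notation, added here only as illustration; μ_E, μ_C, and μ_P denote the mean responses on the experimental therapy, the active control, and historical placebo, with larger values better):

        \[
        \text{fixed margin:}\quad \mu_C - \mu_E \le \delta,
        \qquad
        \text{preserve fraction } f:\quad \mu_E - \mu_P \ge f\,(\mu_C - \mu_P)
        \;\Longleftrightarrow\;
        \mu_C - \mu_E \le (1-f)\,(\mu_C - \mu_P).
        \]

    The preservation criterion is thus a margin that scales with the historical control effect, which is why it addresses the constancy assumption but, as the abstract notes, not assay sensitivity.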

  8. Building Blocks of Psychology: on Remaking the Unkept Promises of Early Schools.

    PubMed

    Gozli, Davood G; Deng, Wei Sophia

    2018-03-01

    The appeal and popularity of "building blocks", i.e., simple and dissociable elements of behavior and experience, persists in psychological research. We begin our assessment of this research strategy with an historical review of structuralism (as espoused by E. B. Titchener) and behaviorism (espoused by J. B. Watson and B. F. Skinner), two movements that held the assumption in their attempts to provide a systematic and unified discipline. We point out the ways in which the elementism of the two schools selected, framed, and excluded topics of study. After the historical review, we turn to contemporary literature and highlight the persistence of research into building blocks and the associated framing and exclusions in psychological research. The assumption that complex categories of human psychology can be understood in terms of their elementary components and simplest forms seems indefensible. In specific cases, therefore, reliance on the assumption requires justification. Finally, we review alternative strategies that bypass the commitment to building blocks.

  9. The Role of Parametric Assumptions in Adaptive Bayesian Estimation

    ERIC Educational Resources Information Center

    Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.

    2004-01-01

    Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…

  10. DIFFERENCES IN THE STRUCTURE AND FUNCTION OF FATHEAD MINNOW AND HUMAN ERA: IMPLICATIONS FOR IN VITRO TESTING OF ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    Mammalian receptors and assay systems are generally used for in vitro analysis of endocrine disrupting chemicals (EDC) with the assumption that minor differences in amino acid sequences among species do not translate into significant differences in receptor function. We have fou...

  11. Parenting and Preschoolers' Symptoms as a Function of Child Gender and SES

    ERIC Educational Resources Information Center

    Kim, Hyun-Jeong; Arnold, David H.; Fisher, Paige H.; Zeljo, Alexandra

    2005-01-01

    Improving parental discipline practices is a central target of behavioral parent training programs, but little research has examined how discipline varies as a function of gender. Based on the assumption that socialization practices might be related to gender differences in psychopathology, we examined relations between parenting and problem…

  12. Caregivers' Agreement and Validity of Indirect Functional Analysis: A Cross Cultural Evaluation across Multiple Problem Behavior Topographies

    ERIC Educational Resources Information Center

    Virues-Ortega, Javier; Segui-Duran, David; Descalzo-Quero, Alberto; Carnerero, Jose Julio; Martin, Neil

    2011-01-01

    The Motivation Assessment Scale is an aid for hypothesis-driven functional analysis. This study presents its Spanish cross-cultural validation while examining psychometric attributes not yet explored. The study sample comprised 80 primary caregivers of children with autism. Acceptability, scaling assumptions, internal consistency, factor…

  13. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    ERIC Educational Resources Information Center

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  14. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and makes predictions with the predict() function. This article introduces the two functions naiveBayes() and train() for performing Naïve Bayes classification.
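
    The article's examples use the R functions named above; as a language-neutral sketch of the same independence-assumption classifier, the following uses scikit-learn in Python. The iris data and the Gaussian likelihood are assumptions made for the example, not taken from the article.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        model = GaussianNB()                 # assumes conditional independence of features
        model.fit(X_train, y_train)          # training step
        print(model.score(X_test, y_test))   # accuracy of the predictions on held-out data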

  15. Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.

    PubMed

    Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui

    2017-02-06

    Multi-Instance (MI) learning has been proven to be effective for genome-wide protein function prediction problems where each training example is associated with multiple instances. Many studies in this literature attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under a common assumption: the underlying distribution of the testing data (target domain, i.e., TD) is the same as that of the training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, in this paper, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution to the target domain distribution by utilizing the bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternative optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.

  16. Network discovery with DCM

    PubMed Central

    Friston, Karl J.; Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E.

    2011-01-01

    This paper is about inferring or discovering the functional architecture of distributed systems using Dynamic Causal Modelling (DCM). We describe a scheme that recovers the (dynamic) Bayesian dependency graph (connections in a network) using observed network activity. This network discovery uses Bayesian model selection to identify the sparsity structure (absence of edges or connections) in a graph that best explains observed time-series. The implicit adjacency matrix specifies the form of the network (e.g., cyclic or acyclic) and its graph-theoretical attributes (e.g., degree distribution). The scheme is illustrated using functional magnetic resonance imaging (fMRI) time series to discover functional brain networks. Crucially, it can be applied to experimentally evoked responses (activation studies) or endogenous activity in task-free (resting state) fMRI studies. Unlike conventional approaches to network discovery, DCM permits the analysis of directed and cyclic graphs. Furthermore, it eschews (implausible) Markovian assumptions about the serial independence of random fluctuations. The scheme furnishes a network description of distributed activity in the brain that is optimal in the sense of having the greatest conditional probability, relative to other networks. The networks are characterised in terms of their connectivity or adjacency matrices and conditional distributions over the directed (and reciprocal) effective connectivity between connected nodes or regions. We envisage that this approach will provide a useful complement to current analyses of functional connectivity for both activation and resting-state studies. PMID:21182971

  17. Estimation and Application of Ecological Memory Functions in Time and Space

    NASA Astrophysics Data System (ADS)

    Itter, M.; Finley, A. O.; Dawson, A.

    2017-12-01

    A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological processes.
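
    A schematic version of the weight-generating (memory) function described above: the process y_t responds to a weighted sum of lagged covariate values, and the lag weights are expressed through a small set of basis functions so that only a few parameters need to be estimated. The symbols are illustrative assumptions, not the authors' notation.

        \[
        y_t = f\!\Bigl(\sum_{l=0}^{L} w_l\, x_{t-l}\Bigr) + \varepsilon_t,
        \qquad
        w_l = \sum_{k=1}^{K} \beta_k\, B_k(l),
        \]

    where the B_k are fixed basis functions over lag l (splines, for example) and the β_k are estimated within the Bayesian hierarchy, so the memory function w_l and its uncertainty follow directly from the posterior of β.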

  18. Narayanaswamy’s 1971 aging theory and material time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyre, Jeppe C., E-mail: dyre@ruc.dk

    2015-09-21

    The Bochkov-Kuzovlev nonlinear fluctuation-dissipation theorem is used to derive Narayanaswamy’s phenomenological theory of physical aging, in which this highly nonlinear phenomenon is described by a linear material-time convolution integral. A characteristic property of the Narayanaswamy aging description is material-time translational invariance, which is here taken as the basic assumption of the derivation. It is shown that only one possible definition of the material time obeys this invariance, namely, the square of the distance travelled from a configuration of the system far back in time. The paper concludes with suggestions for computer simulations that test for consequences of material-time translational invariance. One of these is the “unique-triangles property” according to which any three points on the system’s path form a triangle such that two side lengths determine the third; this is equivalent to the well-known triangular relation for time-autocorrelation functions of aging spin glasses [L. F. Cugliandolo and J. Kurchan, J. Phys. A: Math. Gen. 27, 5749 (1994)]. The unique-triangles property implies a simple geometric interpretation of out-of-equilibrium time-autocorrelation functions, which extends to aging a previously proposed framework for such functions in equilibrium [J. C. Dyre, e-print arXiv:cond-mat/9712222 (1997)].

  19. Is the effective field theory of dark energy effective?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, Eric V.; Sengör, Gizem; Watson, Scott, E-mail: evlinder@lbl.gov, E-mail: gsengor@syr.edu, E-mail: gswatson@syr.edu

    2016-05-01

    The effective field theory of cosmic acceleration systematizes possible contributions to the action, accounting for both dark energy and modifications of gravity. Rather than making model dependent assumptions, it includes all terms, subject to the required symmetries, with four (seven) functions of time for the coefficients. These correspond respectively to the Horndeski and general beyond Horndeski class of theories. We address the question of whether this general systematization is actually effective, i.e. useful in revealing the nature of cosmic acceleration when compared with cosmological data. The answer is no and yes: there is no simple time dependence of the free functions (assumed forms in the literature are poor fits), but one can derive some general characteristics in early and late time limits. For example, we prove that the gravitational slip must restore to general relativity in the de Sitter limit of Horndeski theories, and why it doesn't more generally. We also clarify the relation between the tensor and scalar sectors, and its important relation to observations; in a real sense the expansion history H(z) or dark energy equation of state w(z) is 1/5 or less of the functional information! In addition we discuss the de Sitter, Horndeski, and decoupling limits of the theory utilizing Goldstone techniques.

  20. Interaction between lexical and grammatical language systems in the brain

    NASA Astrophysics Data System (ADS)

    Ardila, Alfredo

    2012-06-01

    This review concentrates on two different language dimensions: lexical/semantic and grammatical. This distinction between a lexical/semantic system and a grammatical system is well known in linguistics, but in cognitive neurosciences it has been obscured by the assumption that there are several forms of language disturbances associated with focal brain damage and hence language includes a diversity of functions (phoneme discrimination, lexical memory, grammar, repetition, language initiation ability, etc.), each one associated with the activity of a specific brain area. The clinical observation of patients with cerebral pathology shows that there are indeed only two different forms of language disturbances (disturbances in the lexical/semantic system and disturbances in the grammatical system); these two language dimensions are supported by different brain areas (temporal and frontal) in the left hemisphere. Furthermore, these two aspects of the language are developed at different ages during child's language acquisition, and they probably appeared at different historical moments during human evolution. Mechanisms of learning are different for both language systems: whereas the lexical/semantic knowledge is based in a declarative memory, grammatical knowledge corresponds to a procedural type of memory. Recognizing these two language dimensions can be crucial in understanding language evolution and human cognition.

  1. Cortisol shifts financial risk preferences

    PubMed Central

    Kandasamy, Narayanan; Hardy, Ben; Page, Lionel; Schaffner, Markus; Graggaber, Johann; Powlson, Andrew S.; Fletcher, Paul C.; Gurnell, Mark; Coates, John

    2014-01-01

    Risk taking is central to human activity. Consequently, it lies at the focal point of behavioral sciences such as neuroscience, economics, and finance. Many influential models from these sciences assume that financial risk preferences form a stable trait. Is this assumption justified and, if not, what causes the appetite for risk to fluctuate? We have previously found that traders experience a sustained increase in the stress hormone cortisol when the amount of uncertainty, in the form of market volatility, increases. Here we ask whether these elevated cortisol levels shift risk preferences. Using a double-blind, placebo-controlled, cross-over protocol we raised cortisol levels in volunteers over 8 d to the same extent previously observed in traders. We then tested for the utility and probability weighting functions underlying their risk taking and found that participants became more risk-averse. We also observed that the weighting of probabilities became more distorted among men relative to women. These results suggest that risk preferences are highly dynamic. Specifically, the stress response calibrates risk taking to our circumstances, reducing it in times of prolonged uncertainty, such as a financial crisis. Physiology-induced shifts in risk preferences may thus be an underappreciated cause of market instability. PMID:24550472

  2. Cortisol shifts financial risk preferences.

    PubMed

    Kandasamy, Narayanan; Hardy, Ben; Page, Lionel; Schaffner, Markus; Graggaber, Johann; Powlson, Andrew S; Fletcher, Paul C; Gurnell, Mark; Coates, John

    2014-03-04

    Risk taking is central to human activity. Consequently, it lies at the focal point of behavioral sciences such as neuroscience, economics, and finance. Many influential models from these sciences assume that financial risk preferences form a stable trait. Is this assumption justified and, if not, what causes the appetite for risk to fluctuate? We have previously found that traders experience a sustained increase in the stress hormone cortisol when the amount of uncertainty, in the form of market volatility, increases. Here we ask whether these elevated cortisol levels shift risk preferences. Using a double-blind, placebo-controlled, cross-over protocol we raised cortisol levels in volunteers over 8 d to the same extent previously observed in traders. We then tested for the utility and probability weighting functions underlying their risk taking and found that participants became more risk-averse. We also observed that the weighting of probabilities became more distorted among men relative to women. These results suggest that risk preferences are highly dynamic. Specifically, the stress response calibrates risk taking to our circumstances, reducing it in times of prolonged uncertainty, such as a financial crisis. Physiology-induced shifts in risk preferences may thus be an underappreciated cause of market instability.

  3. Evolution of female-specific wingless forms in bagworm moths.

    PubMed

    Niitsu, Shuhei; Sugawara, Hirotaka; Hayashi, Fumio

    2017-01-01

    The evolution of winglessness in insects has been typically interpreted as a consequence of developmental and other adaptations to various environments that are secondarily derived from a winged morph. Several species of bagworm moths (Insecta: Lepidoptera, Psychidae) exhibit a case-dwelling larval life style along with one of the most extreme cases of sexual dimorphism: wingless female adults. While the developmental process that led to these wingless females is well known, the origins and evolutionary transitions are not yet understood. To examine the evolutionary patterns of wing reduction in bagworm females, we reconstruct the molecular phylogeny of over 30 Asian species based on both mitochondrial (cytochrome c oxidase subunit I) and nuclear (28S rRNA) DNA sequences. Under a parsimonious assumption, the molecular phylogeny implies that: (i) the evolutionary wing reduction towards wingless females consisted of two steps: (Step I) from functional wings to vestigial wings (nonfunctional) and (Step II) from vestigial wings to the most specialized vermiform adults (lacking wings and legs); and (ii) vermiform morphs evolved independently at least twice. Based on the results of our study, we suggest that the evolutionary changes in the developmental system are essential for the establishment of different wingless forms in insects. © 2016 Wiley Periodicals, Inc.

  4. Precision reconstruction of manufactured free-form components

    NASA Astrophysics Data System (ADS)

    Ristic, Mihailo; Brujic, Djordje; Ainsworth, Iain

    2000-03-01

    Manufacturing needs in many industries, especially the aerospace and the automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms are able to perform in the presence of measurement noise, while making no special assumptions about regularity of data distribution. In order to deal with real life situations, a number of supporting functions for geometric modeling were required and these are described. The presented methodology was applied using real aeroengine parts and the experimental results are presented.

  5. Convergence to Diagonal Form of Block Jacobi-type Processes

    NASA Astrophysics Data System (ADS)

    Hari, Vjeran

    2008-09-01

    The main result of recent research on convergence to diagonal form of block Jacobi-type processes is presented. For this purpose, all notions needed to describe the result are introduced. In particular, elementary block transformation matrices, simple and non-simple algorithms, block pivot strategies together with the appropriate equivalence relations are defined. The general block Jacobi-type process considered here can be specialized to take the form of almost any known Jacobi-type method for solving the ordinary or the generalized matrix eigenvalue and singular value problems. The assumptions used in the result are satisfied by many concrete methods.

  6. An experimental study of nonlinear dynamic system identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1990-01-01

    A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  7. Backscattering from a two-scale rough surface with application to radar sea return

    NASA Technical Reports Server (NTRS)

    Chan, H. L.; Fung, A. K.

    1973-01-01

    A two-scale composite surface scattering theory was developed without using the noncoherent assumption. The surface is assumed electrically homogeneous and finitely conducting; the surface roughness may be nonuniform geometrically. The special forms of the terms for excluding the non-coherent assumption and the meanings of these terms are discussed. To gain insight into the mechanisms of backscattering, the results are compared with those obtained by previous theories. The comparison with NRL data shows satisfactory agreement for both horizontal and vertical polarization, especially for incident angles larger than 30 deg. For smaller incident angles, NASA/JSC data have been chosen for comparison and close agreement is again observed.

  8. Lifespan development of pro- and anti-saccades: multiple regression models for point estimates.

    PubMed

    Klein, Christoph; Foerster, Friedrich; Hartnegg, Klaus; Fischer, Burkhart

    2005-12-07

    The comparative study of anti- and pro-saccade task performance contributes to our functional understanding of the frontal lobes, their alterations in psychiatric or neurological populations, and their changes during the life span. In the present study, we apply regression analysis to model life span developmental effects on various pro- and anti-saccade task parameters, using data of a non-representative sample of 327 participants aged 9 to 88 years. Development up to the age of about 27 years was dominated by curvilinear rather than linear effects of age. Furthermore, the largest developmental differences were found for intra-subject variability measures and the anti-saccade task parameters. Ageing, by contrast, had the shape of a global linear decline of the investigated saccade functions, lacking the differential effects of age observed during development. While these results do support the assumption that frontal lobe functions can be distinguished from other functions by their strong and protracted development, they do not confirm the assumption of disproportionate deterioration of frontal lobe functions with ageing. We finally show that the regression models applied here to quantify life span developmental effects can also be used for individual predictions in applied research contexts or clinical practice.

  9. Collective behaviour of dislocations in a finite medium

    NASA Astrophysics Data System (ADS)

    Kooiman, M.; Hütter, M.; Geers, M. G. D.

    2014-04-01

    We derive the grand-canonical partition function of straight and parallel dislocation lines without making a priori assumptions on the temperature regime. Such a systematic derivation for dislocations has, to the best of our knowledge, not been carried out before, and several conflicting assumptions on the free energy of dislocations have been made in the literature. Dislocations have gained interest as they are the carriers of plastic deformation in crystalline materials and solid polymers, and they constitute a prototype system for two-dimensional Coulomb particles. Our microscopic starting level is the description of dislocations as used in the discrete dislocation dynamics (DDD) framework. The macroscopic level of interest is characterized by the temperature, the boundary deformation and the dislocation density profile. By integrating over state space, we obtain a field theoretic partition function, which is a functional integral of the Boltzmann weight over an auxiliary field. The Hamiltonian consists of a term quadratic in the field and an exponential of this field. The partition function is strongly non-local, and reduces in special cases to the sine-Gordon model. Moreover, we determine implicit expressions for the response functions and the dominant scaling regime for metals, namely the low-temperature regime.
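
    For orientation only, the special case mentioned at the end of the abstract is the standard Euclidean sine-Gordon model; its partition function is shown below in generic textbook form (the paper's exact Hamiltonian, couplings, and boundary terms are not reproduced here):

      Z = \int \mathcal{D}\phi \, \exp\!\left\{ -\int \mathrm{d}^2 x \left[ \tfrac{1}{2} (\nabla\phi)^2 - g \cos\phi \right] \right\}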

  10. Monitored Geologic Repository Project Description Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. M. Curry

    2001-01-30

    The primary objective of the Monitored Geologic Repository Project Description Document (PDD) is to allocate the functions, requirements, and assumptions to the systems at Level 5 of the Civilian Radioactive Waste Management System (CRWMS) architecture identified in Section 4. It provides traceability of the requirements to those contained in Section 3 of the "Monitored Geologic Repository Requirements Document" (MGR RD) (YMP 2000a) and other higher-level requirements documents. In addition, the PDD allocates design related assumptions to work products of non-design organizations. The document provides Monitored Geologic Repository (MGR) technical requirements in support of design and performance assessment in preparing for the Site Recommendation (SR) and License Application (LA) milestones. The technical requirements documented in the PDD are to be captured in the System Description Documents (SDDs) which address each of the systems at Level 5 of the CRWMS architecture. The design engineers obtain the technical requirements from the SDDs and by reference from the SDDs to the PDD. The design organizations and other organizations will obtain design related assumptions directly from the PDD. These organizations may establish additional assumptions for their individual activities, but such assumptions are not to conflict with the assumptions in the PDD. The PDD will serve as the primary link between the technical requirements captured in the SDDs and the design requirements captured in US Department of Energy (DOE) documents. The approved PDD is placed under Level 3 baseline control by the CRWMS Management and Operating Contractor (M and O) and the following portions of the PDD constitute the Technical Design Baseline for the MGR: the design characteristics listed in Table 1-1, the MGR Architecture (Section 4.1), the Technical Requirements (Section 5), and the Controlled Project Assumptions (Section 6).

  11. Instrumental variable specifications and assumptions for longitudinal analysis of mental health cost offsets.

    PubMed

    O'Malley, A James

    2012-12-01

    Instrumental variables (IVs) enable causal estimates in observational studies to be obtained in the presence of unmeasured confounders. In practice, a diverse range of models and IV specifications can be brought to bear on a problem, particularly with longitudinal data where treatment effects can be estimated for various functions of current and past treatment. However, in practice the empirical consequences of different assumptions are seldom examined, despite the fact that IV analyses make strong assumptions that cannot be conclusively tested by the data. In this paper, we consider several longitudinal models and specifications of IVs. Methods are applied to data from a 7-year study of mental health costs of atypical and conventional antipsychotics whose purpose was to evaluate whether the newer and more expensive atypical antipsychotic medications lead to a reduction in overall mental health costs.
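
    A minimal two-stage least squares (2SLS) sketch on synthetic cross-sectional data, included only to make the generic IV idea concrete; it is not one of the longitudinal specifications examined in the paper, and all variable names and coefficients are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 5000
      u = rng.normal(size=n)                       # unmeasured confounder
      z = rng.normal(size=n)                       # instrument: shifts treatment, not the outcome directly
      treat = 0.8 * z + 0.5 * u + rng.normal(size=n)
      cost = 2.0 * treat - 1.0 * u + rng.normal(size=n)   # true causal effect of treatment = 2.0

      # Stage 1: project treatment onto the instrument (plus intercept)
      Z = np.column_stack([np.ones(n), z])
      treat_hat = Z @ np.linalg.lstsq(Z, treat, rcond=None)[0]

      # Stage 2: regress the outcome on the fitted treatment
      X = np.column_stack([np.ones(n), treat_hat])
      beta_iv = np.linalg.lstsq(X, cost, rcond=None)[0]

      # Naive OLS for comparison (biased by the confounder u)
      X_ols = np.column_stack([np.ones(n), treat])
      beta_ols = np.linalg.lstsq(X_ols, cost, rcond=None)[0]
      print("IV estimate ~", beta_iv[1], " OLS estimate ~", beta_ols[1])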

  12. Some observations on the use of discriminant analysis in ecology

    USGS Publications Warehouse

    Williams, B.K.

    1983-01-01

    The application of discriminant analysis in ecological investigations is discussed. The appropriate statistical assumptions for discriminant analysis are illustrated, and both classification and group separation approaches are outlined. Three assumptions that are crucial in ecological studies are discussed at length, and the consequences of their violation are developed. These assumptions are: equality of dispersions, identifiability of prior probabilities, and precise and accurate estimation of means and dispersions. The use of discriminant functions for purposes of interpreting ecological relationships is also discussed. It is suggested that the common practice of imputing ecological 'meaning' to the signs and magnitudes of coefficients be replaced by an assessment of 'structure coefficients.' Finally, the potential and limitations of representation of data in canonical space are considered, and some cautionary points are made concerning ecological interpretation of patterns in canonical space.
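
    The structure-coefficient recommendation can be made concrete with a short sketch using scikit-learn's linear discriminant analysis on synthetic data: the correlation of each original variable with the canonical scores is reported alongside the raw coefficients. Group means and sample sizes are hypothetical.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(3)
      n_per_group = 200
      g1 = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), n_per_group)
      g2 = rng.multivariate_normal([1.5, 0.5, 0.0], np.eye(3), n_per_group)
      X = np.vstack([g1, g2])
      y = np.repeat([0, 1], n_per_group)

      lda = LinearDiscriminantAnalysis()
      scores = lda.fit(X, y).transform(X)      # canonical scores (one axis for two groups)

      # Structure coefficients: correlation of each variable with the canonical axis
      structure = np.array([np.corrcoef(X[:, j], scores[:, 0])[0, 1] for j in range(X.shape[1])])
      print("raw discriminant coefficients:", lda.coef_.ravel())
      print("structure coefficients:       ", structure)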

  13. Influence of dipolar interactions on the angular-dependent coercivity of nickel nanocylinders

    NASA Astrophysics Data System (ADS)

    Bender, P.; Krämer, F.; Tschöpe, A.; Birringer, R.

    2015-04-01

    In this study the influence of dipolar interactions on the orientation-dependent magnetization behavior of an ensemble of single-domain nickel nanorods was investigated. The rods were synthesized by electrodeposition of nickel into porous alumina templates. Some of the rods were released from the oxide and embedded in gelatine hydrogels (ferrogel) at a sufficiently large average interparticle distance to suppress dipolar interactions. By comparing the orientation-dependent hystereses of the two ensembles in the template and the gel-matrix it could be shown that the dipolar interactions in the template considerably alter the functional form of the angular-dependent coercivity. Analysis of the magnetization curves for an angle of 60° between the rod-axes and the field revealed a significantly reduced coercivity of the template compared to the ferrogel, which could be directly attributed to a stray field induced magnetization reversal of a steadily increasing number of rods with increasing field strength. The magnetization curve of the template could be approximated by a weighted linear superposition of the hysteresis branches of the ferrogel. The magnetization reversal process of the rods was investigated by analyzing the angular-dependent coercivity of the non-interacting nanorods. Comparison of the functional form with analytical models and micromagnetic simulations supported the assumption of a localized magnetization reversal. Additionally, it could be shown that the nucleation field of rods with diameters in the range 18-29 nm tends to increase with increasing diameter.

  14. Determining informative priors for cognitive models.

    PubMed

    Lee, Michael D; Vanpaemel, Wolf

    2018-02-01

    The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
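
    A minimal sketch of the vague-versus-informative contrast, assuming a deliberately simple hypothetical retention model: a single recall probability with binomial data, Beta priors, and a grid-based posterior.

      import numpy as np
      from scipy import stats

      # Hypothetical data: 7 items recalled out of 10 at some retention interval
      k, n = 7, 10
      theta = np.linspace(0.001, 0.999, 999)
      likelihood = stats.binom.pmf(k, n, theta)

      vague_prior = stats.beta.pdf(theta, 1, 1)          # flat: every retention level equally plausible
      informative_prior = stats.beta.pdf(theta, 8, 4)    # encodes prior knowledge of moderate-to-high retention

      def posterior(prior):
          post = prior * likelihood
          return post / post.sum()                       # normalize on the grid

      post_vague = posterior(vague_prior)
      post_informative = posterior(informative_prior)
      print("posterior means:", (theta * post_vague).sum(), (theta * post_informative).sum())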

  15. SU-E-T-293: Simplifying Assumption for Determining Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R; Cheung, A; Anderson, R

    Purpose: Scp(mlc,jaw) is a two-dimensional function of collimator field size and effective field size. Conventionally, Scp(mlc,jaw) is treated as separable into components Sc(jaw) and Sp(mlc). Scp(mlc=jaw) is measured in phantom and Sc(jaw) is measured in air, with Sp=Scp/Sc. Ideally, Sc and Sp would be able to predict measured values of Scp(mlc,jaw) for all combinations of mlc and jaw. However, ideal Sc and Sp functions do not exist, and a measured two-dimensional Scp dataset cannot be decomposed into a unique pair of one-dimensional functions. If the output functions Sc(jaw) and Sp(mlc) were equal to each other and thus each equal to Scp(mlc=jaw)^0.5, this condition would lead to a simpler measurement process by eliminating the need for in-air measurements. Without the distorting effect of the buildup cap, small-field measurement would be limited only by the dimensions of the detector and would thus be improved by this simplification of the output functions. The goal of the present study is to evaluate the assumption that Sc = Sp. Methods: For a 6 MV x-ray beam, Sc and Sp were determined both by the conventional method and as Scp(mlc=jaw)^0.5. Square-field benchmark values of Scp(mlc,jaw) were then measured across the range from 2×2 to 29×29. Both Sc and Sp functions were then evaluated as to their ability to predict these measurements. Results: Both methods produced qualitatively similar results, with <4% error for all cases and >3% error in 1 case. The conventional method produced 2 cases with >2% error, while the square-root method produced only 1 such case. Conclusion: Though it would need to be validated for any specific beam to which it might be applied, under the conditions studied, the simplifying assumption that Sc = Sp is justified.
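
    A numeric sketch of the square-root simplification with made-up field sizes and output factors: matched-field Scp values are square-rooted to give Sc = Sp, and Scp(mlc, jaw) is then predicted as Sc(jaw)*Sp(mlc) by interpolation.

      import numpy as np

      # Placeholder values only; a real commissioning dataset would replace these
      side = np.array([2.0, 4.0, 6.0, 10.0, 20.0, 29.0])            # square field side (cm), mlc = jaw
      scp_matched = np.array([0.93, 0.96, 0.98, 1.00, 1.03, 1.05])  # hypothetical measured Scp(mlc=jaw)

      sc = sp = np.sqrt(scp_matched)    # simplifying assumption: Sc(x) = Sp(x) = Scp(x, x)^0.5

      def predict_scp(mlc, jaw):
          """Predicted output factor for an mlc-defined field inside a given jaw setting."""
          return np.interp(jaw, side, sc) * np.interp(mlc, side, sp)

      print(predict_scp(mlc=4.0, jaw=10.0))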

  16. Local Structure Theory for Cellular Automata.

    NASA Astrophysics Data System (ADS)

    Gutowitz, Howard Andrew

    The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
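
    The lowest rung of this hierarchy, the mean-field map for an elementary (radius-1, binary) cellular automaton, can be written down directly. The sketch below iterates that map for one rule; the order-n local structure operators themselves, which track block probabilities, are not implemented here.

      # Mean-field map for an elementary cellular automaton: assuming no correlations
      # between neighboring cells, the density p evolves by summing the probabilities
      # of all 3-cell neighborhoods that the rule maps to 1.
      def mean_field_map(rule_number, p):
          rule = [(rule_number >> i) & 1 for i in range(8)]   # output bit for neighborhood index i
          p_next = 0.0
          for idx in range(8):
              if rule[idx]:
                  ones = bin(idx).count("1")                  # number of 1-cells in the neighborhood
                  p_next += p**ones * (1 - p)**(3 - ones)
          return p_next

      # Iterate the map for rule 22 from an initial density of 0.3
      p = 0.3
      for _ in range(30):
          p = mean_field_map(22, p)
      print("mean-field fixed-point estimate for rule 22:", p)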

  17. Cognitions as determinants of (mal)adaptive emotions and emotionally intelligent behavior in an organizational context.

    PubMed

    Spörrle, Matthias; Welpe, Isabell M; Försterling, Friedrich

    2006-01-01

    This study applies the theoretical concepts of Rational Emotive Behavior Therapy (REBT; Ellis, 1962, 1994) to the analysis of functional and dysfunctional behaviour and emotions in the workplace and tests central assumptions of REBT in an organizational setting. We argue that Ellis' appraisal theory of emotion sheds light on some of the cognitive and emotional antecedents of emotional intelligence and emotionally intelligent behaviour. In an extension of REBT, we posit that adaptive emotions resulting from rational cognitions reflect more emotional intelligence than maladaptive emotions which result from irrational cognitions, because the former lead to functional behaviour. We hypothesize that semantically similar emotions (e.g. annoyance and rage) lead to different behavioural reactions and have a different functionality in an organizational context. The results of scenario experiments using organizational vignettes confirm the central assumptions of Ellis' appraisal theory and support our hypotheses of a correspondence between adaptive emotions and emotionally intelligent behaviour. Additionally, we find evidence that irrational job-related attitudes result in reduced work (but not life) satisfaction.

  18. Evaluating Equating Accuracy and Assumptions for Groups that Differ in Performance

    ERIC Educational Resources Information Center

    Powers, Sonya; Kolen, Michael J.

    2014-01-01

    Accurate equating results are essential when comparing examinee scores across exam forms. Previous research indicates that equating results may not be accurate when group differences are large. This study compared the equating results of frequency estimation, chained equipercentile, item response theory (IRT) true-score, and IRT observed-score…

  19. Learning in Equity-Oriented Scale-Making Projects

    ERIC Educational Resources Information Center

    Jurow, A. Susan; Shea, Molly

    2015-01-01

    This article examines how new forms of learning and expertise are made to become consequential in changing communities of practice. We build on notions of scale making to understand how particular relations between practices, technologies, and people become meaningful across spatial and temporal trajectories of social action. A key assumption of…

  20. The Use of Photography in Family Psychotherapy.

    ERIC Educational Resources Information Center

    Entin, Alan D.

    Photographs and family albums are helpful in marriage and family psychotherapy to aid in the understanding of family processes, relationship patterns, goals, expectations, values, traditions, and ideals. Based on the assumption that a photograph is a form of communication, photography can be used to: (1) examine typical family picture-taking…
