Science.gov

Sample records for empirically based theoretical

  1. Distributed optical fiber-based theoretical and empirical methods monitoring hydraulic engineering subjected to seepage velocity

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Tian, Shiguang; Cui, Shusheng; Yang, Meng; Wen, Zhiping; Xie, Wei

    2016-09-01

    In order to systematically investigate the general principles and methods of monitoring seepage velocity in hydraulic engineering, theoretical analysis and physical experiments were carried out based on distributed fiber-optic temperature sensing (DTS) technology. To analyze the coupling between the seepage field and the temperature field in embankment dam and dike engineering, a simplified model describing the relationship between the two fields was constructed. Different optical-fiber arrangement schemes and temperature-measurement approaches were applied to the model, and inversion analysis was further used. On this basis, a theoretical method for monitoring seepage velocity in hydraulic engineering was proposed. A new concept, the effective thermal conductivity, was introduced by analogy with the thermal conductivity coefficient in the transient hot-wire method. This quantity reflects the combined influence of heat conduction and seepage, and it proved to be a promising basis for an empirical method of monitoring seepage velocity in hydraulic engineering.
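    The transient hot-wire analogy behind the effective thermal conductivity can be sketched numerically: after an initial transient, the temperature rise of a heated fiber grows linearly in ln(t), and the slope of that line yields the conductivity. A minimal illustration with assumed values for the heating power and conductivity (not taken from the paper):

```python
import numpy as np

# Transient hot-wire relation: the temperature rise of a line heat source
# follows dT(t) ~ (q / (4*pi*lam)) * ln(t) + C, so the effective thermal
# conductivity lam can be recovered from the slope of dT versus ln(t).
# Higher seepage velocity increases the apparent lam.
q = 10.0          # heating power per unit fiber length, W/m (assumed)
lam_true = 2.5    # effective thermal conductivity, W/(m*K) (assumed)
t = np.linspace(10.0, 600.0, 60)                      # measurement times, s
dT = q / (4.0 * np.pi * lam_true) * np.log(t) + 0.3   # synthetic warming curve

slope, _ = np.polyfit(np.log(t), dT, 1)   # linear fit in ln(t)
lam_est = q / (4.0 * np.pi * slope)       # invert the slope for lambda
print(round(lam_est, 3))                  # prints 2.5 on noise-free data
```

    In a field deployment the warming curve would come from DTS readings along a heated fiber section, with noise, so the fit would be restricted to the late-time linear regime.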

  2. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence.

    PubMed

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-10-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education.

  3. Theoretical and empirical bases for dialect-neutral language assessment: contributions from theoretical and applied linguistics to communication disorders.

    PubMed

    Pearson, Barbara Zurer

    2004-02-01

    Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) syntax, semantics, and phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups.

  4. Outcome (Competency) Based Education: An Exploration of Its Origins, Theoretical Basis, and Empirical Evidence

    ERIC Educational Resources Information Center

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…

  5. Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process

    ERIC Educational Resources Information Center

    Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.

    2014-01-01

    In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…

  6. Reading Comprehension to 1970: Its Theoretical and Empirical Bases, and Its Implementation in Secondary Professional Textbooks, Instructional Materials, and Tests.

    ERIC Educational Resources Information Center

    Harker, William John

    This study was designed: (1) to determine current concepts of reading comprehension deriving from experimental investigations and theoretical statements, and (2) to establish whether these concepts are represented consistently in current secondary professional reading textbooks, instructional materials, and published tests. Current knowledge of…

  7. Theoretical modeling of stream potholes based upon empirical observations from the Orange River, Republic of South Africa

    NASA Astrophysics Data System (ADS)

    Springer, Gregory S.; Tooth, Stephen; Wohl, Ellen E.

    2006-12-01

    Potholes carved into streambeds can be important components of channel incision, but they have received little quantitative attention. Here empirical evidence is presented from three sites along the Orange River, Republic of South Africa that demonstrates that the pothole dimensions of radius and depth are strongly correlated using a simple power law. Where radius is the dependent variable, the exponent of the power law describes the rate of increase in radius with increasing depth. Erosion within potholes is complexly related to erosion on the adjacent bed. Erosion efficiencies within small, hemispherical potholes must be high if the potholes are to survive in the face of bed translation (incision). As potholes deepen, however, the necessary efficiencies decline rapidly. Increasing concavity associated with growth imposes stricter constraints; comparatively deep potholes must erode orders of magnitude larger volumes of substrate than shallower potholes in the face of bed retreat. Hemispherical potholes are eventually converted to cylindrical potholes, the geometries of which favor enlargement while they are small. Geometric models constructed using the power law show unambiguously that more substrate is eroded by volume from cylindrical pothole walls during growth than from cylindrical pothole floors. Grinders thus play a secondary role to suspended sediment entrained within the vortices that occur in potholes. Continued growth leads to coalescence with other potholes or destruction through block detachment depending on local geology. The combination of geology and erosion mechanisms may determine whether a strath or inner channel develops as a consequence of the process.
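    The radius-depth scaling described above can be illustrated with a short fit: taking logarithms turns r = a·d^b into a straight line whose slope is the power-law exponent. The coefficients and depth values below are synthetic stand-ins, not the Orange River measurements:

```python
import numpy as np

# Fit the pothole scaling relation r = a * d**b in log-log space,
# where the slope of the fitted line is the exponent b.
a_true, b_true = 0.8, 0.6                          # assumed coefficients
depth = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 4.0])   # pothole depths, m
radius = a_true * depth**b_true                    # noise-free synthetic radii

b_est, log_a_est = np.polyfit(np.log(depth), np.log(radius), 1)
a_est = np.exp(log_a_est)
print(round(a_est, 3), round(b_est, 3))            # prints 0.8 0.6
```

    With field data the scatter about the line would be nonzero, and the exponent estimate would carry a confidence interval; the exactness here is only because the synthetic radii are noise-free.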

  8. Semivolatile organic compounds in homes: strategies for efficient and systematic exposure measurement based on empirical and theoretical factors.

    PubMed

    Dodson, Robin E; Camann, David E; Morello-Frosch, Rachel; Brody, Julia G; Rudel, Ruthann A

    2015-01-01

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs): phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R2 = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority.
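    The air-to-dust prediction can be sketched as a log-log regression. The concentrations below are invented for illustration, not the California measurements; the point is only to show how a fit statistic like the reported R2 = 0.80 is computed:

```python
import numpy as np

# Regress log10(dust concentration) on log10(air concentration), as an
# air-dust partitioning analysis would, and compute R^2 for the fit.
air = np.array([0.5, 1.2, 3.0, 8.0, 20.0, 55.0])             # ng/m^3 (assumed)
dust = np.array([40.0, 95.0, 260.0, 600.0, 1500.0, 4200.0])  # ng/g (assumed)

slope, intercept = np.polyfit(np.log10(air), np.log10(dust), 1)
pred = slope * np.log10(air) + intercept
ss_res = np.sum((np.log10(dust) - pred) ** 2)
ss_tot = np.sum((np.log10(dust) - np.log10(dust).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(slope, 2), round(r2, 2))
```

    A chemical-specific model would also bring in the octanol-air partition coefficient (Koa) rather than a single pooled regression across all SVOCs.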

  10. Semivolatile Organic Compounds in Homes: Strategies for Efficient and Systematic Exposure Measurement Based on Empirical and Theoretical Factors

    PubMed Central

    2014-01-01

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)—phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R2 = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority. PMID:25488487

  11. Empirical and theoretical analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan

    structures evolve on a similar timescale to individual level transmission, we investigated the process of transmission through a model population comprising of social groups which follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power law test algorithm, we have developed a fast testing procedure using parallel computation.
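    The power-law test mentioned at the end follows, in its widely used form, a maximum-likelihood fit of the exponent plus a Kolmogorov-Smirnov distance between the data and the fitted model. A serial sketch of those two steps is below (the abstract's contribution is a parallel implementation, and the full procedure also scans x_min and bootstraps a p-value, neither of which is shown here):

```python
import numpy as np

# Continuous power-law MLE and KS distance for a sample with known x_min.
rng = np.random.default_rng(0)
x_min, alpha_true, n = 1.0, 2.5, 5000
u = rng.random(n)
x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))       # MLE for the exponent
xs = np.sort(x)
ecdf = np.arange(1, n + 1) / n                        # empirical CDF
model_cdf = 1.0 - (xs / x_min) ** (1.0 - alpha_hat)   # fitted power-law CDF
ks = np.max(np.abs(ecdf - model_cdf))                 # KS goodness-of-fit
print(round(alpha_hat, 2), round(ks, 3))
```

    The KS statistic is what the bootstrap stage of the full test compares against synthetic samples; parallelizing that bootstrap is the natural target for speedup.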

  12. The learning evaluation: a theoretical and empirical exploration.

    PubMed

    Edelenbos, Jurian; van Buuren, Arwin

    2005-12-01

    In this article, the authors theoretically and empirically explore the concept of learning evaluation. They shed light on the positioning of the learning evaluation amid scholarly work on evaluations. Moreover, they describe the learning evaluation in practice in the Netherlands by going into a specific project called the Stimulation Program on Citizen and Environment. The theoretical and empirical quest gives insights into the problems with and possibilities of the learning evaluation. They think that their experiences can help the further development of theory about learning evaluation as well as aid in the practice of such evaluations.

  13. Competence and Drug Use: Theoretical Frameworks, Empirical Evidence and Measurement.

    ERIC Educational Resources Information Center

    Lindenberg, Cathy Strachan; Solorzano, Rosa; Kelley, Maureen; Darrow, Vicki; Gendrop, Sylvia C.; Strickland, Ora

    1998-01-01

    Discusses the Social Stress Model of Substance Abuse. Summarizes theoretical and conceptual formulations for the construct of competence, reviews empirical evidence for the association of competence with drug use, and describes the preliminary development of a multiscale instrument designed to assess drug-protective competence among low-income…

  14. Physical Violence between Siblings: A Theoretical and Empirical Analysis

    ERIC Educational Resources Information Center

    Hoffman, Kristi L.; Kiecolt, K. Jill; Edwards, John N.

    2005-01-01

    This study develops and tests a theoretical model to explain sibling violence based on the feminist, conflict, and social learning theoretical perspectives and research in psychology and sociology. A multivariate analysis of data from 651 young adults generally supports hypotheses from all three theoretical perspectives. Males with brothers have…

  15. Advanced evolutionary phases in globular clusters: Empirical and theoretical constraints

    NASA Astrophysics Data System (ADS)

    Bono, G.

    We present empirical and theoretical constraints for advanced evolutionary phases in Globular Clusters. In particular, we focus our attention on the central helium burning phases (Horizontal Branch) and on the white dwarf cooling sequence. We introduce the canonical evolutionary scenario and discuss new possible routes which can provide firm constraints on several open problems. Finally, we briefly outline new predicted near-infrared evolutionary features of the white dwarf cooling sequences which can be adopted to constrain their evolutionary properties.

  16. Empirical and theoretical bacterial diversity in four Arizona soils.

    PubMed

    Dunbar, John; Barns, Susan M; Ticknor, Lawrence O; Kuske, Cheryl R

    2002-06-01

    Understanding patterns of biodiversity in microbial communities is severely constrained by the difficulty of adequately sampling these complex systems. We illustrate the problem with empirical data from small surveys (200-member 16S rRNA gene clone libraries) of four bacterial soil communities from two locations in Arizona. Among the four surveys, nearly 500 species-level groups (Dunbar et al., Appl. Environ. Microbiol. 65:1662-1669, 1999) and 21 bacterial divisions were documented, including four new candidate divisions provisionally designated SC1, SC2, SC3, and SC4. We devised a simple approach to constructing theoretical null models of bacterial species abundance. These null models provide, for the first time, detailed descriptions of soil bacterial community structure that can be used to guide experimental design. Models based on a lognormal distribution were consistent with the observed sizes of the four communities and the richness of the clone surveys. Predictions from the models showed that the species richness of small surveys from complex communities is reproducible, whereas the species composition is not. By using the models, we can now estimate the required survey scale to document specified fractions of community diversity. For example, documentation of half the species in each model community would require surveys of 16,284 to 44,000 individuals. However, quantitative comparisons of half the species in two communities would require surveys at least 10-fold larger for each community. PMID:12039765
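    The null-model idea can be sketched by drawing a lognormal species-abundance distribution and simulating repeated small clone surveys from it. The community size and lognormal parameters below are assumptions for illustration, not the fitted Arizona values:

```python
import numpy as np

# Draw a lognormal species-abundance distribution, then simulate small
# surveys and count how many distinct species each survey captures.
rng = np.random.default_rng(1)
n_species = 4000
abundance = rng.lognormal(mean=0.0, sigma=2.0, size=n_species)
p = abundance / abundance.sum()            # relative species abundances

def survey(sample_size):
    counts = rng.multinomial(sample_size, p)
    return int(np.count_nonzero(counts))   # species observed in the survey

rich_200 = [survey(200) for _ in range(5)]
print(rich_200)  # richness of repeated 200-clone surveys is similar
```

    Repeating this with larger sample sizes is how one reads off the survey scale needed to capture a target fraction of the community's species; richness across replicate small surveys is stable even though the species lists differ.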

  17. Kinetics of solute adsorption at solid/solution interfaces: a theoretical development of the empirical pseudo-first and pseudo-second order kinetic rate equations, based on applying the statistical rate theory of interfacial transport.

    PubMed

    Rudzinski, Wladyslaw; Plazinski, Wojciech

    2006-08-24

    For practical applications of solid/solution adsorption processes, the kinetics of these processes is at least as essential as their features at equilibrium. Meanwhile, the general understanding of this kinetics and its corresponding theoretical description are far behind the understanding and the level of theoretical interpretation of adsorption equilibria in these systems. The Lagergren empirical equation, proposed at the end of the 19th century to describe the kinetics of solute sorption at solid/solution interfaces, has been the most widely used kinetic equation until now. This equation has also been called the pseudo-first order kinetic equation because it was intuitively associated with the model of one-site occupancy adsorption kinetics governed by the rate of surface reaction. More recently, its generalization for two-site-occupancy adsorption was proposed and called the pseudo-second-order kinetic equation. However, the general use and the wide applicability of these empirical equations for more than a century have not resulted in a corresponding fundamental search for their theoretical origin. Here the first theoretical development of these equations is proposed, based on applying the new fundamental approach to the kinetics of interfacial transport called the Statistical Rate Theory. It is shown that these empirical equations are simplified forms of a more general equation developed here, for the case when the adsorption kinetics is governed by the rate of surface reactions. The features of that general equation are shown by presenting exhaustive model investigations, and the applicability of that equation is tested by presenting a quantitative analysis of some experimental data reported in the literature.
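    For reference, the two empirical rate laws discussed here are usually written in the following standard integrated forms, evaluated below with illustrative parameter values (not from the paper):

```python
import numpy as np

# Standard integrated forms of the two empirical sorption rate laws:
#   pseudo-first order (Lagergren):  q(t) = q_e * (1 - exp(-k1 * t))
#   pseudo-second order:             q(t) = q_e**2 * k2 * t / (1 + q_e * k2 * t)
# q_e is the equilibrium uptake; k1, k2 are rate constants (values assumed).
q_e, k1, k2 = 1.0, 0.05, 0.08
t = np.linspace(0.0, 120.0, 7)

q_pfo = q_e * (1.0 - np.exp(-k1 * t))             # exponential approach
q_pso = (q_e**2 * k2 * t) / (1.0 + q_e * k2 * t)  # hyperbolic approach

# Both curves approach q_e, but the pseudo-second-order form has a slower,
# hyperbolic tail at long times.
print(np.round(q_pfo[-1], 3), np.round(q_pso[-1], 3))
```

    In practice the pseudo-second-order form is often fitted in its linearized version, t/q(t) = 1/(k2*q_e**2) + t/q_e, which is a straight line in t.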

  18. What Is a Reference Book? A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Bates, Marcia J.

    1986-01-01

    Provides a definition of reference books based on their organizational structure and describes an empirical study which was conducted in three libraries to identify types of book organization and determine their frequency in reference departments and stack collections. The data are analyzed and shown to support the definition. (EM)

  19. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
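    The linear impulse-response framework can be sketched as a convolution of a geomagnetic index history with a decaying response kernel, producing a storm-time enhancement ratio above the quiet-time baseline. The exponential kernel and the ap series below are assumptions for illustration, not the published STORM-E coefficients:

```python
import numpy as np

# Storm-time ratio modeled as baseline (1.0) plus the convolution of a
# geomagnetic index history with an impulse-response function g(tau).
dt = 3.0                             # hours between ap samples
tau, gain = 12.0, 0.002              # assumed decay time and coupling strength
lags = np.arange(0.0, 48.0, dt)
g = gain * np.exp(-lags / tau)       # assumed impulse-response kernel

ap = np.zeros(40)
ap[10:14] = 150.0                    # an isolated 12-hour storm pulse
ratio = 1.0 + np.convolve(ap, g)[:ap.size] * dt

# The ratio peaks near the end of the forcing pulse, then relaxes back
# toward the quiet-time value of 1.0 on the timescale tau.
print(round(ratio.max(), 2), int(np.argmax(ratio)))
```

    Fitting, rather than assuming, the kernel against the SABER storm-to-quiet VER ratios is the step the abstract describes.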

  20. Collective behavior in animal groups: theoretical models and empirical studies

    PubMed Central

    Giardina, Irene

    2008-01-01

    Collective phenomena in animal groups have attracted much attention in recent years, becoming one of the hottest topics in ethology. There are various reasons for this. On the one hand, animal grouping provides a paradigmatic example of self-organization, where collective behavior emerges in the absence of centralized control. The mechanism of group formation, where local rules for the individuals lead to a coherent global state, is very general and transcends the detailed nature of its components. In this respect, collective animal behavior is a subject of great interdisciplinary interest. On the other hand, there are several important issues related to the biological function of grouping and its evolutionary success. Research in this field boasts a number of theoretical models but far fewer empirical results to compare them with. For this reason, even if the general mechanisms through which self-organization is achieved are qualitatively well understood, a quantitative test of the models' assumptions is still lacking. New analyses of large groups, which require sophisticated technological procedures, can provide the necessary empirical data. PMID:19404431

  1. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    In recent decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory on the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising a more secure theoretical basis for contemporary psychoanalytic practice. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation. PMID:27500705

  2. Transitive inference in non-human animals: an empirical and theoretical analysis.

    PubMed

    Vasconcelos, Marco

    2008-07-01

    Transitive inference has long been considered one of the hallmarks of human deductive reasoning. Recent reports of transitive-like behaviors in non-human animals have prompted a flourishing empirical and theoretical search for the mechanism(s) that may mediate this ability in non-humans. In this paper, I begin by describing the transitive inference tasks customarily used with non-human animals and then review the empirical findings. Transitive inference has been demonstrated in a wide variety of species, and the signature effects that usually accompany transitive inference in humans (the serial position effect and the symbolic distance effect) have also been found in non-humans. I then critically analyze the most prominent models of this ability in non-human animals. Some models are cognitive, proposing for instance that animals use the rules of formal logic or form mental representations of the premises to solve the task; others are based on associative mechanisms such as value transfer and the history of reinforcement and non-reinforcement. Overall, I argue that the reinforcement-based models are in a much better empirical and theoretical position. Hence, transitive inference in non-human animals should be considered a property of reinforcement history rather than of inferential processes. I conclude by shedding some light on some promising lines of research.
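    The value-transfer account mentioned among the associative models can be sketched deterministically: each stimulus carries a direct value from its reinforcement rate across the training pairs, plus a fraction of its pair-mates' direct values. The transfer fraction and value assignments below are illustrative assumptions, not parameters from any reviewed study:

```python
# Training series A+B-, B+C-, C+D-, D+E-: A is always rewarded, E never,
# and B, C, D are rewarded on half their trials (their direct values).
direct = {"A": 1.0, "B": 0.5, "C": 0.5, "D": 0.5, "E": 0.0}
pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
theta = 0.2  # assumed fraction of a pair-mate's value that transfers

value = dict(direct)
for winner, loser in pairs:
    value[loser] += theta * direct[winner]   # transfer from rewarded mate
    value[winner] += theta * direct[loser]   # transfer from unrewarded mate

# B inherits value from high-valued A while D inherits from middling C,
# reproducing the signature B > D preference on the novel BD test pair.
print(value["B"] > value["D"], round(value["B"], 2), round(value["D"], 2))
# prints: True 0.8 0.6
```

    The point of the model is that this ordering emerges from associative bookkeeping alone, with no representation of the premise series.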

  3. Ranking Academic Departments: Empirical Findings and a Theoretical Perspective.

    ERIC Educational Resources Information Center

    Drew, David E.; Karpf, Ronald

    1981-01-01

    The history of evaluations of academic departments through peer review rankings and subsequent attempts to identify empirical correlates of the ratings are reviewed. Findings indicate that American Council on Education rankings can be predicted by the departmental rate of publication in highly cited journals. (Author/MLW)

  4. Integrative Behavioral Couple Therapy: Theoretical Background, Empirical Research, and Dissemination.

    PubMed

    Roddy, McKenzie K; Nowlan, Kathryn M; Doss, Brian D; Christensen, Andrew

    2016-09-01

    Integrative Behavioral Couple Therapy (IBCT), developed by Drs. Andrew Christensen and Neil Jacobson, builds off the tradition of behavioral couple therapy by including acceptance strategies as key components of treatment. Results from a large randomized clinical trial of IBCT indicate that it yields large and significant gains in relationship satisfaction. Furthermore, these benefits have been shown to persist for at least 5 years after treatment for the average couple. Not only does IBCT positively impact relationship constructs such as satisfaction and communication, but the benefits of therapy extend to individual, co-parenting, and child functioning. Moreover, IBCT has been shown to operate through the putative mechanisms of improvements in emotional acceptance, behavior change, and communication. IBCT was chosen for nationwide training and dissemination through the Veteran Affairs Medical Centers. Furthermore, the principles of IBCT have been translated into a web-based intervention for distressed couples, OurRelationship.com. IBCT is continuing to evolve and grow as research and technologies allow for continued evaluation and dissemination of this well-supported theoretical model.

  6. Alternative Information Theoretic Measures of Television Messages: An Empirical Test.

    ERIC Educational Resources Information Center

    Danowski, James A.

    This research examines two information theoretic measures of media exposure within the same sample of respondents and examines their relative strengths in predicting self-reported aggression. The first measure is the form entropy (DYNUFAM) index of Watt and Krull, which assesses the structural and organizational properties of specific television…

  7. The ascent of man: Theoretical and empirical evidence for blatant dehumanization.

    PubMed

    Kteily, Nour; Bruneau, Emile; Waytz, Adam; Cotterill, Sarah

    2015-11-01

    Dehumanization is a central concept in the study of intergroup relations. Yet although theoretical and methodological advances in subtle, "everyday" dehumanization have progressed rapidly, blatant dehumanization remains understudied. The present research attempts to refocus theoretical and empirical attention on blatant dehumanization, examining when and why it provides explanatory power beyond subtle dehumanization. To accomplish this, we introduce and validate a blatant measure of dehumanization based on the popular depiction of evolutionary progress in the "Ascent of Man." We compare blatant dehumanization to established conceptualizations of subtle and implicit dehumanization, including infrahumanization, perceptions of human nature and human uniqueness, and implicit associations between ingroup-outgroup and human-animal concepts. Across 7 studies conducted in 3 countries, we demonstrate that blatant dehumanization is (a) more strongly associated with individual differences in support for hierarchy than subtle or implicit dehumanization, (b) uniquely predictive of numerous consequential attitudes and behaviors toward multiple outgroup targets, (c) predictive above and beyond prejudice, and (d) reliable over time. Finally, we show that blatant, but not subtle, dehumanization spikes immediately after incidents of real intergroup violence and strongly predicts support for aggressive actions like torture and retaliatory violence (after the Boston Marathon bombings and Woolwich attacks in England). This research extends theory on the role of dehumanization in intergroup relations and intergroup conflict and provides an intuitive, validated empirical tool to reliably measure blatant dehumanization. PMID:26121523

  8. Adaptive evolution: evaluating empirical support for theoretical predictions.

    PubMed

    Olson-Manning, Carrie F; Wagner, Maggie R; Mitchell-Olds, Thomas

    2012-12-01

    Adaptive evolution is shaped by the interaction of population genetics, natural selection and underlying network and biochemical constraints. Variation created by mutation, the raw material for evolutionary change, is translated into phenotypes by flux through metabolic pathways and by the topography and dynamics of molecular networks. Finally, the retention of genetic variation and the efficacy of selection depend on population genetics and demographic history. Emergent high-throughput experimental methods and sequencing technologies allow us to gather more evidence and to move beyond the theory in different systems and populations. Here we review the extent to which recent evidence supports long-established theoretical principles of adaptation.

  9. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-priced contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-priced contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal, which indicates that the current approach is still better than the alternative.

  10. Submarine gas hydrate estimation: Theoretical and empirical approaches

    SciTech Connect

    Ginsburg, G.D.; Soloviev, V.A.

    1995-12-01

    The published submarine gas hydrate resource estimates are based on the concepts of their continuous extent over large areas and depth intervals and/or regionally high hydrate concentrations in sediments. The observational data conflict with these concepts. At present such estimates cannot be made to an accuracy better than an order of magnitude. According to observational data, the amount of methane in shallow subbottom (seepage-associated) gas-hydrate accumulations is estimated at 10¹⁴ m³ STP, and in deep-seated hydrates at 10¹⁵ m³. From a genetic standpoint, the gas hydrate potential can, for the time being, only be assessed as far less than 10¹⁷ m³, because the rates of the related hydrogeological and geochemical processes have not been adequately studied.

  11. Social Experiences with Peers and High School Graduation: A Review of Theoretical and Empirical Research

    ERIC Educational Resources Information Center

    Veronneau, Marie-Helene; Vitaro, Frank

    2007-01-01

    This article reviews theoretical and empirical work on the relations between child and adolescent peer experiences and high school graduation. First, the different developmental models that guide research in this domain will be explained. Then, descriptions of peer experiences at the group level (peer acceptance/rejection, victimisation, and crowd…

  12. Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces

    ERIC Educational Resources Information Center

    Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong

    2010-01-01

    Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…

  13. A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test

    ERIC Educational Resources Information Center

    Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.

    2012-01-01

    Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…

  14. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  15. Empirically Based Play Interventions for Children

    ERIC Educational Resources Information Center

    Reddy, Linda A., Ed.; Files-Hall, Tara M., Ed.; Schaefer, Charles E., Ed.

    2005-01-01

    "Empirically Based Play Interventions for Children" is a compilation of innovative, well-designed play interventions, presented for the first time in one text. Play therapy is the oldest and most popular form of child therapy in clinical practice and is widely considered by practitioners to be uniquely responsive to children's developmental needs.…

  16. Color and psychological functioning: a review of theoretical and empirical work.

    PubMed

    Elliot, Andrew J

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application. PMID:25883578

  17. Color and psychological functioning: a review of theoretical and empirical work

    PubMed Central

    Elliot, Andrew J.

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application. PMID:25883578

  18. Quantifying heterogeneity attributable to polythetic diagnostic criteria: theoretical framework and empirical application.

    PubMed

    Olbert, Charles M; Gala, Gary J; Tupler, Larry A

    2014-05-01

    Heterogeneity within psychiatric disorders is both theoretically and practically problematic: For many disorders, it is possible for 2 individuals to share very few or even no symptoms in common yet share the same diagnosis. Polythetic diagnostic criteria have long been recognized to contribute to this heterogeneity, yet no unified theoretical understanding of the coherence of symptom criteria sets currently exists. A general framework for analyzing the logical and mathematical structure, coherence, and diversity of Diagnostic and Statistical Manual diagnostic categories (DSM-5 and DSM-IV-TR) is proposed, drawing from combinatorial mathematics, set theory, and information theory. Theoretical application of this framework to 18 diagnostic categories indicates that in most categories, 2 individuals with the same diagnosis may share no symptoms in common, and that any 2 theoretically possible symptom combinations will share on average less than half their symptoms. Application of this framework to 2 large empirical datasets indicates that patients who meet symptom criteria for major depressive disorder and posttraumatic stress disorder tend to share approximately three-fifths of symptoms in common. For both disorders in each of the datasets, pairs of individuals who shared no common symptoms were observed. Any 2 individuals with either diagnosis were unlikely to exhibit identical symptomatology. The theoretical and empirical results stemming from this approach have substantive implications for etiological research into, and measurement of, psychiatric disorders.
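    The combinatorial point in this abstract, that a polythetic "at least k of n" rule admits many symptom profiles with little mutual overlap, can be illustrated with a short enumeration. This is a hedged sketch, not the authors' actual framework: the "5 of 9" rule below is a simplification of the DSM major depression criteria (which additionally require a cardinal symptom), and the shared-symptom proportion is computed here as a Jaccard-style ratio.

    ```python
    from itertools import combinations

    def qualifying_sets(n_symptoms, threshold):
        """All symptom subsets meeting a polythetic 'at least k of n' rule."""
        syms = range(n_symptoms)
        return [frozenset(c)
                for k in range(threshold, n_symptoms + 1)
                for c in combinations(syms, k)]

    def overlap_stats(sets):
        """Minimum shared-symptom count and mean shared proportion
        (|A & B| / |A | B|) over all pairs of qualifying profiles."""
        props, min_shared = [], None
        for a, b in combinations(sets, 2):
            shared = len(a & b)
            props.append(shared / len(a | b))
            min_shared = shared if min_shared is None else min(min_shared, shared)
        return min_shared, sum(props) / len(props)

    # A DSM-style rule: at least 5 of 9 criteria (simplified from major depression).
    sets_5of9 = qualifying_sets(9, 5)
    min_shared, mean_prop = overlap_stats(sets_5of9)
    print(len(sets_5of9))   # 256 distinct qualifying symptom profiles
    print(min_shared)       # 1: under 5-of-9, any two profiles share a symptom
    ```

    Under a looser rule (e.g. 4 of 9), the same enumeration yields pairs of qualifying profiles with no symptoms in common, mirroring the abstract's point for disorders with lower thresholds.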

  19. Measuring health lifestyles in a comparative analysis: theoretical issues and empirical findings.

    PubMed

    Abel, T

    1991-01-01

    The concept of lifestyle bears great potential for research in medical sociology. Yet, weaknesses in current methods have restrained lifestyle research from realizing its full potentials. The present focus is on the links between theoretical conceptions and their empirical application. The paper divides into two parts. The first part provides a discussion of basic theoretical and methodological issues. In particular selected lines of thought from Max Weber are presented and their usefulness in providing a theoretical frame of reference for health lifestyle research is outlined. Next, a theory guided definition of the subject matter is introduced and basic problems in empirical applications of theoretical lifestyle concepts are discussed. In its second part the paper presents findings from comparative lifestyle analyses. Data from the U.S. and West Germany are utilized to explore issues of measurement equivalence and theoretical validity. Factor analyses indicate high conceptual equivalence for new measures of health lifestyle dimensions in both the U.S. and West Germany. Divisive cluster analyses detect three distinct lifestyle groups in both nations. Implications for future lifestyle research are discussed.

  20. A review of the nurtured heart approach to parenting: evaluation of its theoretical and empirical foundations.

    PubMed

    Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E

    2013-09-01

    The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of any children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically. PMID:24033240

  1. Dignity in the care of older people – a review of the theoretical and empirical literature

    PubMed Central

    Gallagher, Ann; Li, Sarah; Wainwright, Paul; Jones, Ian Rees; Lee, Diana

    2008-01-01

    Background Dignity has become a central concern in UK health policy in relation to older and vulnerable people. The empirical and theoretical literature relating to dignity is extensive and as likely to confound and confuse as to clarify the meaning of dignity for nurses in practice. The aim of this paper is critically to examine the literature and to address the following questions: What does dignity mean? What promotes and diminishes dignity? And how might dignity be operationalised in the care of older people? This paper critically reviews the theoretical and empirical literature relating to dignity and clarifies the meaning and implications of dignity in relation to the care of older people. If nurses are to provide dignified care, clarification is an essential first step. Methods This is a review article, critically examining papers reporting theoretical perspectives and empirical studies relating to dignity. The following databases were searched: Assia, BHI, CINAHL, Social Services Abstracts, IBSS, Web of Knowledge Social Sciences Citation Index and Arts & Humanities Citation Index, together with the location of books and chapters in the philosophy literature. An analytical approach was adopted to the publications reviewed, focusing on the objectives of the review. Results and discussion We review a range of theoretical and empirical accounts of dignity and identify key dignity promoting factors evident in the literature, including staff attitudes and behaviour; environment; culture of care; and the performance of specific care activities. Although there is scope to learn more about cultural aspects of dignity we know a good deal about dignity in care in general terms. Conclusion We argue that what is required is to provide sufficient support and education to help nurses understand dignity and adequate resources to operationalise dignity in their everyday practice. Using the themes identified from our review we offer proposals for the direction of future research. PMID:18620561

  2. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    PubMed

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

    The drying kinetics of thyme were analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. Additionally, the dependence of the parameters of each empirical model on the drying temperature was determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
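    The reported Arrhenius temperature dependence can be checked numerically: anchoring the effective diffusivity at the reported 40°C value and extrapolating with the reported activation energy roughly reproduces the reported 70°C value. A minimal sketch; the function name and the choice of the 40°C endpoint as reference are illustrative, not the authors' procedure.

    ```python
    import math

    R = 8.314     # J/(mol*K), universal gas constant
    Ea = 49.42e3  # J/mol, activation energy reported in the abstract

    def arrhenius_scale(d_ref, t_ref_c, t_c):
        """Scale an effective diffusivity from t_ref_c to t_c (both Celsius)
        using the Arrhenius relationship D = D0 * exp(-Ea / (R*T))."""
        t_ref, t = t_ref_c + 273.15, t_c + 273.15
        return d_ref * math.exp(-Ea / R * (1.0 / t - 1.0 / t_ref))

    d40 = 3.68e-5                           # s^-1, reported lower value at 40 C
    d70 = arrhenius_scale(d40, 40.0, 70.0)  # extrapolate to 70 C
    print(f"{d70:.2e}")  # ~1.9e-4 s^-1, close to the reported 2.12e-4 s^-1
    ```

    The ~10% gap between the extrapolated and reported upper values is consistent with the activation energy having been fitted across all temperatures rather than from the two endpoints alone.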

  3. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    PubMed

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

    The drying kinetics of thyme were analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. Additionally, the dependence of the parameters of each empirical model on the drying temperature was determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%. PMID:23733820

  4. Uncovering curvilinear relationships between conscientiousness and job performance: how theoretically appropriate measurement makes an empirical difference.

    PubMed

    Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M

    2014-07-01

    The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the two. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from two different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models show mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed. PMID:24188394

  5. Why It Is Hard to Find Genes Associated With Social Science Traits: Theoretical and Empirical Considerations

    PubMed Central

    Lee, James J.; Benjamin, Daniel J.; Beauchamp, Jonathan P.; Glaeser, Edward L.; Borst, Gregoire; Pinker, Steven; Laibson, David I.

    2013-01-01

    Objectives. We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. Methods. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher’s geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Results. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. Conclusions. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies. PMID:23927501
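    The core argument above, that loci with tiny individual effects are essentially undetectable at candidate-gene sample sizes, can be illustrated with a standard power approximation for a 1-df association test. This is a hedged sketch: the effect size, sample sizes, and normal approximation are illustrative assumptions, not figures from the study.

    ```python
    from statistics import NormalDist

    _nd = NormalDist()

    def gwas_power(n, r2, alpha=5e-8):
        """Approximate power to detect a variant explaining a fraction r2 of
        trait variance in a sample of size n, via the normal approximation to
        a 1-df association test at genome-wide significance alpha."""
        z_crit = _nd.inv_cdf(1.0 - alpha / 2.0)  # two-sided critical value
        ncp = (n * r2) ** 0.5                    # approximate noncentrality
        return _nd.cdf(ncp - z_crit) + _nd.cdf(-ncp - z_crit)

    # A locus explaining 0.05% of variance, plausible for behavioral traits.
    p_small = gwas_power(200, 0.0005)      # candidate-gene-scale sample
    p_large = gwas_power(100_000, 0.0005)  # biobank-scale sample
    print(f"{p_small:.1e}")  # effectively zero power at n = 200
    print(f"{p_large:.2f}")  # high power at n = 100,000
    ```

    The contrast motivates the abstract's conclusion that allying with personal genomics, with its potential for very large samples, is needed to detect such architectures.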

  6. Empirical and Theoretical Aspects of Generation and Transfer of Information in a Neuromagnetic Source Network

    PubMed Central

    Vakorin, Vasily A.; Mišić, Bratislav; Krakovska, Olga; McIntosh, Anthony Randal

    2011-01-01

    Variability in source dynamics across the sources in an activated network may be indicative of how information is processed within a network. Information-theoretic tools allow one not only to characterize local brain dynamics but also to describe interactions between distributed brain activity. This study follows such a framework and explores the relations between signal variability and asymmetry in mutual interdependencies in a data-driven pipeline of non-linear analysis of neuromagnetic sources reconstructed from human magnetoencephalographic (MEG) data collected during a face recognition task. Asymmetry in non-linear interdependencies in the network was analyzed using transfer entropy, which quantifies predictive information transfer between the sources. Variability of the source activity was estimated using multi-scale entropy, quantifying the rate at which information is generated. The empirical results are supported by an analysis of synthetic data based on the dynamics of coupled systems with time delay in coupling. We found that the amount of information transferred from one source to another was correlated with the difference in variability between the dynamics of these two sources, with the directionality of net information transfer depending on the time scale at which the sample entropy was computed. The results based on synthetic data suggest that both time delay and strength of coupling can contribute to the relations between variability of brain signals and information transfer between them. Our findings support the previous attempts to characterize functional organization of the activated brain, based on a combination of non-linear dynamics and temporal features of brain connectivity, such as time delay. PMID:22131968
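    Transfer entropy as described in this record can be sketched in its simplest discrete form: binary signals and a history length of 1, far simpler than the MEG source analysis in the study. The synthetic coupling below (y copies x with one step of lag, 80% of the time) is an illustrative assumption chosen so that the directed x → y transfer is visible.

    ```python
    import math
    import random
    from collections import Counter

    def transfer_entropy(src, dst):
        """Discrete transfer entropy TE(src -> dst) in bits, history length 1:
        how much src[t] improves prediction of dst[t+1] beyond dst[t] alone."""
        triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))  # (y_{t+1}, y_t, x_t)
        pairs_yy = Counter(zip(dst[1:], dst[:-1]))           # (y_{t+1}, y_t)
        pairs_yx = Counter(zip(dst[:-1], src[:-1]))          # (y_t, x_t)
        singles_y = Counter(dst[:-1])                        # y_t
        n = len(dst) - 1
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_cond_full = c / pairs_yx[(y0, x0)]             # p(y1 | y0, x0)
            p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
            te += (c / n) * math.log2(p_cond_full / p_cond_self)
        return te

    random.seed(1)
    x = [random.randint(0, 1) for _ in range(20000)]
    # y follows x with one step of lag 80% of the time: directed coupling x -> y
    y = [0] + [xi if random.random() < 0.8 else 1 - xi for xi in x[:-1]]
    te_xy = transfer_entropy(x, y)
    te_yx = transfer_entropy(y, x)
    print(round(te_xy, 3))  # substantial information transfer x -> y
    print(round(te_yx, 3))  # near zero in the reverse direction
    ```

    The asymmetry between the two estimates is the quantity the study exploits: net directed information transfer between sources, here recovered from a known coupling direction.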

  7. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological model of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research. PMID:26515326

  8. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological model of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.

  9. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants. PMID:26086438

  10. SAGE II/Umkehr ozone comparisons and aerosols effects: An empirical and theoretical study. Final report

    SciTech Connect

    Newchurch, M.

    1997-09-15

    The objectives of this research were to: (1) examine empirically the aerosol effect on Umkehr ozone profiles using SAGE II aerosol and ozone data; (2) examine theoretically the aerosol effect on Umkehr ozone profiles; (3) examine the differences between SAGE II ozone profiles and both old- and new-format Umkehr ozone profiles for ozone-trend information; (4) reexamine SAGE I-Umkehr ozone differences with the most recent version of SAGE I data; and (5) contribute to the SAGE II science team.

  11. A Theoretical and Empirical Comparison of Three Approaches to Achievement Testing.

    ERIC Educational Resources Information Center

    Haladyna, Tom; Roid, Gale

    Three approaches to the construction of achievement tests are compared: construct, operational, and empirical. The construct approach is based upon classical test theory and measures an abstract representation of the instructional objectives. The operational approach specifies instructional intent through instructional objectives, facet design,…

  12. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  13. Common liability to addiction and “gateway hypothesis”: Theoretical, empirical and evolutionary perspective

    PubMed Central

    Vanyukov, Michael M.; Tarter, Ralph E.; Kirillova, Galina P.; Kirisci, Levent; Reynolds, Maureen D.; Kreek, Mary Jeanne; Conway, Kevin P.; Maher, Brion S.; Iacono, William G.; Bierut, Laura; Neale, Michael C.; Clark, Duncan B.; Ridenour, Ty A.

    2013-01-01

    Background: Two competing concepts address the development of involvement with psychoactive substances: the “gateway hypothesis” (GH) and common liability to addiction (CLA). Method: The literature on theoretical foundations and empirical findings related to both concepts is reviewed. Results: The data suggest that drug use initiation sequencing, the core GH element, is variable and opportunistic rather than uniform and developmentally deterministic. The association between risks for use of different substances, if any, can be more readily explained by common underpinnings than by specific staging. In contrast, the CLA concept is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions. This commonality has an identifiable neurobiological substrate and plausible evolutionary explanations. Conclusions: Whereas the “gateway” hypothesis does not specify mechanistic connections between “stages” and does not extend to the risks for addictions, the concept of common liability to addictions incorporates sequencing of drug use initiation, extends to related addictions and their severity, provides a parsimonious explanation of substance use and addiction co-occurrence, and establishes a theoretical and empirical foundation for research in etiology, quantitative risk and severity measurement, and targeted non-drug-specific prevention and early intervention. PMID:22261179

  14. Perceived barriers to children's active commuting to school: a systematic review of empirical, methodological and theoretical evidence.

    PubMed

    Lu, Wenhua; McKyer, E Lisako J; Lee, Chanam; Goodson, Patricia; Ory, Marcia G; Wang, Suojin

    2014-11-18

    Active commuting to school (ACS) may increase children's daily physical activity and help them maintain a healthy weight. Previous studies have identified various perceived barriers related to children's ACS. However, it is not clear whether and how these studies were methodologically sound and theoretically grounded. The purpose of this review was to critically assess the current literature on perceived barriers to children's ACS and provide recommendations for future studies. Empirically based literature on perceived barriers to ACS was systematically searched from six databases. A methodological quality scale (MQS) and a theory utilization quality scale (TQS) were created based on previously established instruments and tailored for the current review. Among the 39 studies that met the inclusion criteria, 19 (48.7%) reported statistically significant perceived barriers to children's ACS. The methodological and theory utilization qualities of the reviewed studies varied, with MQS scores ranging from 7 to 20 (Mean =12.95, SD =2.95) and TQS scores from 1 to 7 (Mean =3.62, SD =1.74). A detailed appraisal of the literature suggests several empirical, methodological, and theoretical recommendations for future studies on perceived barriers to ACS. Empirically, increasing the diversity of study regions and samples should be a high priority, particularly in Asian and European countries and among rural residents; more prospective and intervention studies are needed to determine the causal mechanisms linking the perceived factors and ACS; and future researchers should include policy-related barriers in their inquiries. Methodologically, the conceptualization of ACS should be standardized, or at least well rationalized, in future studies to ensure the comparability of results; researchers' awareness needs to be raised to improve the methodological rigor of studies, especially in regard to appropriate statistical analysis techniques, control variable estimation

  15. Theoretical and empirical scale dependency of Z-R relationships: Evidence, impacts, and correction

    NASA Astrophysics Data System (ADS)

    Verrier, Sébastien; Barthès, Laurent; Mallet, Cécile

    2013-07-01

    Estimation of rainfall intensities from radar measurements relies to a large extent on power-law relationships between rain rates R and radar reflectivities Z, i.e., Z = a*R^b. These relationships are generally applied without regard to scale, which is questionable since the nonlinearity of these relations can lead to undesirable discrepancies when combined with scale aggregation. Since the parameters (a,b) are expected to be related to drop size distribution (DSD) properties, they are often derived at disdrometer scale rather than radar scale, which can lead to errors at the latter. We propose to investigate the statistical behavior of Z-R relationships across scales on both theoretical and empirical sides. Theoretically, it is shown that claimed multifractal properties of rainfall processes could constrain the parameters (a,b) such that the exponent b would be scale independent but the prefactor a would grow as a (slow) power law of time or space scale. In the empirical part (which may be read independently of the theoretical considerations), high-resolution disdrometer (Dual-Beam Spectropluviometer) data of rain rates and reflectivity factors are considered at various integration times in the range 15 s - 64 min. A variety of regression techniques is applied to Z-R scatterplots at all these time scales, establishing empirical evidence of behavior coherent with the theoretical considerations: a grows as a 0.1 power law of scale while b decreases slightly. The properties of a are suggested to be closely linked to inhomogeneities in the DSDs, since extensions of Z-R relationships involving (here, strongly nonconstant) normalization parameters of the DSDs seem to be more robust across scales. The scale dependence of simple Z = a*R^b relationships is advocated to be a possible source of overestimation of rainfall intensities or accumulations. Several ways of correcting such scaling biases (which can reach >15-20% in terms of relative error) are suggested.
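The aggregation effect described above can be reproduced in a few lines of synthetic-data code. A sketch under stated assumptions: rain rates drawn from a lognormal distribution (a common toy model of rainfall variability, not the paper's data) and the illustrative Marshall-Palmer-type values a = 200, b = 1.6. Because b > 1, Jensen's inequality guarantees that block-averaging R and Z separately inflates the effective prefactor at the coarse scale:

```python
import numpy as np

# Synthetic illustration of the scale dependence of the Z = a*R^b prefactor.
rng = np.random.default_rng(0)
a_fine, b = 200.0, 1.6          # illustrative disdrometer-scale parameters

# High-resolution rain rates (mm/h), lognormal to mimic rainfall variability
R_fine = rng.lognormal(mean=0.5, sigma=0.9, size=4096)
Z_fine = a_fine * R_fine ** b

# Aggregate both series by block-averaging (e.g., 15 s -> 4 min)
block = 16
R_coarse = R_fine.reshape(-1, block).mean(axis=1)
Z_coarse = Z_fine.reshape(-1, block).mean(axis=1)

# Effective prefactor at the coarse scale, holding b fixed:
# mean(R^b) >= mean(R)^b for b > 1 (Jensen), so a_eff >= a_fine in every block.
a_eff = Z_coarse / R_coarse ** b
```

Fitting a straight line to the coarse-scale log Z vs. log R scatter instead of reusing the fine-scale prefactor is one way to see the bias the abstract quantifies.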

  16. Issues and Controversies that Surround Recent Texts on Empirically Supported and Empirically Based Treatments

    ERIC Educational Resources Information Center

    Paul, Howard A.

    2004-01-01

    Since the 1993 APA task force of the Society of Clinical Psychology developed guidelines to apply data-based psychology to the identification of effective psychotherapy, there has been an increasing number of texts focusing on Empirically Based Psychotherapy and Empirically Supported Treatments. This manuscript examines recent key texts and…

  17. How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching.

    PubMed

    Wolbring, Tobias; Riordan, Patrick

    2016-05-01

    Plenty of studies show that the physical appearance of a person affects a variety of outcomes in everyday life. However, due to an incomplete theoretical explication and empirical problems in disentangling different beauty effects, it is unclear which mechanisms are at work. To clarify how beauty works we present explanations from evolutionary theory and expectation states theory and show where both perspectives differ and where interlinkage appears promising. Using students' evaluations of teaching we find observational and experimental evidence for the different causal pathways of physical attractiveness. First, independent raters strongly agree over the physical attractiveness of a person. Second, attractive instructors receive better student ratings. Third, students attend classes of attractive instructors more frequently - even after controlling for teaching quality. Fourth, we find no evidence that attractiveness effects become stronger if rater and ratee are of the opposite sex. Finally, the beauty premium turns into a penalty if an attractive instructor falls short of students' expectations. PMID:26973043

  18. Why autobiographical memories for traumatic and emotional events might differ: theoretical arguments and empirical evidence.

    PubMed

    Sotgiu, Igor; Rusconi, Maria Luisa

    2014-01-01

    The authors review five arguments supporting the hypothesis that memories for traumatic and nontraumatic emotional events should be considered as qualitatively different recollections. The first argument considers the objective features of traumatic and emotional events and their possible influence on the formation of memories for these events. The second argument assumes that traumatic memories are distinguished from emotional ones because trauma exposure is often associated with the development of psychological disorders involving memory disturbances. The third argument is that traumatic experiences are more likely than emotional experiences to be forgotten and recovered. The fourth argument concerns the possibility that emotional memories are socially shared more frequently than traumatic memories. A fifth argument suggests that trauma exposure may impair selected brain systems implicated in memory functions. Theoretical and empirical evidence supporting these claims is reviewed. In the conclusions, the authors illustrate future research directions and discuss some conceptual issues related to the definitions of traumatic event currently employed by memory researchers. PMID:25087317

  1. The adaptive evolution of virulence: a review of theoretical predictions and empirical tests.

    PubMed

    Cressler, Clayton E; McLeod, David V; Rozins, Carly; van den Hoogen, Josée; Day, Troy

    2016-06-01

    Why is it that some parasites cause high levels of host damage (i.e. virulence) whereas others are relatively benign? There are now numerous reviews of virulence evolution in the literature but it is nevertheless still difficult to find a comprehensive treatment of the theory and data on the subject that is easily accessible to non-specialists. Here we attempt to do so by distilling the vast theoretical literature on the topic into a set of relatively few robust predictions. We then provide a comprehensive assessment of the available empirical literature that tests these predictions. Our results show that there have been some notable successes in integrating theory and data but also that theory and empiricism in this field do not 'speak' to each other very well. We offer a few suggestions for how the connection between the two might be improved. PMID:26302775

  2. The Influence of Education and Socialization on Radicalization: An Exploration of Theoretical Presumptions and Empirical Research.

    PubMed

    Pels, Trees; de Ruyter, Doret J

    2012-06-01

    BACKGROUND AND OBJECTIVE: Research into radicalization does not pay much attention to education. This is remarkable, and possibly misses an important influence on the process of radicalization. Therefore this article sets out to explore the relation between education on the one hand and the onset or prevention of radicalization on the other. METHOD: This article is a theoretical literature review. It analyzed empirical studies, mainly from European countries, about the educational aims, content and style of Muslim parents and parents with (extreme) right-wing sympathies. RESULTS: Research examining similarity in right-wing sympathies between parents and children yields mixed results, but studies among adolescents point to a significant concordance. Research also showed that authoritarian parenting may play a significant role. Similar research among Muslim families was not found. While raising children with distrust and an authoritarian parenting style are prevalent, the impact on adolescents has not been investigated. The empirical literature we reviewed does not give sufficient evidence to conclude that a democratic ideal and an authoritative style of education are conducive to the development of a democratic attitude. CONCLUSION: There is a knowledge gap with regard to the influence of education on the onset or prevention of radicalization. Schools and families are underappreciated sources of informal social control and social capital, and therefore the gap should be closed. If there is a better understanding of the effect of education, policy as well as interventions can be developed to assist parents and teachers in preventing radicalization. PMID:22611328

  4. Optimized curvelet-based empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Wu, Renjie; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Recent years have seen immense improvement in the development of signal processing based on the Curvelet transform. The Curvelet transform provides a new multi-resolution representation. The frame elements of Curvelets exhibit higher directional sensitivity and anisotropy than Wavelets, multi-Wavelets, steerable pyramids, and so on. These features derive from the anisotropic notion of scaling. In practice, time series signal processing problems are often encountered, and time-frequency analysis methods have been studied to solve them. However, time-frequency analysis cannot always be trusted, and many new methods have been proposed. The Empirical Mode Decomposition (EMD) is one of them, and is widely used. EMD aims to decompose functions that are the superposition of a reasonably small number of components, well separated in the time-frequency plane, into their building blocks, where each component can be viewed as locally approximately harmonic. However, it cannot handle the directionality of high-dimensional data. A reallocated method of the Curvelet transform (optimized Curvelet-based EMD) is proposed in this paper. We introduce a definition for a class of functions that can be viewed as a superposition of a reasonably small number of approximately harmonic components in an optimized Curvelet family. We analyze this algorithm and demonstrate its results on data. The experimental results prove the effectiveness of our method.
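For readers unfamiliar with EMD, the classical "sifting" step it builds on can be sketched in a few lines. This is plain one-IMF sifting on a hypothetical two-tone signal, not the optimized Curvelet-based variant the paper proposes:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

# Minimal EMD sifting sketch: iteratively subtract the mean of the
# cubic-spline envelopes through local maxima and minima until the
# fastest intrinsic mode function (IMF) remains.
def sift(x, t, n_iter=10):
    h = x.copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break                                  # no oscillation left
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0              # remove the local mean
    return h

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 3 * t)  # fast + slow tones
imf1 = sift(x, t)      # approximates the 30 Hz component (away from the ends)
residue = x - imf1     # approximates the 3 Hz component
```

End effects from spline extrapolation are visible near t = 0 and t = 1; production EMD implementations add boundary handling, which is omitted here for brevity.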

  5. From the bench to modeling--R0 at the interface between empirical and theoretical approaches in epidemiology of environmentally transmitted infectious diseases.

    PubMed

    Ivanek, Renata; Lahodny, Glenn

    2015-02-01

    transmission rate of infection and the pathogen growth rate in the environment. Moreover, we identified experimental conditions for which the theoretical R0 predictions based on the hypotheses H2 and H3 differ greatly, which would assist their discrimination and conclusive validation against future empirical studies. Once a valid theoretical R0 is identified for Salmonella Typhimurium in mice, its generalizability to other host-pathogen-environment systems should be tested. The present study may serve as a template for integrated empirical and theoretical research of R0 in the epidemiology of ETIDs.

  6. Theoretical Foundations for Evidence-Based Health Informatics: Why? How?

    PubMed

    Scott, Philip J; Georgiou, Andrew; Hyppönen, Hannele; Craven, Catherine K; Rigby, Michael; Brender McNair, Jytte

    2016-01-01

    A scientific approach to health informatics requires sound theoretical foundations. Health informatics implementation would be more effective if evidence-based and guided by theories about what is likely to work in what circumstances. We report on a Medinfo 2015 workshop on this topic jointly organized by the EFMI Working Group on Assessment of Health Information Systems and the IMIA Working Group on Technology Assessment and Quality Development. We discuss the findings of the workshop and propose an approach to consolidate empirical knowledge into testable middle-range theories. PMID:27577457

  7. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    NASA Astrophysics Data System (ADS)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models have previously been proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions to soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms over a water activity range from 0.03 to 0.93, based on measured data for 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While, in general, all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters to clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinctly kaolinitic or smectitic clay mineralogy the predicted isotherms did not closely match the measurements.
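As an illustration of the kind of model fitting involved, the sketch below fits the GAB isotherm, one widely used sorption model (whether it is among the nine evaluated here is an assumption), to synthetic water-activity data over the study's 0.03-0.93 range by nonlinear least squares. The "true" parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: w(aw) = wm*C*K*aw / ((1-K*aw)*(1-K*aw+C*K*aw))."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.linspace(0.03, 0.93, 25)          # water activity range from the study
true = dict(wm=0.025, C=12.0, K=0.78)     # hypothetical soil parameters
rng = np.random.default_rng(1)
# synthetic "measurements": GAB curve with 2% multiplicative noise
w_obs = gab(aw, **true) * (1 + 0.02 * rng.standard_normal(aw.size))

popt, _ = curve_fit(gab, aw, w_obs, p0=[0.02, 10.0, 0.7])
wm_fit, C_fit, K_fit = popt               # recovered monolayer capacity etc.
```

The same pattern (model function plus `curve_fit`) applies to any of the physically based or empirical isotherm equations, differing only in the number of free parameters.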

  8. Solubility of caffeine from green tea in supercritical CO2: a theoretical and empirical approach.

    PubMed

    Gadkari, Pravin Vasantrao; Balaraman, Manohar

    2015-12-01

    Decaffeination of fresh green tea was carried out with supercritical CO2 in the presence of ethanol as co-solvent. The solubility of caffeine in supercritical CO2 varied from 44.19 × 10(-6) to 149.55 × 10(-6) (mole fraction) over a pressure and temperature range of 15 to 35 MPa and 313 to 333 K, respectively. The maximum solubility of caffeine was obtained at 25 MPa and 323 K. Experimental solubility data were correlated with the theoretical equation of state models Peng-Robinson (PR), Soave-Redlich-Kwong (SRK), and Redlich-Kwong (RK). The RK model regressed the experimental data with 15.52 % average absolute relative deviation (AARD). In contrast, the Gordillo empirical model regressed best to the experimental data, with only 0.96 % AARD. Under supercritical conditions, the solubility of caffeine in the tea matrix was lower than the solubility of pure caffeine. Further, the solubility of caffeine in supercritical CO2 was compared with the solubility of pure caffeine in conventional solvents; a maximum solubility of 90 × 10(-3) (mole fraction) was obtained with chloroform.
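The fit statistic quoted above, average absolute relative deviation, is straightforward to compute. A sketch with made-up solubility values (not the paper's data):

```python
# AARD = (100/N) * sum(|y_exp - y_calc| / y_exp), the % deviation metric
# used to compare equation-of-state and empirical solubility models.
def aard_percent(y_exp, y_calc):
    assert len(y_exp) == len(y_calc)
    n = len(y_exp)
    return 100.0 / n * sum(abs(e - c) / e for e, c in zip(y_exp, y_calc))

# Hypothetical mole-fraction solubilities: experimental vs. model-calculated
y_exp  = [44.19e-6, 80.0e-6, 149.55e-6]
y_calc = [40.0e-6, 85.0e-6, 160.0e-6]
score = aard_percent(y_exp, y_calc)   # ~7.6 % for these invented numbers
```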

  10. Empirical corroboration of an earlier theoretical resolution to the UV paradox of insect polarized skylight orientation.

    PubMed

    Wang, Xin; Gao, Jun; Fan, Zhiguo

    2014-02-01

    It is surprising that many insect species use only the ultraviolet (UV) component of polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations tried to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We took a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has its maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. By this, we corroborated empirically the soundness of the earlier computational resolution of the UV-sky-pol paradox. PMID:24402685

  11. Safety climate and injuries: an examination of theoretical and empirical relationships.

    PubMed

    Beus, Jeremy M; Payne, Stephanie C; Bergman, Mindy E; Arthur, Winfred

    2010-07-01

    Our purpose in this study was to meta-analytically address several theoretical and empirical issues regarding the relationships between safety climate and injuries. First, we distinguished between extant safety climate-->injury and injury-->safety climate relationships for both organizational and psychological safety climates. Second, we examined several potential moderators of these relationships. Meta-analyses revealed that injuries were more predictive of organizational safety climate than safety climate was predictive of injuries. Additionally, the injury-->safety climate relationship was stronger for organizational climate than for psychological climate. Moderator analyses revealed that the degree of content contamination in safety climate measures inflated effects, whereas measurement deficiency attenuated effects. Additionally, moderator analyses showed that as the time period over which injuries were assessed lengthened, the safety climate-->injury relationship was attenuated. Supplemental meta-analyses of specific safety climate dimensions also revealed that perceived management commitment to safety is the most robust predictor of occupational injuries. Contrary to expectations, the operationalization of injuries did not meaningfully moderate safety climate-injury relationships. Implications and recommendations for future research and practice are discussed.

  12. The complexities of defining optimal sleep: empirical and theoretical considerations with a special emphasis on children.

    PubMed

    Blunden, Sarah; Galland, Barbara

    2014-10-01

    The main aim of this paper is to consider relevant theoretical and empirical factors defining optimal sleep, and assess the relative importance of each in developing a working definition for, or guidelines about, optimal sleep, particularly in children. We consider whether optimal sleep is an issue of sleep quantity or of sleep quality. Sleep quantity is discussed in terms of duration, timing, variability and dose-response relationships. Sleep quality is explored in relation to continuity, sleepiness, sleep architecture and daytime behaviour. Potential limitations of sleep research in children are discussed, specifically the loss of research precision inherent in sleep deprivation protocols involving children. We discuss which outcomes are the most important to measure. We consider the notion that insufficient sleep may be a totally subjective finding, is impacted by the age of the reporter, driven by socio-cultural patterns and sleep-wake habits, and that, in some individuals, the driver for insufficient sleep can be viewed in terms of a cost-benefit relationship, curtailing sleep in order to perform better while awake. We conclude that defining optimal sleep is complex. The only method of capturing this elusive concept may be by somnotypology, taking into account duration, quality, age, gender, race, culture, the task at hand, and an individual's position in both sleep-alert and morningness-eveningness continuums. At the experimental level, a unified approach by researchers to establish standardized protocols to evaluate optimal sleep across paediatric age groups is required.

  13. Uncertainty in the multielemental quantification by total-reflection X-ray fluorescence: theoretical and empirical approximation.

    PubMed

    Fernández-Ruiz, R

    2008-11-15

    Nowadays, the subject of quality assurance of analytical results is acquiring more and more importance. This work presents a basic theoretical and empirical approximation to the expanded uncertainty associated with TXRF measurements. Two theoretical models have been proposed and compared systematically with the empirical expanded uncertainty obtained. The main consequences derived from this work are the following: theoretical model B explains, with a high degree of agreement, the empirical expanded uncertainties associated with TXRF measurements, while theoretical model A only partially explains the instrumental repeatability of the TXRF system. On the other hand, an unexpected U-shaped behavior was found for the empirical uncertainty in TXRF measurements, which may be due to several sources of uncertainty not considered, such as variations of the Compton background or the nonlinearity of the Si(Li) detector quantum efficiency. Additionally, it has been shown that the roughness and small geometrical variations of the sample depositions are the most important uncertainty sources in experimental TXRF measurements.
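Although the paper's models A and B are not reproduced here, the general GUM-style construction of an expanded uncertainty from independent components can be sketched as follows; the component values and their attribution are invented for illustration:

```python
import math

# Expanded uncertainty U = k * sqrt(sum(u_i^2)) for independent
# uncertainty components u_i (root-sum-of-squares combination),
# with coverage factor k = 2 for roughly 95 % coverage.
def expanded_uncertainty(components, k=2.0):
    return k * math.sqrt(sum(u * u for u in components))

# Hypothetical relative components (%): counting statistics,
# deposition roughness/geometry, calibration
u_components = [1.0, 2.5, 0.8]
U = expanded_uncertainty(u_components)   # ~5.6 % for these invented inputs
```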

  14. Multisystemic Therapy: An Empirically Supported, Home-Based Family Therapy Approach.

    ERIC Educational Resources Information Center

    Sheidow, Ashli J.; Woodford, Mark S.

    2003-01-01

    Multisystemic Therapy (MST) is a well-validated, evidence-based treatment for serious clinical problems presented by adolescents and their families. This article is an introduction to the MST approach and outlines key clinical features, describes the theoretical underpinnings, and discusses the empirical support for MST's effectiveness with a…

  15. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    ERIC Educational Resources Information Center

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  16. Calculation of theoretical and empirical nutrient N critical loads in the mixed conifer ecosystems of southern California.

    PubMed

    Breiner, Joan; Gimeno, Benjamin S; Fenn, Mark

    2007-01-01

    Edaphic, foliar, and hydrologic forest nutrient status indicators from 15 mixed conifer forest stands in the Sierra Nevada, San Gabriel Mountains, and San Bernardino National Forest were used to estimate empirical or theoretical critical loads (CL) for nitrogen (N) as a nutrient. Soil acidification response to N deposition was also evaluated. Robust empirical relationships were found relating N deposition to plant N uptake (N in foliage), N fertility (litter C/N ratio), and soil acidification. However, no consistent empirical CL were obtained when the thresholds for parameters indicative of N excess from other types of ecosystems were used. Similarly, the highest theoretical CL for nutrient N calculated using the simple mass balance steady state model (estimates ranging from 1.4 to 8.8 kg N/ha/year) was approximately half the empirically observed values. Further research is needed to derive the thresholds for indicators associated with the impairment of these mixed conifer forests exposed to chronic N deposition within a Mediterranean climate. Further development or parameterization of models for the calculation of theoretical critical loads suitable for these ecosystems will also be an important aspect of future critical loads research. PMID:17450298
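The simple mass balance (SMB) model referred to here computes a nutrient-N critical load as the sum of steady-state N sinks. A minimal sketch of that arithmetic (the input values are hypothetical, and full SMB formulations include further terms, such as denitrification, that are omitted here):

```python
def nutrient_n_critical_load(n_uptake, n_immobilisation, n_leaching_acceptable):
    # Simple mass balance, steady state: the critical load for nutrient N
    # is the deposition the ecosystem can absorb through net growth uptake,
    # net immobilisation, and acceptable leaching. All in kg N/ha/year.
    return n_uptake + n_immobilisation + n_leaching_acceptable

# Hypothetical sink terms for a conifer stand:
cl = nutrient_n_critical_load(n_uptake=4.0,
                              n_immobilisation=2.0,
                              n_leaching_acceptable=1.5)
```

Because each sink term is itself uncertain and climate-dependent, the resulting CL range can sit well below empirically derived thresholds, as the abstract reports.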

  17. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  18. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  19. Toward a theoretically based measurement model of the good life.

    PubMed

    Cheung, C K

    1997-06-01

    A theoretically based conceptualization of the good life should differentiate 4 dimensions: the hedonist good life, the dialectical good life, the humanist good life, and the formalist good life. These 4 dimensions incorporate previous fragmentary measures, such as life satisfaction, depression, work alienation, and marital satisfaction, to produce an integrative view. In the present study, 276 Hong Kong Chinese husbands and wives responded to a survey of 13 indicators for these 4 good life dimensions. Confirmatory hierarchical factor analysis showed that these indicators identified the 4 dimensions of the good life, which in turn converged to identify a second-order factor of the overall good life. The model demonstrates discriminant validity in that the first-order factors had high loadings on the overall good life factor despite being linked by a social desirability factor. Analysis further showed that the second-order factor model applied equally well to husbands and wives. Thus, the conceptualization appears to be theoretically and empirically adequate in incorporating previous conceptualizations of the good life. PMID:9168589

  20. Why Do People Need Self-Esteem? A Theoretical and Empirical Review

    ERIC Educational Resources Information Center

    Pyszczynski, Tom; Greenberg, Jeff; Solomon, Sheldon; Arndt, Jamie; Schimel, Jeff

    2004-01-01

    Terror management theory (TMT; J. Greenberg, T. Pyszczynski, & S. Solomon, 1986) posits that people are motivated to pursue positive self-evaluations because self-esteem provides a buffer against the omnipresent potential for anxiety engendered by the uniquely human awareness of mortality. Empirical evidence relevant to the theory is reviewed…

  1. Culminating Experience Empirical and Theoretical Research Projects, University of Tennessee at Chattanooga, Spring, 2005

    ERIC Educational Resources Information Center

    Watson, Sandy White, Ed.

    2005-01-01

    This document represents a sample collection of master's theses from the University of Tennessee at Chattanooga's Teacher Education Program, spring semester, 2005. The majority of these student researchers were simultaneously student teaching while writing their theses. Studies were empirical and conceptual in nature and demonstrate some ways in…

  2. Development of an Axiomatic Theory of Organization/Environment Interaction: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Ganey, Rodney F.

    The goal of this paper was to develop a theory of organization/environment interaction by examining the impact of perceived environmental uncertainty on organizational processes and on organizational goal attainment. It examines theories from the organizational environment literature and derives corollaries that are empirically tested using a data…

  3. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  4. Religious Identity Development of Adolescents in Religious Affiliated Schools. A Theoretical Foundation for Empirical Research

    ERIC Educational Resources Information Center

    Bertram-Troost, Gerdien D.; de Roos, Simone; Miedema, Siebren

    2006-01-01

    The question of how religiously affiliated schools for secondary education shape religious education, and what effects this education has on the religious identity development of pupils, is relevant at a time when the position of religiously affiliated schools is highly disputed. In earlier empirical research on religious identity development of…

  5. Empirical social-ecological system analysis: from theoretical framework to latent variable structural equation model.

    PubMed

    Asah, Stanley Tanyi

    2008-12-01

    The social-ecological system (SES) approach to natural resource management holds enormous promise towards achieving sustainability. Despite this promise, social-ecological interactions are complex and elusive; they require simplification to guide effective application of the SES approach. The complex, adaptive and place-specific nature of human-environment interactions impedes determination of state and trends in SES parameters of interest to managers and policy makers. Based on a rigorously developed systemic theoretical model, this paper integrates field observations, interviews, surveys, and latent variable modeling to illustrate the development of simplified and easily interpretable indicators of the state of, and trends in, relevant SES processes. Social-agricultural interactions in the Logone floodplain, in the Lake Chad basin, served as a case study. This approach is found to generate simplified determinants of the state of SESs, easily communicable across the array of stakeholders common in human-environment interactions. The approach proves to be useful for monitoring SESs, guiding interventions, and assessing the effectiveness of interventions. It incorporates real time responses to biophysical change in understanding coarse scale processes within which finer scales are embedded. This paper emphasizes the importance of merging quantitative and qualitative methods for effective monitoring and assessment of SESs.

  6. Experimental comparison of methods for simultaneous selection of two correlated traits in Tribolium : 1. Empirical and theoretical selection indexes.

    PubMed

    Campo, J L; Rodriguez, M C

    1985-11-01

    Two lines of Tribolium castaneum were selected in each of three replicates for egg laying between 7 and 11 days after adult emergence and for adult weight at 12 days, using theoretical (IT) and empirical (IP) index selection methods. Index coefficients were given empirically in the IP line and were adjusted in the successive generations of selection according to the results obtained in the previous ones. Highly repeatable selection responses in all replicates occurred in both lines for the aggregate genotype (egg laying plus adult weight) and for each individual trait. The IP line tended to increase slightly more than the IT line for aggregate genotype and egg laying, while the highest response in adult weight was obtained with the IT method. The two methods gave consistently different responses in each replicate. The expected results were that IT selection should not exceed IP selection for the aggregate genotype and egg laying, while theoretically the IT method should have been superior for increasing adult weight. Theoretical expectations for adult weight were fulfilled in practice. The IP method would be preferred in a practical sense because of its simplicity and freedom from the need for parameter estimation. PMID:24247344
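A theoretical selection index of the kind used for the IT line is classically the Smith-Hazel index, whose coefficients are b = P⁻¹Ga, where P is the phenotypic covariance matrix, G the genetic covariance matrix, and a the vector of economic weights. A minimal two-trait sketch (the matrices below are illustrative, not the Tribolium estimates):

```python
def smith_hazel_coefficients(P, G, a):
    # b = P^-1 * G * a for two traits, written out explicitly.
    # P: 2x2 phenotypic (co)variance matrix
    # G: 2x2 genetic (co)variance matrix
    # a: economic weights for the two traits
    (p11, p12), (p21, p22) = P
    det = p11 * p22 - p12 * p21
    # G * a
    g1 = G[0][0] * a[0] + G[0][1] * a[1]
    g2 = G[1][0] * a[0] + G[1][1] * a[1]
    # P^-1 * (G * a)
    b1 = (p22 * g1 - p12 * g2) / det
    b2 = (-p21 * g1 + p11 * g2) / det
    return [b1, b2]

# Illustrative parameters: trait 1 = egg laying, trait 2 = adult weight.
P = [[100.0, 10.0], [10.0, 25.0]]
G = [[30.0, 5.0], [5.0, 12.0]]
a = [1.0, 1.0]
b = smith_hazel_coefficients(P, G, a)
```

Candidates are then ranked on I = b1·x1 + b2·x2; the empirical IP approach instead starts from coefficients set by judgment and adjusts them between generations.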

  7. Public Disaster Communication and Child and Family Disaster Mental Health: a Review of Theoretical Frameworks and Empirical Evidence.

    PubMed

    Houston, J Brian; First, Jennifer; Spialek, Matthew L; Sorenson, Mary E; Koch, Megan

    2016-06-01

    Children have been identified as particularly vulnerable to psychological and behavioral difficulties following disaster. Public child and family disaster communication is one public health tool that can be utilized to promote coping/resilience and ameliorate maladaptive child reactions following an event. We conducted a review of the public disaster communication literature and identified three main functions of child and family disaster communication: fostering preparedness, providing psychoeducation, and conducting outreach. Our review also indicates that schools are a promising system for child and family disaster communication. We complete our review with three conclusions. First, theoretically, there appears to be a great opportunity for public disaster communication focused on child disaster reactions. Second, empirical research assessing the effects of public child and family disaster communication is essentially nonexistent. Third, despite the lack of empirical evidence in this area, there is opportunity for public child and family disaster communication efforts that address new domains. PMID:27086315

  8. Mechanisms of risk and resilience in military families: theoretical and empirical basis of a family-focused resilience enhancement program.

    PubMed

    Saltzman, William R; Lester, Patricia; Beardslee, William R; Layne, Christopher M; Woodward, Kirsten; Nash, William P

    2011-09-01

    Recent studies have confirmed that repeated wartime deployment of a parent exacts a toll on military children and families and that the quality and functionality of familial relations is linked to force preservation and readiness. As a result, family-centered care has increasingly become a priority across the military health system. FOCUS (Families OverComing Under Stress), a family-centered, resilience-enhancing program developed by a team at UCLA and Harvard Schools of Medicine, is a primary initiative in this movement. In a large-scale implementation project initiated by the Bureau of Navy Medicine, FOCUS has been delivered to thousands of Navy, Marine, Navy Special Warfare, Army, and Air Force families since 2008. This article describes the theoretical and empirical foundation and rationale for FOCUS, which is rooted in a broad conception of family resilience. We review the literature on family resilience, noting that an important next step in building a clinically useful theory of family resilience is to move beyond developing broad "shopping lists" of risk indicators by proposing specific mechanisms of risk and resilience. Based on the literature, we propose five primary risk mechanisms for military families and common negative "chain reaction" pathways through which they undermine the resilience of families contending with wartime deployments and parental injury. In addition, we propose specific mechanisms that mobilize and enhance resilience in military families and that comprise central features of the FOCUS Program. We describe these resilience-enhancing mechanisms in detail, followed by a discussion of the ways in which evaluation data from the program's first 2 years of operation supports the proposed model and the specified mechanisms of action. PMID:21655938

  9. Ecological risk and resilience perspective: a theoretical framework supporting evidence-based practice in schools.

    PubMed

    Powers, Joelle D

    2010-10-01

    Multidisciplinary school practitioners are clearly being called to use evidence-based practices from reputable sources such as their own professional organizations and federal agencies. In spite of this encouragement, most schools are not regularly employing empirically supported interventions. This paper further promotes the use of this approach by describing the theoretical support for evidence-based practice in schools. The ecological risk and resilience theoretical framework presented fills a gap in the literature and advocates for evidence-based practice in schools by illustrating how it can assist practitioners such as school social workers to better address problems associated with school failure. PMID:21082473

  11. Empirically Based Strategies for Preventing Juvenile Delinquency.

    PubMed

    Pardini, Dustin

    2016-04-01

    Juvenile crime is a serious public health problem that results in significant emotional and financial costs for victims and society. Using etiologic models as a guide, multiple interventions have been developed to target risk factors thought to perpetuate the emergence and persistence of delinquent behavior. Evidence suggests that the most effective interventions tend to have well-defined treatment protocols, focus on therapeutic approaches as opposed to external control techniques, and use multimodal cognitive-behavioral treatment strategies. Moving forward, there is a need to develop effective policies and procedures that promote the widespread adoption of evidence-based delinquency prevention practices across multiple settings. PMID:26980128

  12. Image-Based Empirical Modeling of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Adrian, Mark L.; Gallagher, D. L.

    2008-01-01

    A new suite of empirical models of plasmaspheric plasma based on remote, global images from the IMAGE EUV instrument is proposed for development. The purpose of these empirical models is to establish the statistical properties of the plasmasphere as a function of conditions. This suite of models will mark the first time the plasmaspheric plume is included in an empirical model. Development of these empirical plasmaspheric models will support synoptic studies (such as for wave propagation and growth, energetic particle loss through collisions and dust transport as influenced by charging) and serves as a benchmark against which physical models can be tested. The ability to know that a specific global density distribution occurs in response to specific magnetospheric and solar wind factors is a huge advantage over all previous in-situ based empirical models. The consequence of creating these new plasmaspheric models will be to provide much higher fidelity and much richer quantitative descriptions of the statistical properties of plasmaspheric plasma in the inner magnetosphere, whether that plasma is in the main body of the plasmasphere, nearby during recovery or in the plasmaspheric plume. Model products to be presented include statistical probabilities for being in the plasmasphere, near thermal He+ density boundaries and the complexity of its spatial structure.

  13. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    ERIC Educational Resources Information Center

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  14. Perceptual Organization in Schizophrenia Spectrum Disorders: Empirical Research and Theoretical Implications

    ERIC Educational Resources Information Center

    Uhlhaas, Peter J.; Silverstein, Steven M.

    2005-01-01

    The research into perceptual organization in schizophrenia spectrum disorders has found evidence for and against a perceptual organization deficit and has interpreted the data from within several different theoretical frameworks. A synthesis of this evidence, however, reveals that this body of work has produced reliable evidence for deficits in…

  15. Multiple Embedded Inequalities and Cultural Diversity in Educational Systems: A Theoretical and Empirical Exploration

    ERIC Educational Resources Information Center

    Verhoeven, Marie

    2011-01-01

    This article explores the social construction of cultural diversity in education, with a view to social justice. It examines how educational systems organize ethno-cultural difference and how this process contributes to inequalities. Theoretical resources are drawn from social philosophy as well as from recent developments in social organisation…

  16. Rural Schools, Social Capital and the Big Society: A Theoretical and Empirical Exposition

    ERIC Educational Resources Information Center

    Bagley, Carl; Hillyard, Sam

    2014-01-01

    The paper commences with a theoretical exposition of the current UK government's policy commitment to the idealised notion of the Big Society and the social capital currency underpinning its formation. The paper positions this debate in relation to the rural and adopts an ethnographically-informed methodological approach to provide an…

  17. The Role of Identity in Acculturation among Immigrant People: Theoretical Propositions, Empirical Questions, and Applied Recommendations

    ERIC Educational Resources Information Center

    Schwartz, Seth J.; Montgomery, Marilyn J.; Briones, Ervin

    2006-01-01

    The present paper advances theoretical propositions regarding the relationship between acculturation and identity. The most central thesis argued is that acculturation represents changes in cultural identity and that personal identity has the potential to "anchor" immigrant people during their transition to a new society. The article emphasizes…

  18. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling

    PubMed Central

    Groff, Elizabeth R.

    2014-01-01

    Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Method: Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Results: Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Conclusion: Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification. PMID:25419001
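As an illustration of the kind of agent-based model the article discusses, the sketch below is a toy routine-activity simulation (all parameters are invented): offenders, targets, and guardians occupy a torus grid, mobile agents random-walk, and a "crime" is counted whenever an offender lands on a target cell with no guardian present.

```python
import random

GRID = 20          # side length of a square torus grid (invented)
STEPS = 200        # simulated time steps
N_OFFENDERS, N_TARGETS, N_GUARDIANS = 10, 30, 15

def move(pos, rng):
    # One-cell random walk with wrap-around.
    x, y = pos
    dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
    return ((x + dx) % GRID, (y + dy) % GRID)

def simulate(seed=1):
    rng = random.Random(seed)
    place = lambda: (rng.randrange(GRID), rng.randrange(GRID))
    offenders = [place() for _ in range(N_OFFENDERS)]
    targets = {place() for _ in range(N_TARGETS)}   # static target sites
    guardians = [place() for _ in range(N_GUARDIANS)]
    crimes = 0
    for _ in range(STEPS):
        offenders = [move(p, rng) for p in offenders]
        guardians = [move(p, rng) for p in guardians]
        guarded = set(guardians)
        for o in offenders:
            # Routine activity theory: a motivated offender converges with
            # a suitable target in the absence of a capable guardian.
            if o in targets and o not in guarded:
                crimes += 1
    return crimes
```

Varying guardian density and re-running shows how quickly even a minimal ABM yields aggregate predictions that a theory can be tested against.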

  19. Responses to Commentaries on Advances in Empirically Based Assessment.

    ERIC Educational Resources Information Center

    McConaughy, Stephanie H.

    1993-01-01

    Author of article (this issue) describing research program to advance assessment of children's behavioral and emotional problems; presenting conceptual framework for multiaxial empirically based assessment; and summarizing research efforts to develop cross-informant scales for scoring parent, teacher, and self-reports responds to commentaries on…

  20. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    ERIC Educational Resources Information Center

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  1. Statistical learning as an individual ability: Theoretical perspectives and empirical evidence

    PubMed Central

    Siegelman, Noam; Frost, Ram

    2015-01-01

    Although the power of statistical learning (SL) in explaining a wide range of linguistic functions is gaining increasing support, relatively little research has focused on this theoretical construct from the perspective of individual differences. However, to be able to reliably link individual differences in a given ability such as language learning to individual differences in SL, three critical theoretical questions should be posed: Is SL a componential or unified ability? Is it nested within other general cognitive abilities? Is it a stable capacity of an individual? Following an initial mapping sentence outlining the possible dimensions of SL, we employed a battery of SL tasks in the visual and auditory modalities, using verbal and non-verbal stimuli, with adjacent and non-adjacent contingencies. SL tasks were administered along with general cognitive tasks in a within-subject design at two time points to explore our theoretical questions. We found that SL, as measured by some tasks, is a stable and reliable capacity of an individual. Moreover, we found SL to be independent of general cognitive abilities such as intelligence or working memory. However, SL is not a unified capacity, so that individual sensitivity to conditional probabilities is not uniform across modalities and stimuli. PMID:25821343
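The stability question posed here ("is SL a stable capacity of an individual?") is conventionally answered with test-retest reliability, the correlation between the same participants' task scores at the two time points. A minimal sketch with invented scores:

```python
def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical SL task scores for eight participants, time 1 vs. time 2.
t1 = [52, 61, 48, 70, 66, 55, 59, 63]
t2 = [50, 64, 47, 72, 63, 57, 58, 65]
r = pearson_r(t1, t2)   # high r = individual differences are stable
```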

  2. Theoretical Bases of Science Education Research.

    ERIC Educational Resources Information Center

    Good, Ronald; And Others

    This symposium examines the science education research enterprise from multiple theoretical perspectives. The first paper, "Contextual Constructivism: The Impact of Culture on the Learning and Teaching of Science" (William Cobern), focuses on broad issues of culture and how constructivism is affected by the context of culture. Culturally based…

  3. Colour in insect thermoregulation: empirical and theoretical tests in the colour-changing grasshopper, Kosciuscola tristis.

    PubMed

    Umbers, K D L; Herberstein, M E; Madin, J S

    2013-01-01

    Body colours can result in different internal body temperatures, but evidence for the biological significance of colour-induced temperature differences is inconsistent. We investigated the relationship between body colour and temperature in a model insect species that rapidly changes colour. We used an empirical approach and constructed a heat budget model to quantify whether a colour change from black to turquoise has a role in thermoregulation for the chameleon grasshopper (Kosciuscola tristis). Our study shows that colour change in K. tristis provides relatively small temperature differences that vary greatly with wind speed (0.55 °C at ms(-1) to 0.05 °C at 10 ms(-1)). The biological significance of this difference is unclear and we discuss the requirement for more studies that directly test hypotheses regarding the fitness effects of colour in manipulating body temperature. PMID:23108152

  5. [Very old age in an ageing society. Theoretical challenges, empirical problems and sociopolitical responsibilities].

    PubMed

    Motel-Klingebiel, A; Ziegelmann, J P; Wiest, M

    2013-01-01

    This paper considers very old age as a challenge for ageing theory, as an empirical problem, and as a field of responsibility for social policy, and it introduces the contributions of the special issue "Very old age in an ageing society". In particular, the need to (re-)integrate the life phases of young and old age is discussed from the perspective of social and behavioural ageing research. Although reaching very old age has become an increasingly normal life event, and the need for knowledge about it is growing, such knowledge is currently limited. It is above all the diversity and inequality within old and very old age, and the pathways into latest life, that need to be targeted. Finally, normative patterns and biographical outlines of this increasingly important phase of life need to be developed.

  6. Chronic Pain in a Couples Context: A Review and Integration of Theoretical Models and Empirical Evidence

    PubMed Central

    Leonard, Michelle T.; Cano, Annmarie; Johansen, Ayna B.

    2007-01-01

    Researchers have become increasingly interested in the social context of chronic pain conditions. The purpose of this article is to provide an integrated review of the evidence linking marital functioning with chronic pain outcomes including pain severity, physical disability, pain behaviors, and psychological distress. We first present an overview of existing models that identify an association between marital functioning and pain variables. We then review the empirical evidence for a relationship between pain variables and several marital functioning variables including marital satisfaction, spousal support, spouse responses to pain, and marital interaction. On the basis of the evidence, we present a working model of marital and pain variables, identify gaps in the literature, and offer recommendations for research and clinical work. Perspective: The authors provide a comprehensive review of the relationships between marital functioning and chronic pain variables to advance future research and help treatment providers understand marital processes in chronic pain. PMID:16750794

  7. Guiding Empirical and Theoretical Explorations of Organic Matter Decay By Synthesizing Temperature Responses of Enzyme Kinetics, Microbes, and Isotope Fluxes

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Ballantyne, F.; Lehmeier, C.; Min, K.

    2014-12-01

    Soil organic matter (SOM) transformation rates generally increase with temperature, but whether this is realized depends on soil-specific features. To develop predictive models applicable to all soils, we must understand two key, ubiquitous features of SOM transformation: the temperature sensitivity of myriad enzyme-substrate combinations and temperature responses of microbial physiology and metabolism, in isolation from soil-specific conditions. Predicting temperature responses of production of CO2 vs. biomass is also difficult due to soil-specific features: we cannot know the identity of active microbes nor the substrates they employ. We highlight how recent empirical advances describing SOM decay can help develop theoretical tools relevant across diverse spatial and temporal scales. At a molecular level, temperature effects on purified enzyme kinetics reveal distinct temperature sensitivities of decay of diverse SOM substrates. Such data help quantify the influence of microbial adaptations and edaphic conditions on decay, have permitted computation of the relative availability of carbon (C) and nitrogen (N) liberated upon decay, and can be used with recent theoretical advances to predict changes in mass specific respiration rates as microbes maintain biomass C:N with changing temperature. Enhancing system complexity, we can subject microbes to temperature changes while controlling growth rate and without altering substrate availability or identity of the active population, permitting calculation of variables typically inferred in soils: microbial C use efficiency (CUE) and isotopic discrimination during C transformations. Quantified declines in CUE with rising temperature are critical for constraining model CUE estimates, and known changes in δ13C of respired CO2 with temperature is useful for interpreting δ13C-CO2 at diverse scales. 
We suggest empirical studies important for advancing knowledge of how microbes respond to temperature, and ideas for theoretical
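The qualitative behavior the abstract describes, faster decay but a smaller fraction of carbon retained as biomass at higher temperature, can be illustrated with a minimal sketch. The Arrhenius form, the linear CUE decline, and every parameter value below are assumptions for illustration, not estimates from the study.

```python
import math

# Illustrative sketch only: an Arrhenius temperature response for enzymatic
# decay plus a linearly declining carbon use efficiency (CUE). All parameter
# values are assumptions, not estimates from the study.
R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def decay_rate(temp_c, ea=60e3, a=1.0e9):
    """Arrhenius rate constant; activation energy Ea and prefactor A are assumed."""
    return a * math.exp(-ea / (R_GAS * (temp_c + 273.15)))

def cue(temp_c, cue_ref=0.6, slope=-0.01, temp_ref=15.0):
    """CUE declining linearly with temperature (assumed functional form)."""
    return max(0.0, cue_ref + slope * (temp_c - temp_ref))

def respired_fraction(temp_c):
    """Fraction of decayed C respired as CO2 rather than allocated to biomass."""
    return 1.0 - cue(temp_c)

# Warming increases both the decay rate and the respired fraction of C.
for t in (5.0, 15.0, 25.0):
    print(f"{t:4.1f} C  rate={decay_rate(t):.3e}  respired={respired_fraction(t):.2f}")
```

Under these assumed parameters, warming from 5 to 25 degrees C raises both the rate of decay and the share of decayed carbon lost as CO2.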

  8. Pharmaceuticals, political money, and public policy: a theoretical and empirical agenda.

    PubMed

    Jorgensen, Paul D

    2013-01-01

    Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting. PMID:24088146

  9. Recombination rate variation and speciation: theoretical predictions and empirical results from rabbits and mice.

    PubMed

    Nachman, Michael W; Payseur, Bret A

    2012-02-01

    Recently diverged taxa may continue to exchange genes. A number of models of speciation with gene flow propose that the frequency of gene exchange will be lower in genomic regions of low recombination and that these regions will therefore be more differentiated. However, several population-genetic models that focus on selection at linked sites also predict greater differentiation in regions of low recombination simply as a result of faster sorting of ancestral alleles even in the absence of gene flow. Moreover, identifying the actual amount of gene flow from patterns of genetic variation is tricky, because both ancestral polymorphism and migration lead to shared variation between recently diverged taxa. New analytic methods have been developed to help distinguish ancestral polymorphism from migration. Along with a growing number of datasets of multi-locus DNA sequence variation, these methods have spawned a renewed interest in speciation models with gene flow. Here, we review both speciation and population-genetic models that make explicit predictions about how the rate of recombination influences patterns of genetic variation within and between species. We then compare those predictions with empirical data of DNA sequence variation in rabbits and mice. We find strong support for the prediction that genomic regions experiencing low levels of recombination are more differentiated. In most cases, reduced gene flow appears to contribute to the pattern, although disentangling the relative contribution of reduced gene flow and selection at linked sites remains a challenge. We suggest fruitful areas of research that might help distinguish between different models.

  11. Social-Emotional Well-Being and Resilience of Children in Early Childhood Settings--PERIK: An Empirically Based Observation Scale for Practitioners

    ERIC Educational Resources Information Center

    Mayr, Toni; Ulich, Michaela

    2009-01-01

    Compared with the traditional focus on developmental problems, research on positive development is relatively new. Empirical research in children's well-being has been scarce. The aim of this study was to develop a theoretically and empirically based instrument for practitioners to observe and assess preschool children's well-being in early…

  12. Interpersonal relatedness, self-definition, and their motivational orientation during adolescence: a theoretical and empirical integration.

    PubMed

    Shahar, Golan; Henrich, Christopher C; Blatt, Sidney J; Ryan, Richard; Little, Todd D

    2003-05-01

    The authors examined a theoretical model linking interpersonal relatedness and self-definition (S.J. Blatt, 1974), autonomous and controlled regulation (E. L. Deci & R. M. Ryan, 1985), and negative and positive life events in adolescence (N = 860). They hypothesized that motivational orientation would mediate the effects of interpersonal relatedness and self-definition on life events. Self-criticism, a maladaptive form of self-definition, predicted less positive events, whereas efficacy, an adaptive form of self-definition, predicted more positive events. These effects were fully mediated by the absence and presence, respectively, of autonomous motivation. Controlled motivation, predicted by self-criticism and maladaptive neediness, did not predict negative events. Results illustrate the centrality of protective, pleasure-related processes in adaptive adolescent development.

  13. UVCS Empirical Constraints on Theoretical Models of Solar Wind Source Regions

    NASA Astrophysics Data System (ADS)

    Kohl, J. L.; Cranmer, S. R.; Miralles, M. P.; Panasyuk, A.; Strachan, L.

    2007-12-01

    Spectroscopic observations from the Ultraviolet Coronagraph Spectrometer (UVCS) on the Solar and Heliospheric Observatory (SOHO) have resulted in empirical models of polar coronal holes, polar plumes, coronal jets and streamers. These findings have been used to make significant progress toward identifying and characterizing the physical processes that produce extended heating in the corona and accelerate fast and slow solar wind streams. The UVCS scientific observations, which began in April 1996 and continue at this writing, have provided determinations of proton and minor ion temperatures (including evidence for anisotropic microscopic velocity distributions in coronal holes and quiescent equatorial streamers), outflow velocities, and elemental abundances. The variations in these quantities over the solar cycle also have been determined. For example, observations of large polar coronal holes at different phases of the solar cycle indicate that line width is positively correlated with outflow speed and anti-correlated with electron density. This paper will review these results, and present new results from measurements taken as the current solar activity cycle approaches solar minimum. The results regarding preferential ion heating and acceleration of heavy ions (i.e., O5+) in polar coronal holes have contributed in a major way to the advances in understanding solar wind acceleration that have occurred during the past decade. It is important to verify and confirm the key features of these findings. Hence, the results from a new analysis of an expanded set of UVCS data from polar coronal holes at solar minimum by S. R. Cranmer, A. Panasyuk and J. L. Kohl will be presented. This work has been supported by the National Aeronautics and Space Administration (NASA) under Grants NNG06G188G and NNX07AL72G and NNX06AG95G to the Smithsonian Astrophysical Observatory.

  14. Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence

    PubMed Central

    Pugh, Jonathan; Kahane, Guy; Maslen, Hannah; Savulescu, Julian

    2016-01-01

    Abstract Background: There is a lack of empirical data on lay attitudes toward different sorts of deception in medicine. However, lay attitudes toward deception should be taken into account when we consider whether deception is ever permissible in a medical context. The objective of this study was to examine lay attitudes of U.S. citizens toward different sorts of deception across different medical contexts. Methods: A one-time online survey was administered to U.S. users of the Amazon “Mechanical Turk” website. Participants were asked to answer questions regarding a series of vignettes depicting different sorts of deception in medical care, as well as a question regarding their general attitudes toward truth-telling. Results: Of the 200 respondents, the majority found the use of placebos in different contexts to be acceptable following partial disclosure but found it to be unacceptable if it involved outright lying. Also, 55.5% of respondents supported the use of sham surgery in clinical research, although 55% claimed that it would be unacceptable to deceive patients in this research, even if this would improve the quality of the data from the study. Respondents supported fully informing patients about distressing medical information in different contexts, especially when the patient is suffering from a chronic condition. In addition, 42.5% of respondents believed that it is worse to deceive someone by providing the person with false information than it is to do so by giving the person true information that is likely to lead them to form a false belief, without telling them other important information that shows it to be false. However, 41.5% believed that the two methods of deception were morally equivalent. Conclusions: Respondents believed that some forms of deception were acceptable in some circumstances. While the majority of our respondents opposed outright lying in medical contexts, they were prepared to support partial disclosure and the use of

  15. Linking predator risk and uncertainty to adaptive forgetting: a theoretical framework and empirical test using tadpoles.

    PubMed

    Ferrari, Maud C O; Brown, Grant E; Bortolotti, Gary R; Chivers, Douglas P

    2010-07-22

Hundreds of studies have examined how prey animals assess their risk of predation. These studies work from the basic tenet that prey need to continually balance the conflicting demands of predator avoidance with activities such as foraging and reproduction. The information that animals gain regarding local predation risk is most often learned. Yet, the concept of 'memory' in the context of predation remains virtually unexplored. Here, our goal was (i) to determine if the memory window associated with predator recognition is fixed or flexible and, if it is flexible, (ii) to identify which factors affect the length of this window and in which ways. We performed an experiment on larval wood frogs, Rana sylvatica, to test whether the risk posed by, and the uncertainty associated with, the predator would affect the length of the tadpoles' memory window. We found that as the risk associated with the predator increased, tadpoles retained predator-related information for longer. Moreover, if the uncertainty about predator-related information increases, then prey use this information for a shorter period. We also present a theoretical framework aimed at highlighting both intrinsic and extrinsic factors that could affect the memory window of information use by prey individuals.

  16. Determining VA physician requirements through empirically based models.

    PubMed Central

    Lipscomb, J; Kilpatrick, K E; Lee, K L; Pieper, K S

    1995-01-01

OBJECTIVE: As part of a project to estimate physician requirements for the Department of Veterans Affairs, the Institute of Medicine (IOM) developed and tested empirically based models of physician staffing, by specialty, that could be applied to each VA facility. DATA SOURCE/STUDY SETTING: These analyses used selected data on all patient encounters and all facilities in VA's management information systems for FY 1989. STUDY DESIGN: Production functions (PFs), with patient workload dependent on physicians, other providers, and nonpersonnel factors, were estimated for each of 14 patient care areas in a VA medical center. Inverse production functions (IPFs), with physician staffing levels dependent on workload and other factors, were estimated for each of 11 specialty groupings. These models provide complementary approaches to deriving VA physician requirements for patient care and medical education. DATA COLLECTION/EXTRACTION METHODS: All data were assembled by VA and put in analyzable SAS data sets containing FY 1989 workload and staffing variables used in the PFs and IPFs. All statistical analyses reported here were conducted by the IOM. PRINCIPAL FINDINGS: Existing VA data can be used to develop statistically strong, clinically plausible, empirically based models for calculating physician requirements, by specialty. These models can (1) compare current physician staffing in a given setting with systemwide norms and (2) yield estimates of future staffing requirements conditional on future workload. CONCLUSIONS: Empirically based models can play an important role in determining VA physician staffing requirements. VA should test, evaluate, and revise these models on an ongoing basis. PMID:7860320
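An inverse production function of the kind described, staffing regressed on workload, can be sketched with ordinary least squares. The synthetic data, the linear form, and all coefficients below are invented for illustration; this is not the IOM specification.

```python
import numpy as np

# Minimal sketch of an "inverse production function": regress physician FTEs
# on workload and a teaching proxy. Data and coefficients are invented.
rng = np.random.default_rng(0)
n = 120
workload = rng.uniform(1_000, 20_000, size=n)   # annual patient encounters
residents = rng.uniform(0, 30, size=n)          # resident FTEs (teaching proxy)
fte = 0.002 * workload + 0.5 * residents + 5.0 + rng.normal(0, 1.0, size=n)

X = np.column_stack([workload, residents, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, fte, rcond=None)

# Predicted staffing for each facility can be compared with actual staffing
# to flag departures from systemwide norms.
predicted = X @ coef
print(np.round(coef, 4))
```

Fitting recovers the assumed coefficients, and the fitted model can then produce staffing estimates conditional on projected workload, mirroring the two uses named in the findings.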

  17. Swahili women since the nineteenth century: theoretical and empirical considerations on gender and identity construction.

    PubMed

    Gower, R; Salm, S; Falola, T

    1996-01-01

This paper provides an analysis and update on the theoretical discussion about the link between gender and identity and uses a group of Swahili women in eastern Africa as an example of how this link works in practice. The first part of the study provides a brief overview of gender theory related to the terms "gender" and "identity." It is noted that gender is only one aspect of identity and that the concept of gender has undergone important changes such as the reconceptualization of the terms "sex" and "gender." The second part of the study synthesizes the experiences of Swahili women in the 19th century when the convergence of gender and class was very important. The status of Muslim women is reviewed, and it is noted that even influential women practiced purdah and that all Swahili women experienced discrimination, which inhibited their opportunities for socioeconomic mobility. Slavery and concubinage were widespread during this period, and the participation of Islamic women in spirit possession cults was a way for women to express themselves culturally. The separation of men and women in Swahili culture led to the development of two distinct subcultures, which excluded women from most aspects of public life. The third part of the study looks at the experiences of Swahili women since the 19th century both during and after the colonial period. It is shown that continuity exists in trends observed over a period of 200 years. For example, the mobility of Swahili women remains limited by Islam, but women do exert influence behind the scenes. It is concluded that the socioeconomic status of Swahili women has been shaped more by complex forces such as class, ethnicity, religion, and geographic area than by the oppression of Islam and colonialism. This study indicates that gender cannot be studied in isolation from other salient variables affecting identity. PMID:12292423

  18. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

This paper aims to examine innovation in information technology through both a theoretical and an empirical study. Specifically, both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. The paper discusses the major aspects of innovation, information technology, performance, and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) refers to SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are introduced briefly in the Introduction and discussed in more detail in later sections, in particular in the Literature Review, in terms of classical and current references. The increase in SMQR claims and communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relied on email, lengthened claim settlement times and ultimately caused suppliers to reject SMQR claims. To address this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system, as analyzed and designed, is expected to streamline claim communication so that it runs in accordance with procedure, meets the target claim settlement time, and eliminates the difficulties of the previous manual email-based communication. The system was designed using the system development life cycle method of Kendall & Kendall (2006), covering the SMQR problem communication process, the supplier judgment process, the claim process, the claim payment process, and the claim monitoring process. After the appropriate system design for managing SMQR claims was obtained, the system was implemented, and improvement in claim communication could be observed.

  19. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

Introduction: In a standards-based school system, alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  20. Theoretical and Empirical Underpinnings of the What Works Clearinghouse Attrition Standard for Randomized Controlled Trials

    ERIC Educational Resources Information Center

    Deke, John; Chiang, Hanley

    2014-01-01

    Meeting the What Works Clearinghouse (WWC) attrition standard (or one of the attrition standards based on the WWC standard) is now an important consideration for researchers conducting studies that could potentially be reviewed by the WWC (or other evidence reviews). Understanding the basis of this standard is valuable for anyone seeking to meet…

  1. Coping, acculturation, and psychological adaptation among migrants: a theoretical and empirical review and synthesis of the literature

    PubMed Central

    Kuo, Ben C.H.

    2014-01-01

    Given the continuous, dynamic demographic changes internationally due to intensive worldwide migration and globalization, the need to more fully understand how migrants adapt and cope with acculturation experiences in their new host cultural environment is imperative and timely. However, a comprehensive review of what we currently know about the relationship between coping behavior and acculturation experience for individuals undergoing cultural changes has not yet been undertaken. Hence, the current article aims to compile, review, and examine cumulative cross-cultural psychological research that sheds light on the relationships among coping, acculturation, and psychological and mental health outcomes for migrants. To this end, this present article reviews prevailing literature pertaining to: (a) the stress and coping conceptual perspective of acculturation; (b) four theoretical models of coping, acculturation and cultural adaptation; (c) differential coping pattern among diverse acculturating migrant groups; and (d) the relationship between coping variabilities and acculturation levels among migrants. In terms of theoretical understanding, this review points to the relative strengths and limitations associated with each of the four theoretical models on coping-acculturation-adaptation. These theories and the empirical studies reviewed in this article further highlight the central role of coping behaviors/strategies in the acculturation process and outcome for migrants and ethnic populations, both conceptually and functionally. Moreover, the review shows that across studies culturally preferred coping patterns exist among acculturating migrants and migrant groups and vary with migrants' acculturation levels. Implications and limitations of the existing literature for coping, acculturation, and psychological adaptation research are discussed and recommendations for future research are put forth. PMID:25750766

  2. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    PubMed Central

    Rönnberg, Jerker; Lunner, Thomas; Zekveld, Adriana; Sörqvist, Patrik; Danielsson, Henrik; Lyxell, Björn; Dahlström, Örjan; Signoret, Carine; Stenfelt, Stefan; Pichora-Fuller, M. Kathleen; Rudner, Mary

    2013-01-01

    Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made. PMID:23874273

  3. An Empirically Based Shaped Charge Jet Break-Up Model

    NASA Astrophysics Data System (ADS)

    Baker, Ernest; Pham, James; Vuong, Tan

    2013-06-01

This paper discusses an empirically based shaped charge jet break-up model based around Walsh's breakup theory and provides significant experimental confirmation over a broad range of velocity gradients. The parameters which affect jet length and breakup times are fairly well known, but there is some controversy over the exact nature of the dependencies. Walsh theorized that the dependence of jet length would take a particular form, based on his determination of a dimensionless parameter for the problem and numerical experiments in which initial perturbation strengths were varied. Walsh did not present comparisons with experimental results. Chou has presented a variety of different jet break-up models with some data comparisons. Mostert [3] has suggested that breakup time is proportional to (Δm/Δv)^(1/3). It is shown here that the parameter (Δm/Δv)^(1/2) or (dm/dv)^(1/3), closely related to Walsh's dimensionless parameter, whose values were obtained from either experiments or simulations, correlates quite well with jet breakup times for a very wide variety of shaped charge devices. The values of Δm and Δv are respectively the jet mass and the velocity difference of the portion of jet in question. For a typical shaped charge Δm/Δv is essentially invariant with respect to time. In this paper, we present the mathematical basis for an empirically based break-up model with a similar basis to Walsh and Mostert, as well as supporting empirical data for a broad range of shaped charge geometries.
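The breakup-time correlate named in the abstract can be sketched numerically. The jet profile and the proportionality constant k below are invented for illustration; they are not the paper's data.

```python
import numpy as np

# Hypothetical sketch: compute dm/dv along a jet and a breakup-time estimate
# t_b = k * (dm/dv)**(1/3). Profile and constant k are assumptions.
velocity = np.linspace(8.0, 2.0, 50)     # segment velocities, km/s (tip to tail)
mass = np.linspace(0.5, 2.5, 50)         # segment masses, g (assumed profile)

dm_dv = np.abs(np.gradient(mass) / np.gradient(velocity))
k = 40.0                                 # assumed proportionality constant
breakup_time = k * dm_dv ** (1.0 / 3.0)  # illustrative units

# For this linear profile dm/dv is constant along the jet, mirroring the
# remark that Δm/Δv is essentially invariant for a typical charge.
print(breakup_time.mean())
```

Swapping the exponent for 1/2 gives the alternative (Δm/Δv)^(1/2) parameter discussed in the abstract.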

  4. Periodic limb movements of sleep: empirical and theoretical evidence supporting objective at-home monitoring

    PubMed Central

    Moro, Marilyn; Goparaju, Balaji; Castillo, Jelina; Alameddine, Yvonne; Bianchi, Matt T

    2016-01-01

    Introduction Periodic limb movements of sleep (PLMS) may increase cardiovascular and cerebrovascular morbidity. However, most people with PLMS are either asymptomatic or have nonspecific symptoms. Therefore, predicting elevated PLMS in the absence of restless legs syndrome remains an important clinical challenge. Methods We undertook a retrospective analysis of demographic data, subjective symptoms, and objective polysomnography (PSG) findings in a clinical cohort with or without obstructive sleep apnea (OSA) from our laboratory (n=443 with OSA, n=209 without OSA). Correlation analysis and regression modeling were performed to determine predictors of periodic limb movement index (PLMI). Markov decision analysis with TreeAge software compared strategies to detect PLMS: in-laboratory PSG, at-home testing, and a clinical prediction tool based on the regression analysis. Results Elevated PLMI values (>15 per hour) were observed in >25% of patients. PLMI values in No-OSA patients correlated with age, sex, self-reported nocturnal leg jerks, restless legs syndrome symptoms, and hypertension. In OSA patients, PLMI correlated only with age and self-reported psychiatric medications. Regression models indicated only a modest predictive value of demographics, symptoms, and clinical history. Decision modeling suggests that at-home testing is favored as the pretest probability of PLMS increases, given plausible assumptions regarding PLMS morbidity, costs, and assumed benefits of pharmacological therapy. Conclusion Although elevated PLMI values were commonly observed, routinely acquired clinical information had only weak predictive utility. As the clinical importance of elevated PLMI continues to evolve, it is likely that objective measures such as PSG or at-home PLMS monitors will prove increasingly important for clinical and research endeavors. PMID:27540316

  5. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation sigma(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation sigma(R) on the average value of the wages with a scaling exponent beta approximately 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation sigma(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of sigma(R) on the average payroll with a scaling exponent beta approximately -0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable. PMID:18643131

  6. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
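The fluctuation scaling described in the two records above can be reproduced in simulation: draw Laplace-distributed growth rates whose standard deviation follows σ(S) = a·S^(-β), then recover β from a log-log fit. The parameter values below are assumptions chosen to echo the reported β ≈ 0.14.

```python
import numpy as np

# Simulate size-dependent growth-rate fluctuations and recover the scaling
# exponent beta. Values of a and beta_true are illustrative assumptions.
rng = np.random.default_rng(42)
beta_true, a = 0.14, 0.5

sizes = np.logspace(2, 8, 40)                   # average sizes S
sigmas = a * sizes ** (-beta_true)              # target standard deviations
# For a Laplace distribution, sd = scale * sqrt(2), hence scale = sd / sqrt(2).
sd_est = np.array([
    rng.laplace(0.0, s / np.sqrt(2.0), 5000).std() for s in sigmas
])

slope, _ = np.polyfit(np.log(sizes), np.log(sd_est), 1)
beta_est = -slope
print(round(beta_est, 3))
```

The fitted exponent lands close to the assumed 0.14, showing how the power-law dependence of σ(R) on size appears as a straight line in log-log coordinates.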

  7. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
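A minimal agent-based model in the spirit of the practices reviewed above can be sketched in a few lines: a threshold model of adoption on a ring lattice. The rules and parameter values are illustrative assumptions, not drawn from the article.

```python
import random

# Threshold adoption model on a ring lattice (illustrative assumptions).
random.seed(1)
N, NEIGHBORS, THRESHOLD, STEPS = 200, 4, 0.25, 50

adopted = [False] * N
for i in random.sample(range(N), 10):      # seed 5% early adopters
    adopted[i] = True

def neighbor_ids(i):
    half = NEIGHBORS // 2
    return [(i + d) % N for d in range(-half, half + 1) if d != 0]

for _ in range(STEPS):
    snapshot = adopted[:]                  # synchronous update
    for i in range(N):
        if not snapshot[i]:
            frac = sum(snapshot[j] for j in neighbor_ids(i)) / NEIGHBORS
            if frac >= THRESHOLD:
                adopted[i] = True

# With this low threshold a single adopting neighbor suffices, so adoption
# spreads outward from each seed cluster.
print(sum(adopted), "of", N, "agents adopted")
```

Sensitivity testing of the kind the article recommends would amount to sweeping THRESHOLD, the seed fraction, and the network structure and observing how final adoption changes.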

  8. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms by which the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  9. Video watermarking with empirical PCA-based decoding.

    PubMed

    Khalilian, Hanieh; Bajic, Ivan V

    2013-12-01

    A new method for video watermarking is presented in this paper. In the proposed method, data are embedded in the LL subband of wavelet coefficients, and decoding is performed based on the comparison among the elements of the first principal component resulting from empirical principal component analysis (PCA). The locations for data embedding are selected such that they offer the most robust PCA-based decoding. Data are inserted in the LL subband in an adaptive manner based on the energy of high frequency subbands and visual saliency. Extensive testing was performed under various types of attacks, such as spatial attacks (uniform and Gaussian noise and median filtering), compression attacks (MPEG-2, H.263, and H.264), and temporal attacks (frame repetition, frame averaging, frame swapping, and frame rate conversion). The results show that the proposed method offers improved performance compared with several methods from the literature, especially under additive noise and compression attacks.
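
    A minimal sketch of the decoding idea, reading a bit by comparing elements of the first principal component, assuming synthetic stand-in data rather than actual wavelet LL-subband coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_pc(X):
    """First principal component (leading right singular vector) of the
    mean-centred data matrix X, whose rows are observations."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Hypothetical stand-in for embedded LL-subband coefficients: the samples
# vary mostly along a fixed direction v, so v dominates the first PC.
# Here |v[0]| > |v[1]| plays the role of an embedded "1" bit.
v = np.array([0.8, 0.3, 0.5])
t = rng.normal(size=200)
X = np.outer(t, v) + 0.01 * rng.normal(size=(200, 3))

pc = first_pc(X)
bit = int(abs(pc[0]) > abs(pc[1]))  # decode by comparing PC elements
```

    Absolute values are compared because the sign of a singular vector is arbitrary; the paper's actual embedding and location selection are more elaborate than this toy.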

  10. Comparison between empirical and physically based models of atmospheric correction

    NASA Astrophysics Data System (ADS)

    Mandanici, E.; Franci, F.; Bitelli, G.; Agapiou, A.; Alexakis, D.; Hadjimitsis, D. G.

    2015-06-01

    A number of methods have been proposed for the atmospheric correction of multispectral satellite images, based either on atmosphere modelling or on the images themselves. Full radiative transfer models require a lot of ancillary information about the atmospheric conditions at the acquisition time, whereas image-based methods cannot account for all the phenomena involved. Therefore, the aim of this paper is the comparison of different atmospheric correction methods for multispectral satellite images. The experimentation was carried out on a study area located in the catchment area of the Yialias river, 20 km south of Nicosia, the capital of Cyprus. The following models were tested, both empirical and physically based: Dark object subtraction, QUAC, Empirical line, 6SV, and FLAASH. They were applied on a Landsat 8 multispectral image. The spectral signatures of ten different land cover types were measured during a field campaign in 2013, and 15 samples were collected for laboratory measurements in a second campaign in 2014. A GER 1500 spectroradiometer was used; this instrument can record electromagnetic radiation from 350 up to 1050 nm, includes 512 different channels, and each channel covers about 1.5 nm. The spectral signatures measured were used to simulate the reflectance values for the multispectral sensor bands by applying relative spectral response filters. These data were considered as ground truth to assess the accuracy of the different image correction models. The results do not allow us to establish which method is the most accurate: the physics-based methods describe the shape of the signatures better, whereas the image-based models perform better regarding the overall albedo.
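
    Among the tested models, dark object subtraction is simple enough to sketch in a few lines; the percentile choice and the synthetic haze value below are illustrative assumptions:

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Minimal dark-object subtraction: assume the darkest pixels should
    have near-zero reflectance, so whatever they record is treated as an
    additive atmospheric (haze) term and removed. A low percentile is
    used instead of the strict minimum to be robust to sensor noise."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0.0, None)

# Synthetic band: true surface reflectance plus a uniform additive haze.
rng = np.random.default_rng(2)
true_refl = rng.uniform(0.0, 0.4, size=(100, 100))
observed = true_refl + 0.05          # 0.05 = assumed haze contribution
corrected = dark_object_subtraction(observed)
```
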

  11. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
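
    The reduction of measured torque data to single-variable second-degree polynomials can be sketched with an ordinary least-squares fit; the coefficients, velocity range, and noise level below are hypothetical, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical isolated-joint data: maximum torque (N*m) measured across
# joint angular velocities (deg/s) at one fixed joint position.
velocity = np.linspace(-120, 120, 25)
true_coeffs = [-0.001, -0.15, 60.0]          # assumed, for illustration
torque = np.polyval(true_coeffs, velocity) + rng.normal(0, 0.5, velocity.size)

# Least-squares second-degree polynomial: one entry in the model's table
# of single-variable torque equations.
coeffs = np.polyfit(velocity, torque, deg=2)

def predicted_torque(v):
    """Torque predicted by the fitted polynomial at angular velocity v."""
    return np.polyval(coeffs, v)
```

    Repeating such fits per joint, plane, and position yields the table of equations the model composes into predictions for composite motions.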

  12. Developing an empirical base for clinical nurse specialist education.

    PubMed

    Stahl, Arleen M; Nardi, Deena; Lewandowski, Margaret A

    2008-01-01

    This article reports on the design of a clinical nurse specialist (CNS) education program using National Association of Clinical Nurse Specialists (NACNS) CNS competencies to guide CNS program clinical competency expectations and curriculum outcomes. The purpose is to contribute to the development of an empirical base for education and credentialing of CNSs. The NACNS CNS core competencies and practice competencies in all 3 spheres of influence guided the creation of clinical competency grids for this university's practicum courses. This project describes the development, testing, and application of these clinical competency grids that link the program's CNS clinical courses with the NACNS CNS competencies. These documents guide identification, tracking, measurement, and evaluation of the competencies throughout the clinical practice portion of the CNS program. This ongoing project will continue to provide data necessary to the benchmarking of CNS practice competencies, which is needed to evaluate the effectiveness of direct practice performance and the currency of graduate nursing education. PMID:18438164

  13. Open-circuit sensitivity model based on empirical parameters for a capacitive-type MEMS acoustic sensor

    NASA Astrophysics Data System (ADS)

    Lee, Jaewoo; Jeon, J. H.; Je, C. H.; Lee, S. Q.; Yang, W. S.; Lee, S.-G.

    2016-03-01

    An empirical-based open-circuit sensitivity model for a capacitive-type MEMS acoustic sensor is presented. To intuitively evaluate the characteristic of the open-circuit sensitivity, the empirical-based model is proposed and analysed by using a lumped spring-mass model and a pad test sample without a parallel plate capacitor for the parasitic capacitance. The model is composed of three different parameter groups: empirical, theoretical, and mixed data. The empirical residual stress was extracted as +13 MPa from the measured pull-in voltage of 16.7 V and the measured surface topology of the diaphragm, resulting in an effective spring constant of 110.9 N/m. The parasitic capacitance for two probing pads including the substrate part was 0.25 pF. Furthermore, to verify the proposed model, the modelled open-circuit sensitivity was compared with the measured value. The MEMS acoustic sensor had an open-circuit sensitivity of -43.0 dBV/Pa at 1 kHz with a bias of 10 V, while the modelled open-circuit sensitivity was -42.9 dBV/Pa, which showed good agreement in the range from 100 Hz to 18 kHz. This validates the empirical-based open-circuit sensitivity model for designing capacitive-type MEMS acoustic sensors.
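
    A heavily simplified, illustrative lumped-parameter calculation of how a parasitic capacitance loads the open-circuit output of a capacitive sensor. Only C_p = 0.25 pF comes from the abstract; the sensor capacitance C_m and intrinsic sensitivity S0 are assumed values chosen so the loaded result lands near the reported level, and this is not the authors' full model:

```python
import math

# Capacitive-divider loading: the parasitic capacitance C_p shunts the
# sensor capacitance C_m, attenuating the open-circuit voltage output.
C_m = 2.0e-12    # sensor capacitance (F) -- assumed for illustration
C_p = 0.25e-12   # parasitic capacitance (F) -- from the abstract

S0 = 8.0e-3      # intrinsic sensitivity (V/Pa) -- assumed
attenuation = C_m / (C_m + C_p)   # divider loading factor
S = S0 * attenuation              # loaded open-circuit sensitivity (V/Pa)
S_dB = 20 * math.log10(S)         # expressed in dBV/Pa
```
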

  14. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    ERIC Educational Resources Information Center

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  15. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies

    PubMed Central

    2016-01-01

    Background Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers’ activities and their contextual components or constructs to pursue these health-related goals. Establishing such a framework is important because it is the first step toward fully operationalizing health SQ. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers’ health systematically. Objective The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? Methods A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. Results The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and

  16. Empirical Likelihood-Based ANOVA for Trimmed Means

    PubMed Central

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063
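
    For the two-group case, Yuen's trimmed-means test, which the EL ANOVA discussed above generalizes to several groups, can be sketched with SciPy; the trimming level and synthetic samples are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Two skewed samples drawn from the same distribution, so their
# population trimmed means are equal.
a = rng.exponential(1.0, 200)
b = rng.exponential(1.0, 200)

gamma = 0.2                         # proportion trimmed from each tail
tm_a = stats.trim_mean(a, gamma)
tm_b = stats.trim_mean(b, gamma)

# Yuen's trimmed t-test for two groups; SciPy (>= 1.7) exposes it through
# the `trim` argument of ttest_ind (Welch-type, unequal variances).
res = stats.ttest_ind(a, b, equal_var=False, trim=gamma)
```
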

  17. Developing a Comprehensive, Empirically Based Research Framework for Classroom-Based Assessment

    ERIC Educational Resources Information Center

    Hill, Kathryn; McNamara, Tim

    2012-01-01

    This paper presents a comprehensive framework for researching classroom-based assessment (CBA) processes, and is based on a detailed empirical study of two Australian school classrooms where students aged 11 to 13 were studying Indonesian as a foreign language. The framework can be considered innovative in several respects. It goes beyond the…

  18. Meaningful learning: theoretical support for concept-based teaching.

    PubMed

    Getha-Eby, Teresa J; Beery, Theresa; Xu, Yin; O'Brien, Beth A

    2014-09-01

    Novice nurses’ inability to transfer classroom knowledge to the bedside has been implicated in adverse patient outcomes, including death. Concept-based teaching is a pedagogy found to improve knowledge transfer. Concept-based teaching emanates from a constructivist paradigm of teaching and learning and can be implemented most effectively when the underlying theory and principles are applied. Ausubel’s theory of meaningful learning and its construct of substantive knowledge integration provides a model to help educators to understand, implement, and evaluate concept-based teaching. Contemporary findings from the fields of cognitive psychology, human development, and neurobiology provide empirical evidence of the relationship between concept-based teaching, meaningful learning, and knowledge transfer. This article describes constructivist principles and meaningful learning as they apply to nursing pedagogy.

  19. A patient-centered pharmacy services model of HIV patient care in community pharmacy settings: a theoretical and empirical framework.

    PubMed

    Kibicho, Jennifer; Owczarzak, Jill

    2012-01-01

    Reflecting trends in health care delivery, pharmacy practice has shifted from a drug-specific to a patient-centered model of care, aimed at improving the quality of patient care and reducing health care costs. In this article, we outline a theoretical model of patient-centered pharmacy services (PCPS), based on in-depth, qualitative interviews with a purposive sample of 28 pharmacists providing care to HIV-infected patients in specialty, semispecialty, and nonspecialty pharmacy settings. Data analysis was an interactive process informed by pharmacists' interviews and a review of the general literature on patient-centered care, including Medication Therapy Management (MTM) services. Our main finding was that the current models of pharmacy services, including MTM, do not capture the range of pharmacy services in excess of mandated drug dispensing services. In this article, we propose a theoretical PCPS model that reflects the actual services pharmacists provide. The model includes five elements: (1) addressing patients as whole, contextualized persons; (2) customizing interventions to unique patient circumstances; (3) empowering patients to take responsibility for their own health care; (4) collaborating with clinical and nonclinical providers to address patient needs; and (5) developing sustained relationships with patients. The overarching goal of PCPS is to empower patients to take responsibility for their own health care and self-manage their HIV infection. Our findings provide the foundation for future studies regarding how widespread these practices are in diverse community settings, the validity of the proposed PCPS model, the potential for standardizing pharmacist practices, and the feasibility of a PCPS framework to reimburse pharmacists' services.

  20. MODELING OF 2LIBH4 PLUS MGH2 HYDROGEN STORAGE SYSTEM ACCIDENT SCENARIOS USING EMPIRICAL AND THEORETICAL THERMODYNAMICS

    SciTech Connect

    James, C.; Tamburello, D.; Gray, J.; Brinkman, K.; Hardy, B.; Anton, D.

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed phase hydrogen storage materials under differing environmental exposure scenarios. This paper describes a modeling and experimental study with the aim of predicting consequences of the accidental release of 2LiBH4+MgH2 from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released, energy release rates and to quantify the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH4 and MgH2, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell using neutral water. Water vapor and oxygen gas phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN) based test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems.
Thermodynamic parameters obtained from these tests were then inputted into a computational fluid dynamics model to

  1. Phospholipid-based nonlamellar mesophases for delivery systems: bridging the gap between empirical and rational design.

    PubMed

    Martiel, Isabelle; Sagalowicz, Laurent; Mezzenga, Raffaele

    2014-07-01

    Phospholipids are ubiquitous cell membrane components and relatively well-accepted ingredients due to their natural origin. Phosphatidylcholine (PC) in particular offers a promising alternative to monoglycerides for lyotropic liquid crystalline (LLC) delivery system applications in the food, cosmetics and pharmaceutical industries, provided its strong tendency to form zero-mean curvature lamellar mesophases in water can be overcome. Higher negative curvatures are usually reached through the addition of a third lipid component, forming a ternary diagram phospholipid/water/oil. The initial part of this work summarizes the potential advantages and the challenges of phospholipid-based delivery system applications. In the next part, various ternary PC/water/oil systems are discussed, with a special emphasis on the PC/water/cyclohexane and PC/water/α-tocopherol systems. We report that R-(+)-limonene has a quantitatively similar effect to that of cyclohexane. The last part is devoted to the theoretical interpretation of the observed phase behaviors. A fruitful parallel is drawn with PC polymer-like reverse micelles, leading to a thermodynamic description in terms of interfacial bending energy. Investigations at the molecular level are reviewed to help bridge the empirical and theoretical approaches. Predictive rules are finally derived from this wide-ranging overview, thereby opening the way to a future rational design of PC-based LLC delivery systems. PMID:24685272

  2. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically-adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically-derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate unchecked with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. Summary The "empirical turn" in bioethics signals a need for

  3. Empirical estimates and theoretical predictions of the shorting factor for the THEMIS double-probe electric field instrument

    NASA Astrophysics Data System (ADS)

    Califf, S.; Cully, C. M.

    2016-07-01

    Double-probe electric field measurements on board spacecraft present significant technical challenges, especially in the inner magnetosphere where the ambient plasma characteristics can vary dramatically and alter the behavior of the instrument. We explore the shorting factor for the Time History of Events and Macroscale Interactions during Substorms electric field instrument, which is a scale factor error on the measured electric field due to coupling between the sensing spheres and the long wire booms, using both an empirical technique and through simulations with varying levels of fidelity. The empirical data and simulations both show that there is effectively no shorting when the spacecraft is immersed in high-density plasma deep within the plasmasphere and that shorting becomes more prominent as plasma density decreases and the Debye length increases outside the plasmasphere. However, there is a significant discrepancy between the data and theory for the shorting factor in low-density plasmas: the empirical estimate indicates ~0.7 shorting for long Debye lengths, but the simulations predict a shorting factor of ~0.94. This paper systematically steps through the empirical and modeling methods leading to the disagreement with the intention of motivating further study on the topic.

  4. Fleet Fatality Risk and its Sensitivity to Vehicle Mass Change in Frontal Vehicle-to-Vehicle Crashes, Using a Combined Empirical and Theoretical Model.

    PubMed

    Shi, Yibing; Nusholtz, Guy S

    2015-11-01

    The objective of this study is to analytically model the fatality risk in frontal vehicle-to-vehicle crashes of the current vehicle fleet, and its sensitivity to vehicle mass change. A model is built upon an empirical risk ratio-mass ratio relationship from field data and a theoretical mass ratio-velocity change ratio relationship dictated by conservation of momentum. The fatality risk of each vehicle is averaged over the closing velocity distribution to arrive at the mean fatality risks. The risks of the two vehicles are summed and averaged over all possible crash partners to find the societal mean fatality risk associated with a subject vehicle of a given mass from a fleet specified by a mass distribution function. Based on risk exponent and mass distribution from a recent fleet, the subject vehicle mean fatality risk is shown to increase, while at the same time that for the partner vehicles decreases, as the mass of the subject vehicle decreases. The societal mean fatality risk, the sum of these, incurs a penalty with respect to a fleet with complete mass equality. This penalty reaches its minimum (~8% for the example fleet) for crashes with a subject vehicle whose mass is close to the fleet mean mass. The sensitivity, i.e., the rate of change of the societal mean fatality risk with respect to the mass of the subject vehicle is assessed. Results from two sets of fully regression-based analyses, Kahane (2012) and Van Auken and Zellner (2013), are approximately compared with the current result. The general magnitudes of the results are comparable, but differences exist at a more detailed level. The subject vehicle-oriented societal mean fatality risk is averaged over all possible subject vehicle masses of a given fleet to obtain the overall mean fatality risk of the fleet. It is found to increase approximately linearly at a rate of about 0.8% for each 100 lb decrease in mass of all vehicles in the fleet.
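
    The theoretical mass ratio to velocity-change ratio relationship invoked above follows from conservation of momentum; a minimal sketch for an idealized, perfectly plastic head-on collision, with hypothetical masses and closing speed:

```python
# Conservation of momentum for a perfectly plastic frontal collision:
# each vehicle's delta-v is its partner's share of the total mass times
# the closing speed, so delta_v1 / delta_v2 = m2 / m1 -- the lighter
# vehicle always absorbs the larger velocity change.
def delta_vs(m1, m2, closing_speed):
    """Return (delta_v1, delta_v2) for a plastic frontal collision."""
    dv1 = m2 / (m1 + m2) * closing_speed
    dv2 = m1 / (m1 + m2) * closing_speed
    return dv1, dv2

# Hypothetical example: a 1100 kg car meets a 2200 kg SUV, 50 km/h closing.
dv_light, dv_heavy = delta_vs(1100.0, 2200.0, 50.0)
```

    Averaging a risk function of these delta-v values over the closing-velocity and fleet mass distributions is the step that produces the societal mean fatality risk in the paper.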

  5. A Physically Based Theoretical Model of Spore Deposition for Predicting Spread of Plant Diseases.

    PubMed

    Isard, Scott A; Chamecki, Marcelo

    2016-03-01

    A physically based theory for predicting spore deposition downwind from an area source of inoculum is presented. The modeling framework is based on theories of turbulence dispersion in the atmospheric boundary layer and applies only to spores that escape from plant canopies. A "disease resistance" coefficient is introduced to convert the theoretical spore deposition model into a simple tool for predicting disease spread at the field scale. Results from the model agree well with published measurements of Uromyces phaseoli spore deposition and measurements of wheat leaf rust disease severity. The theoretical model has the advantage over empirical models in that it can be used to assess the influence of source distribution and geometry, spore characteristics, and meteorological conditions on spore deposition and disease spread. The modeling framework is refined to predict the detailed two-dimensional spatial pattern of disease spread from an infection focus. Accounting for the time variations of wind speed and direction in the refined modeling procedure improves predictions, especially near the inoculum source, and enables application of the theoretical modeling framework to field experiment design. PMID:26595112

  7. Collaborative filtering based on information-theoretic co-clustering

    NASA Astrophysics Data System (ADS)

    Liang, Changyong; Leng, Yajun

    2014-03-01

    Collaborative filtering is one of the most popular recommendation techniques, which provides personalised recommendations based on users' tastes. In spite of its huge success, it suffers from a range of problems, the most fundamental being that of data sparsity. Sparsity in ratings leads to the formation of inaccurate neighbourhoods, thereby resulting in poor recommendations. To address this issue, in this article, we propose a novel collaborative filtering approach based on information-theoretic co-clustering. The proposed approach computes two types of similarities: cluster preference and rating, and combines them. Based on the combined similarity, the user-based and item-based approaches are adopted, respectively, to obtain individual predictions for an unknown target rating. Finally, the proposed approach fuses these resultant predictions. Experimental results show that the proposed approach is superior to existing alternatives.
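
    A toy sketch of the combined-similarity prediction step, mixing a rating similarity with a cluster-preference similarity, on a hypothetical rating matrix; the 0.5/0.5 weighting and the cluster labels are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def cosine_sim(u, v, mask):
    """Cosine similarity restricted to co-rated items (mask = both rated)."""
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Tiny user-item rating matrix (0 = unrated). The cluster labels stand in
# for a co-clustering output.
R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 5, 4]], dtype=float)
user_cluster = np.array([0, 0, 1])

target_user, target_item = 1, 1     # predict the missing R[1, 1]
weights, values = [], []
for u in range(R.shape[0]):
    if u == target_user or R[u, target_item] == 0:
        continue
    mask = (R[u] > 0) & (R[target_user] > 0)
    rating_sim = cosine_sim(R[u], R[target_user], mask)
    cluster_sim = 1.0 if user_cluster[u] == user_cluster[target_user] else 0.0
    sim = 0.5 * rating_sim + 0.5 * cluster_sim   # combined similarity
    weights.append(sim)
    values.append(R[u, target_item])

prediction = float(np.dot(weights, values) / np.sum(weights))
```

    The same-cluster neighbour dominates the weighted average, which is the intended effect of blending cluster preference into the similarity.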

  8. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can cite is the so-called "rationality" of today's society. Simple "common sense" leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and for applied sciences like engineering, but not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that this gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach and cannot be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  9. E-learning in engineering education: a theoretical and empirical study of the Algerian higher education institution

    NASA Astrophysics Data System (ADS)

    Benchicou, Soraya; Aichouni, Mohamed; Nehari, Driss

    2010-06-01

    Technology-mediated education or e-learning is growing globally both in scale and delivery capacity due to the large diffusion of the ubiquitous information and communication technologies (ICT) in general and the web technologies in particular. This statement has not yet been fully supported by research, especially in developing countries such as Algeria. The purpose of this paper was to identify directions for addressing the needs of academics in higher education institutions in Algeria in order to adopt the e-learning approach as a strategy to improve quality of education. The paper will report results of an empirical study that measures the readiness of the Algerian higher education institutions towards the implementation of ICT in the educational process and the attitudes of faculty members towards the application of the e-learning approach in engineering education. Three main objectives were targeted, namely: (a) to provide an initial evaluation of faculty members' attitudes and perceptions towards web-based education; (b) reporting on their perceived requirements for implementing e-learning in university courses; (c) providing an initial input for a collaborative process of developing an institutional strategy for e-learning. Statistical analysis of the survey results indicates that the Algerian higher education institution, which adopted the Licence - Master and Doctorate educational system, is facing a big challenge to take advantage of emerging technological innovations and the advent of e-learning to further develop its teaching programmes and to enhance the quality of education in engineering fields. The successful implementation of this modern approach is shown to depend largely on a set of critical success factors that would include: 1. The extent to which the institution will adopt a formal and official e-learning strategy. 2. The extent to which faculty members will adhere to and adopt this strategy and develop ownership of the various measures in the

  10. EMT - Empirical-mode-decomposition-based Magneto-Telluric Processing

    NASA Astrophysics Data System (ADS)

    Neukirch, M.; Garcia, X.

    2012-04-01

    We present a new Magneto-Telluric (MT) data processing scheme based on an emerging nonlinear, non-stationary time series analysis tool, called the Empirical Mode Decomposition (EMD) or Hilbert-Huang Transform (HHT), to transform data into a non-stationary frequency domain, and a robust principal component regression to estimate the most likely MT transfer functions from the data, with 2-σ confidence intervals computed by a bootstrap algorithm. Optionally, data quality can be controlled by a physical coherence and a signal power filter. MT sources are assumed to be quasi-stationary, and therefore a (windowed) Fourier Transform is often applied to transform the time series into the frequency domain, in which Transfer Functions (TF) are defined between the electromagnetic field components. This assumption can break down in the presence of noise or when the sources are non-stationary, and TF estimates can then become unreliable when obtained through a stationary transform like the Fourier transform. Our TF estimation scheme naturally deals with non-stationarity without introducing artifacts and can therefore potentially distinguish quasi-stationary sources from non-stationary noise. In contrast to previous works on using HHT for MT processing, we argue the necessity of a multivariate EMD to model the MT problem in a physically correct way, and highlight the resulting possibility of using instantaneous parameters as independent and identically distributed variables. Furthermore, we define a homogenization between data channels of frequency discrepancies due to non-stationarity and noise. The TF estimation in the frequency domain is based on a robust principal component analysis that finds the two source polarizations. These two principal components are used as predictors to robustly regress the data channels within a bootstrap algorithm, estimating the Earth's transfer function with a 2-σ confidence interval supplied by the measured data. The scheme can be used with and without
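
    The core EMD step described above, sifting a signal into intrinsic mode functions by repeatedly subtracting the mean of its extremal envelopes, can be sketched as follows. This is a minimal univariate illustration with piecewise-linear envelopes, not the multivariate EMD the authors argue for; function names, envelope choice, and the test signal are illustrative.

```python
import math

def local_extrema(x):
    """Indices of interior local maxima and minima of a 1-D sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i - 1] < x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i - 1] > x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def envelope(indices, values, n):
    """Piecewise-linear envelope through (indices, values), clamped at the ends."""
    out = []
    for i in range(n):
        if i <= indices[0]:
            out.append(values[0])
        elif i >= indices[-1]:
            out.append(values[-1])
        else:
            k = max(j for j in range(len(indices)) if indices[j] <= i)
            t = (i - indices[k]) / (indices[k + 1] - indices[k])
            out.append(values[k] + t * (values[k + 1] - values[k]))
    return out

def sift_imf(x, n_iter=10):
    """First intrinsic mode function: repeatedly subtract the mean envelope."""
    h = list(x)
    for _ in range(n_iter):
        maxima, minima = local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = envelope(maxima, [h[i] for i in maxima], len(h))
        lower = envelope(minima, [h[i] for i in minima], len(h))
        h = [hi - (u + l) / 2 for hi, u, l in zip(h, upper, lower)]
    return h

# two-tone test signal: a fast oscillation riding on a slow one
n = 400
x = [math.sin(2 * math.pi * 25 * i / n) + math.sin(2 * math.pi * 3 * i / n)
     for i in range(n)]
imf1 = sift_imf(x)                           # ~ the fast component
residual = [a - b for a, b in zip(x, imf1)]  # ~ the slow component
```

    Production implementations use cubic-spline envelopes and principled stopping criteria; the point here is only the envelope-mean subtraction that makes EMD adaptive to non-stationary content.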

  11. Mindfulness-based treatment to prevent addictive behavior relapse: theoretical models and hypothesized mechanisms of change.

    PubMed

    Witkiewitz, Katie; Bowen, Sarah; Harrop, Erin N; Douglas, Haley; Enkema, Matthew; Sedgwick, Carly

    2014-04-01

    Mindfulness-based treatments are growing in popularity among addiction treatment providers, and several studies suggest the efficacy of incorporating mindfulness practices into the treatment of addiction, including the treatment of substance use disorders and behavioral addictions (i.e., gambling). The current paper provides a review of theoretical models of mindfulness in the treatment of addiction and several hypothesized mechanisms of change. We provide an overview of mindfulness-based relapse prevention (MBRP), including session content, treatment targets, and client feedback from participants who have received MBRP in the context of empirical studies. Future research directions regarding operationalization and measurement, identifying factors that moderate treatment effects, and protocol adaptations for specific populations are discussed.

  12. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions, provided there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance over a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent and divalent ions, utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal.
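
    The dimensionless-group approach described above can be illustrated as follows. The Sherwood-type power-law form and all coefficients and operating values are generic placeholders, not the correlations actually fitted in the paper.

```python
def dimensionless_groups(velocity, hydraulic_diameter, kinematic_viscosity, diffusivity):
    """Reynolds and Schmidt numbers for flow in an ED/EDR channel (SI units)."""
    Re = velocity * hydraulic_diameter / kinematic_viscosity
    Sc = kinematic_viscosity / diffusivity
    return Re, Sc

def sherwood_power_law(Re, Sc, a=0.5, b=0.5, c=1.0 / 3.0):
    """Generic mass-transfer correlation Sh = a * Re**b * Sc**c.
    The coefficients are illustrative placeholders, not the paper's fit."""
    return a * Re ** b * Sc ** c

# illustrative brackish-water channel conditions
Re, Sc = dimensionless_groups(velocity=0.05, hydraulic_diameter=1e-3,
                              kinematic_viscosity=1e-6, diffusivity=1.3e-9)
Sh = sherwood_power_law(Re, Sc)   # dimensionless mass-transfer rate
```

    A fitted correlation of this shape lets performance measured at laboratory scale be mapped to any geometry that reproduces the same dimensionless groups, which is what makes the models scale-independent.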

  13. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions, provided there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance over a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent and divalent ions, utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. PMID:27108213

  14. Comparison of empirical, semi-empirical and physically based models of soil hydraulic functions derived for bi-modal soils

    NASA Astrophysics Data System (ADS)

    Kutílek, M.; Jendele, L.; Krejča, M.

    2009-02-01

    The accelerated flow in soil pores is responsible for rapid transport of pollutants from the soil surface to deeper layers and on to groundwater. The term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain of bi-modal soils. We compared equations describing the soil water retention function h(θ) and the unsaturated hydraulic conductivity K(h), or alternatively K(θ), modified for bi-modal soils, where θ is the soil water content and h is the pressure head. An analytical description of a curve passing through experimental data sets of a soil hydraulic function is typical of an empirical equation, characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak of a physically based model. Several transitional subtypes exist between empirical and physically based models; they are denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets for sand, silt, silt loam and loam. All of the soils used are characterized by bi-modality of their porous systems. The model efficiency was estimated by RMSE (root mean square error) and by RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function. With this equation, the fit of the modelled data to the experiments was the closest. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data, due to the rigidity and simplicity of the physical model when compared to the real soil
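
    The model-efficiency measures used above can be sketched as below. Definitions of RSE vary across the literature; the normalisation used here is one common convention and may differ from the paper's, and the water-content data are illustrative.

```python
import math

def rmse(observed, modelled):
    """Root mean square error of a fitted retention curve."""
    n = len(observed)
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(observed, modelled)) / n)

def rse(observed, modelled):
    """Relative square error: squared residuals normalised by the squared
    deviations of the observations from their mean (one common convention)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - m) ** 2 for o, m in zip(observed, modelled))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return num / den

# illustrative water-content (theta) measurements and a model fit
theta_obs = [0.42, 0.38, 0.30, 0.22, 0.15]
theta_mod = [0.41, 0.37, 0.31, 0.23, 0.16]
error = rmse(theta_obs, theta_mod)
relative_error = rse(theta_obs, theta_mod)
```

    RMSE keeps the units of the observed quantity, whereas RSE is dimensionless, which is why the two are reported together when comparing retention and conductivity models with different magnitudes.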

  15. Comparison of empirical, semi-empirical and physically based models of soil hydraulic functions derived for bi-modal soils.

    PubMed

    Kutílek, M; Jendele, L; Krejca, M

    2009-02-16

    The accelerated flow in soil pores is responsible for rapid transport of pollutants from the soil surface to deeper layers and on to groundwater. The term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain of bi-modal soils. We compared equations describing the soil water retention function h(theta) and the unsaturated hydraulic conductivity K(h), or alternatively K(theta), modified for bi-modal soils, where theta is the soil water content and h is the pressure head. An analytical description of a curve passing through experimental data sets of a soil hydraulic function is typical of an empirical equation, characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak of a physically based model. Several transitional subtypes exist between empirical and physically based models; they are denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets for sand, silt, silt loam and loam. All of the soils used are characterized by bi-modality of their porous systems. The model efficiency was estimated by RMSE (root mean square error) and by RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function. With this equation, the fit of the modelled data to the experiments was the closest. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data, due to the rigidity and simplicity of the physical model when compared to the

  16. The Importance of Emotion in Theories of Motivation: Empirical, Methodological, and Theoretical Considerations from a Goal Theory Perspective

    ERIC Educational Resources Information Center

    Turner, Julianne C.; Meyer, Debra K.; Schweinle, Amy

    2003-01-01

    Despite its importance to educational psychology, emotion has mostly been ignored by prominent theories of motivation. In this paper, we review theoretical conceptions of the relation between motivation and emotion and discuss the role of emotion in understanding student motivation in classrooms. We demonstrate that emotion is one of the best indicators…

  17. Unsupervised active learning based on hierarchical graph-theoretic clustering.

    PubMed

    Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve

    2009-10-01

    Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples. PMID:19336318
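
    Of the two graph-theoretic algorithms combined above, dominant-set clustering has a particularly compact formulation via discrete-time replicator dynamics. A minimal sketch (the similarity matrix, iteration count, and support threshold are illustrative, not the paper's configuration):

```python
def dominant_set(A, n_iter=200, support=1e-4):
    """Extract one dominant set from a symmetric, non-negative similarity
    matrix A (zero diagonal) by discrete-time replicator dynamics."""
    n = len(A)
    x = [1.0 / n] * n                       # start from the barycentre
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        avg = sum(x[i] * Ax[i] for i in range(n))
        if avg == 0:
            break
        x = [x[i] * Ax[i] / avg for i in range(n)]
    # members of the dominant set retain non-negligible support
    return [i for i, xi in enumerate(x) if xi > support]

# two groups: {0, 1, 2} strongly similar, {3, 4} only weakly linked to them
A = [[0.0, 0.9, 0.8, 0.1, 0.1],
     [0.9, 0.0, 0.85, 0.1, 0.1],
     [0.8, 0.85, 0.0, 0.1, 0.1],
     [0.1, 0.1, 0.1, 0.0, 0.7],
     [0.1, 0.1, 0.1, 0.7, 0.0]]
cluster = dominant_set(A)   # mass concentrates on the tight group
```

    For symmetric non-negative similarities this iteration monotonically increases the cluster cohesiveness x·Ax, so support drains away from weakly connected samples; a hierarchical scheme would peel off one dominant set, remove its members, and repeat.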

  18. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
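
    The Poisson log-likelihood underlying the equivalence result can be sketched as follows. The exponential nonlinearity, filter weights, and toy data are illustrative assumptions, not the paper's estimator; maximising this quantity over the weights is what the abstract identifies with maximising single-spike information.

```python
import math

def lnp_log_likelihood(stimuli, spike_counts, weights, dt=1.0):
    """Poisson log-likelihood of spike counts under a linear-nonlinear-Poisson
    model with an exponential nonlinearity: rate_t = exp(w . x_t)."""
    ll = 0.0
    for x, n in zip(stimuli, spike_counts):
        drive = sum(w * xi for w, xi in zip(weights, x))
        lam = math.exp(drive) * dt          # expected count in this bin
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll

# toy data: the cell fires when the (1-D) stimulus is positive
stimuli = [[1.0], [-1.0]]
counts = [3, 0]
ll_matched = lnp_log_likelihood(stimuli, counts, [2.0])
ll_flipped = lnp_log_likelihood(stimuli, counts, [-2.0])  # sign-flipped filter
```

    A filter aligned with the spike-triggering stimulus direction scores a higher likelihood than a misaligned one; when spiking is not well described as Poisson, this objective, and hence MID, loses its information-theoretic guarantee.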

  19. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
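
    Formula-based procedures of the kind compared in this study typically pair a shrinkage formula with a cross-validity formula. A sketch using the Wherry shrinkage formula and the Browne (1975) cross-validity formula (the inputs are illustrative, and the study's exact formula variants may differ):

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry shrinkage estimate of the population squared multiple
    correlation from a sample R^2 (n cases, k predictors)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def browne_cross_validity(rho2, n, k):
    """Browne (1975) formula-based estimate of the squared cross-validity,
    given an estimate rho2 of the population R^2."""
    return ((n - k - 3) * rho2 ** 2 + rho2) / ((n - 2 * k - 2) * rho2 + k)

# illustrative values: sample R^2 = 0.30 with 200 cases and 10 predictors
pop_r2 = wherry_adjusted_r2(0.30, 200, 10)
cross_validity = browne_cross_validity(pop_r2, 200, 10)
```

    The expected ordering, sample R² above the shrunken population estimate, which in turn exceeds the cross-validity in a new sample, is what formula-based procedures deliver without the data splitting that traditional empirical cross-validation requires.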

  20. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience

    PubMed Central

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O’Byrne, David

    2015-01-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research. PMID:26601176

  1. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience.

    PubMed

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O'Byrne, David

    2015-05-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research.

  2. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation

    PubMed Central

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-01-01

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing prompt, informed strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured from a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided. PMID:27537889
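
    The extrapolation step, mapping modal responses measured at a remote sensor to an unmeasured hot spot through transformation coefficients, can be sketched as follows. In the paper the coefficients come from finite element mode shapes; the coefficients and signals below are illustrative numbers only.

```python
def reconstruct_hotspot(modal_responses, transform_coeffs):
    """Hot-spot strain history as a weighted sum of the modal responses
    measured at a remote sensor. Each coefficient plays the role of the
    ratio of modal strain at the hot spot to modal strain at the sensor."""
    n_samples = len(modal_responses[0])
    return [sum(c * m[t] for c, m in zip(transform_coeffs, modal_responses))
            for t in range(n_samples)]

# two modal responses (e.g. from EMD of the sensor signal) and their coefficients
modal = [[1.0, 0.0, -1.0],   # first-mode contribution at the sensor
         [0.5, 0.5, 0.5]]    # second-mode contribution at the sensor
hotspot = reconstruct_hotspot(modal, [2.0, 0.5])
```

    Decomposing first and transforming mode by mode is what lets a single remote sensor stand in for a gauge at an inaccessible critical location.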

  3. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation.

    PubMed

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-08-16

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing prompt, informed strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured from a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided.

  4. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation.

    PubMed

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-01-01

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing prompt, informed strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured from a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided. PMID:27537889

  5. Landscape influences on dispersal behaviour: a theoretical model and empirical test using the fire salamander, Salamandra infraimmaculata.

    PubMed

    Kershenbaum, Arik; Blank, Lior; Sinai, Iftach; Merilä, Juha; Blaustein, Leon; Templeton, Alan R

    2014-06-01

    When populations reside within a heterogeneous landscape, isolation by distance may not be a good predictor of genetic divergence if dispersal behaviour and therefore gene flow depend on landscape features. Commonly used approaches linking landscape features to gene flow include the least cost path (LCP), random walk (RW), and isolation by resistance (IBR) models. However, none of these models is likely to be the most appropriate for all species and in all environments. We compared the performance of LCP, RW and IBR models of dispersal with the aid of simulations conducted on artificially generated landscapes. We also applied each model to empirical data on the landscape genetics of the endangered fire salamander, Salamandra infraimmaculata, in northern Israel, where conservation planning requires an understanding of the dispersal corridors. Our simulations demonstrate that wide dispersal corridors of the low-cost environment facilitate dispersal in the IBR model, but inhibit dispersal in the RW model. In our empirical study, IBR explained the genetic divergence better than the LCP and RW models (partial Mantel correlation 0.413 for IBR, compared to 0.212 for LCP, and 0.340 for RW). Overall dispersal cost in salamanders was also well predicted by the landscape features slope steepness (76%) and elevation (24%). We conclude that fire salamander dispersal is well characterised by IBR predictions. Together with our simulation findings, these results indicate that wide dispersal corridors facilitate, rather than hinder, salamander dispersal. Comparison of genetic data to dispersal model outputs can be a useful technique in inferring dispersal behaviour from population genetic data.
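
    The least cost path model compared above is commonly computed with Dijkstra's algorithm on a grid of per-cell traversal costs. A minimal 4-connected sketch (the grid values are illustrative, not the study's resistance surface):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Accumulated least cost between two cells of a resistance grid
    (4-connected Dijkstra; the start cell's own cost is included)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                          # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# a low-cost corridor (1s) through high-cost terrain (9s)
grid = [[1, 9, 9],
        [1, 9, 9],
        [1, 1, 1]]
corridor_cost = least_cost_path(grid, (0, 0), (2, 2))
```

    LCP reduces connectivity to this single optimal route, whereas IBR integrates over all routes (as in circuit theory), which is why corridor width matters for IBR but not for LCP.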

  6. Fleet Fatality Risk and its Sensitivity to Vehicle Mass Change in Frontal Vehicle-to-Vehicle Crashes, Using a Combined Empirical and Theoretical Model.

    PubMed

    Shi, Yibing; Nusholtz, Guy S

    2015-11-01

    The objective of this study is to analytically model the fatality risk in frontal vehicle-to-vehicle crashes of the current vehicle fleet, and its sensitivity to vehicle mass change. A model is built upon an empirical risk ratio-mass ratio relationship from field data and a theoretical mass ratio-velocity change ratio relationship dictated by conservation of momentum. The fatality risk of each vehicle is averaged over the closing velocity distribution to arrive at the mean fatality risks. The risks of the two vehicles are summed and averaged over all possible crash partners to find the societal mean fatality risk associated with a subject vehicle of a given mass from a fleet specified by a mass distribution function. Based on risk exponent and mass distribution from a recent fleet, the subject vehicle mean fatality risk is shown to increase, while at the same time that for the partner vehicles decreases, as the mass of the subject vehicle decreases. The societal mean fatality risk, the sum of these, incurs a penalty with respect to a fleet with complete mass equality. This penalty reaches its minimum (~8% for the example fleet) for crashes with a subject vehicle whose mass is close to the fleet mean mass. The sensitivity, i.e., the rate of change of the societal mean fatality risk with respect to the mass of the subject vehicle is assessed. Results from two sets of fully regression-based analyses, Kahane (2012) and Van Auken and Zellner (2013), are approximately compared with the current result. The general magnitudes of the results are comparable, but differences exist at a more detailed level. The subject vehicle-oriented societal mean fatality risk is averaged over all possible subject vehicle masses of a given fleet to obtain the overall mean fatality risk of the fleet. It is found to increase approximately linearly at a rate of about 0.8% for each 100 lb decrease in mass of all vehicles in the fleet. PMID:26660748
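
    The theoretical half of the model, the mass ratio-velocity change ratio relationship dictated by conservation of momentum, can be sketched as follows for a perfectly plastic head-on collision (the empirical risk ratio-mass ratio component is not reproduced here):

```python
def delta_vs(m1, m2, closing_speed):
    """Velocity changes of two vehicles in a perfectly plastic head-on
    collision; conservation of momentum gives dv1/dv2 = m2/m1."""
    dv1 = m2 / (m1 + m2) * closing_speed
    dv2 = m1 / (m1 + m2) * closing_speed
    return dv1, dv2

# the lighter vehicle absorbs the larger share of the closing speed
dv_light, dv_heavy = delta_vs(1000.0, 2000.0, 30.0)  # masses in kg, speed in m/s
```

    Because each vehicle's fatality risk rises with its own delta-v, reducing one vehicle's mass lowers its partner's delta-v while raising its own, which is the trade-off the societal mean fatality risk in the study aggregates.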

  7. Landfill modelling in LCA - a contribution based on empirical data.

    PubMed

    Obersteiner, Gudrun; Binner, Erwin; Mostbauer, Peter; Salhofer, Stefan

    2007-01-01

    Landfills at various stages of development, depending on their age and location, can be found throughout Europe. Facility types range from uncontrolled dumpsites to highly engineered facilities with leachate and gas management. In addition, some landfills are designed to receive untreated waste, while others can receive incineration residues (MSWI) or residues after mechanical biological treatment (MBT). The dimension, type and duration of the emissions from landfills depend on the quality of the disposed waste, the technical design, and the location of the landfill. Environmental impacts are produced by the leachate (heavy metals, organic loading), emissions into the air (CH(4), hydrocarbons, halogenated hydrocarbons) and from the energy or fuel requirements for the operation of the landfill (SO(2) and NO(x) from the production of electricity from fossil fuels). Including landfilling in a life-cycle assessment (LCA) entails several methodological questions (multi-input process, site-specific influence, time dependency). Additionally, no experience is available regarding the mid-term behaviour (decades) of the relatively new types of landfill (MBT landfill, landfill for residues from MSWI). The present paper focuses on two main issues concerning modelling of landfills in LCA: Firstly, it is an acknowledged fact that emissions from landfills may prevail for a very long time, often thousands of years or longer. The choice of time frame in the LCA of landfilling may therefore clearly affect the results. Secondly, the reliability of results obtained through a life-cycle assessment depends on the availability and quality of Life Cycle Inventory (LCI) data. Therefore the choice of the general approach, using a multi-input inventory tool versus empirical results, may also influence the results. In this paper the different approaches concerning time horizon and LCI will be introduced and discussed. In the application of empirical results, the presence of

  8. Theoretical detection ranges for acoustic based manatee avoidance technology.

    PubMed

    Phillips, Richard; Niezrecki, Christopher; Beusse, Diedrich O

    2006-07-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. To reduce the number of collisions, warning systems based upon detecting manatee vocalizations have been proposed. One aspect of the feasibility of an acoustically based warning system relies upon the distance at which a manatee vocalization is detectable. Assuming a mixed spreading model, this paper presents a theoretical analysis of the system detection capabilities operating within various background and watercraft noise conditions. This study combines measured source levels of manatee vocalizations with the modeled acoustic properties of manatee habitats to develop a method for determining the detection range and hydrophone spacing requirements for acoustic based manatee avoidance technologies. In quiet environments (background noise approximately 70 dB) it was estimated that manatee vocalizations are detectable at approximately 250 m, with a 6 dB detection threshold. In louder environments (background noise approximately 100 dB) the detection range drops to 2.5 m. In a habitat with 90 dB of background noise, a passing boat with a maximum noise floor of 120 dB would be the limiting factor when it is within approximately 100 m of a hydrophone. The detection range was also found to be strongly dependent on the manatee vocalization source level.
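
    The mixed spreading model mentioned above is commonly written as a transmission loss TL = 15·log10(r), intermediate between spherical (20·log10 r) and cylindrical (10·log10 r) spreading. A sketch of the resulting detection range follows; the 112 dB source level is an illustrative value chosen to be consistent with the ranges quoted in the abstract, not necessarily the paper's figure.

```python
def detection_range(source_level, noise_level, detection_threshold=6.0):
    """Range (m) at which a call is just detectable under mixed-spreading
    transmission loss TL = 15*log10(r): detect when SL - TL >= NL + DT."""
    excess = source_level - noise_level - detection_threshold
    return 10.0 ** (excess / 15.0)

# illustrative source level of 112 dB, 6 dB detection threshold
quiet = detection_range(112.0, 70.0)    # quiet habitat, ~250 m
noisy = detection_range(112.0, 100.0)   # loud habitat, ~2.5 m
```

    The strong sensitivity to source level follows directly from the exponent: every 15 dB of signal excess multiplies the detection range by ten, which also drives the hydrophone spacing requirement.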

  9. Theoretical detection ranges for acoustic based manatee avoidance technology.

    PubMed

    Phillips, Richard; Niezrecki, Christopher; Beusse, Diedrich O

    2006-07-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. To reduce the number of collisions, warning systems based upon detecting manatee vocalizations have been proposed. One aspect of the feasibility of an acoustically based warning system relies upon the distance at which a manatee vocalization is detectable. Assuming a mixed spreading model, this paper presents a theoretical analysis of the system detection capabilities operating within various background and watercraft noise conditions. This study combines measured source levels of manatee vocalizations with the modeled acoustic properties of manatee habitats to develop a method for determining the detection range and hydrophone spacing requirements for acoustic based manatee avoidance technologies. In quiet environments (background noise approximately 70 dB) it was estimated that manatee vocalizations are detectable at approximately 250 m, with a 6 dB detection threshold. In louder environments (background noise approximately 100 dB) the detection range drops to 2.5 m. In a habitat with 90 dB of background noise, a passing boat with a maximum noise floor of 120 dB would be the limiting factor when it is within approximately 100 m of a hydrophone. The detection range was also found to be strongly dependent on the manatee vocalization source level. PMID:16875213

  10. The Demand for Cigarettes as Derived from the Demand for Weight Loss: A Theoretical and Empirical Investigation.

    PubMed

    Cawley, John; Dragone, Davide; Von Hinke Kessler Scholder, Stephanie

    2016-01-01

    This paper offers an economic model of smoking and body weight and provides new empirical evidence on the extent to which the demand for cigarettes is derived from the demand for weight loss. In the model, smoking causes weight loss in addition to having direct utility benefits and direct health consequences. It predicts that some individuals smoke for weight loss and that the practice is more common among those who consider themselves overweight and those who experience greater disutility from excess weight. We test these hypotheses using nationally representative data in which adolescents are directly asked whether they smoke to control their weight. We find that, among teenagers who smoke frequently, 46% of girls and 30% of boys are smoking in part to control their weight. As predicted by the model, this practice is significantly more common among those who describe themselves as too fat and among groups that tend to experience greater disutility from obesity. We conclude by discussing the implications of these findings for tax policy; specifically, the demand for cigarettes is less price elastic among those who smoke for weight loss, all else being equal. Public health efforts to reduce smoking initiation and encourage cessation may wish to design campaigns to alter the derived nature of cigarette demand, especially among adolescent girls. PMID:25346511

  11. Attachment-Based Family Therapy: A Review of the Empirical Support.

    PubMed

    Diamond, Guy; Russon, Jody; Levy, Suzanne

    2016-09-01

    Attachment-based family therapy (ABFT) is an empirically supported treatment designed to capitalize on the innate, biological desire for meaningful and secure relationships. The therapy is grounded in attachment theory and provides an interpersonal, process-oriented, trauma-focused approach to treating adolescent depression, suicidality, and trauma. Although a process-oriented therapy, ABFT offers a clear structure and road map to help therapists quickly address attachment ruptures that lie at the core of family conflict. Several clinical trials and process studies have demonstrated empirical support for the model and its proposed mechanism of change. This article provides an overview of the clinical model and the existing empirical support for ABFT.

  12. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling of diffuse reflectance from tissue.
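    The lookup-table step described above can be sketched as bilinear interpolation over a grid of simulated reflectance values. The grid axes (absorption and reduced scattering coefficients) and the table values below are hypothetical stand-ins for Monte Carlo output; the surface-fitting step that yields the paper's empirical formula is not reproduced here:

    ```python
    def bilinear_lookup(table, mua_axis, musp_axis, mua, musp):
        """Bilinearly interpolate diffuse reflectance from a lookup table.

        table[i][j] holds the reflectance simulated at absorption
        coefficient mua_axis[i] and reduced scattering coefficient
        musp_axis[j]; both axes must be sorted ascending.
        """
        def bracket(axis, x):
            # Find the cell containing x and the fractional position inside it.
            for i in range(len(axis) - 1):
                if axis[i] <= x <= axis[i + 1]:
                    return i, (x - axis[i]) / (axis[i + 1] - axis[i])
            raise ValueError("query outside table range")

        i, u = bracket(mua_axis, mua)
        j, v = bracket(musp_axis, musp)
        return ((1 - u) * (1 - v) * table[i][j]
                + u * (1 - v) * table[i + 1][j]
                + (1 - u) * v * table[i][j + 1]
                + u * v * table[i + 1][j + 1])
    ```

    A fitted closed-form surface, as in the paper, then replaces the table once its error against the simulated grid is acceptably small.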

  13. Empirical Analysis and Refinement of Expert System Knowledge Bases

    PubMed Central

    Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen

    1986-01-01

    Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.

  14. Deep in Data. Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    This paper describes progress toward developing a usable, standardized, empirical data-based software accuracy test suite using home energy consumption and building description data. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This could allow for modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data.

  15. Organizing the public health-clinical health interface: theoretical bases.

    PubMed

    St-Pierre, Michèle; Reinharz, Daniel; Gauthier, Jacques-Bernard

    2006-01-01

    This article addresses the issue of the interface between public health and clinical health within the context of the search for networking approaches geared to a more integrated delivery of health services. The articulation of an operative interface is complicated by the fact that the definition of networking modalities involves complex intra- and interdisciplinary and intra- and interorganizational systems across which a new transversal dynamics of intervention practices and exchanges between service structures must be established. A better understanding of the situation is reached by shedding light on the rationale underlying the organizational methods that form the bases of the interface between these two sectors of activity. The Quebec experience demonstrates that neither the structural-functionalist approach, which emphasizes remodelling establishment structures and functions as determinants of integration, nor the structural-constructivist approach, which prioritizes distinct fields of practice in public health and clinical health, adequately serves the purpose of networking and integration. Consequently, a theoretical reframing is imperative. In this regard, structuration theory, which fosters the simultaneous study of methods of inter-structure coordination and inter-actor cooperation, paves the way for a better understanding of the situation and, in turn, for the emergence of new integration possibilities.

  16. An Empirical Analysis of Knowledge Based Hypertext Navigation

    PubMed Central

    Snell, J.R.; Boyle, C.

    1990-01-01

    Our purpose is to investigate the effectiveness of knowledge-based navigation in a dermatology hypertext network. The chosen domain is a set of dermatology class notes implemented in Hypercard and SINS. The study measured time, number of moves, and success rates for subjects to find solutions to ten questions. The subjects were required to navigate within a dermatology hypertext network in order to find the solution to a question. Our results indicate that knowledge-based navigation can assist the user in finding information of interest in fewer node visits (moves) than traditional button-based browsing or keyword searching. The time necessary to find an item of interest was lower for the traditional methods. There was no difference in success rates between the two test groups.

  17. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    ERIC Educational Resources Information Center

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  18. Empirically Based School Interventions Targeted at Academic and Mental Health Functioning

    ERIC Educational Resources Information Center

    Hoagwood, Kimberly E.; Olin, S. Serene; Kerker, Bonnie D.; Kratochwill, Thomas R.; Crowe, Maura; Saka, Noa

    2007-01-01

    This review examines empirically based studies of school-based mental health interventions. The review identified 64 out of more than 2,000 articles published between 1990 and 2006 that met methodologically rigorous criteria for inclusion. Of these 64 articles, only 24 examined both mental health "and" educational outcomes. The majority of…

  19. Semantically-Based Child Grammars: Some Empirical Inadequacies.

    ERIC Educational Resources Information Center

    Hyams, Nina

    It is argued that the general consensus of researchers of child language that the grammatical system underlying the child's earliest multiword utterances is semantically-based, fails to provide an adequate description of even the earliest multiword utterances, and that the most sparing account of the acquisition data must include reference to…

  20. Empirical comparison of structure-based pathway methods

    PubMed Central

    Jaakkola, Maria K.

    2016-01-01

    Multiple methods have been proposed to estimate pathway activities from expression profiles, and yet, there is not enough information available about the performance of those methods. This makes selection of a suitable tool for pathway analysis difficult. Although methods based on simple gene lists have remained the most common approach, various methods that also consider pathway structure have emerged. To provide practical insight about the performance of both list-based and structure-based methods, we tested six different approaches to estimate pathway activities in two different case study settings of different characteristics. The first case study setting involved six renal cell cancer data sets, and the differences between expression profiles of case and control samples were relatively large. The second case study setting involved four type 1 diabetes data sets, and the profiles of case and control samples were more similar to each other. In general, there were marked differences in the outcomes of the different pathway tools even with the same input data. In the cancer studies, the results of a tested method were typically consistent across the different data sets, yet different between the methods. In the more challenging diabetes studies, almost all of the tested methods detected only a few pathways, if any, as significant. PMID:26197809

  1. Polymer electrolyte membrane fuel cell fault diagnosis based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Damour, Cédric; Benne, Michel; Grondin-Perez, Brigitte; Bessafi, Miloud; Hissel, Daniel; Chabriat, Jean-Pierre

    2015-12-01

    A diagnosis tool for water management is relevant to improve the reliability and lifetime of polymer electrolyte membrane fuel cells (PEMFCs). This paper presents a novel signal-based diagnosis approach, based on Empirical Mode Decomposition (EMD), dedicated to PEMFCs. EMD is an empirical, intuitive, direct and adaptive signal processing method, without pre-determined basis functions. The proposed diagnosis approach relies on the decomposition of FC output voltage to detect and isolate flooding and drying faults. The low computational cost of EMD, the reduced number of required measurements, and the high accuracy of flooding and drying fault diagnosis make this approach a promising online diagnosis tool for PEMFC degraded modes management.

  2. An Empirical Investigation of a Theoretically Based Measure of Perceived Wellness

    ERIC Educational Resources Information Center

    Harari, Marc J.; Waehler, Charles A.; Rogers, James R.

    2005-01-01

    The Perceived Wellness Survey (PWS; T. Adams, 1995; T. Adams, J. Bezner, & M. Steinhardt, 1997) is a recently developed instrument intended to operationalize the comprehensive Perceived Wellness Model (T. Adams, J. Bezner, & M. Steinhardt, 1997), an innovative model that attempts to include the balance of multiple life activities in its evaluation…

  3. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1996-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.

  4. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    PubMed

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956
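    The Behavioral Recognition task described above can be sketched compactly: score the observed action trace under each candidate strategy's PFA and pick the most likely one. The automaton encoding, the single start state, and the toy "patrol"/"clean" strategies in the usage test are illustrative assumptions, not structures from the paper:

    ```python
    import math

    def sequence_log_likelihood(pfa, symbols):
        """Log-likelihood of an observation sequence under a PFA.

        `pfa` maps a state to {symbol: (next_state, probability)};
        a single start state 0 is assumed for simplicity.
        """
        state, loglik = 0, 0.0
        for s in symbols:
            if s not in pfa[state]:
                return float("-inf")  # sequence impossible under this model
            state, p = pfa[state][s]
            loglik += math.log(p)
        return loglik

    def recognize(models, symbols):
        """Behavioral Recognition: return the name of the strategy whose
        PFA best explains the observed trace."""
        return max(models,
                   key=lambda name: sequence_log_likelihood(models[name], symbols))
    ```

    Behavioral Cloning would additionally require learning the transition probabilities from observed traces (e.g., by state-merging algorithms such as ALERGIA), which this sketch does not attempt.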

  5. Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    2005-01-01

    A ground based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength, and polarity. Determination of the location of the lightning strike uses algorithms based on long used techniques of triangulation. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids and with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers. Much of that area is covered by rain forests. Thus knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.
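    The detection-efficiency idea sketched in this abstract can be illustrated with a Monte Carlo toy model: strikes are scattered over the service area, received amplitude falls off geometrically with range, and an event counts as located only when enough sensors see it above threshold. The 1/r falloff, the sensor geometry, and the thresholds below are illustrative assumptions; the actual process also accounts for environmental attenuation, which this sketch omits:

    ```python
    import math
    import random

    def detection_efficiency(sensors, region, source_amp, threshold,
                             n_events=20000, min_sensors=3, seed=1):
        """Monte Carlo estimate of a lightning network's detection efficiency.

        Strikes are drawn uniformly over region = (xmin, xmax, ymin, ymax);
        the received signal is modeled as source amplitude divided by range
        (geometric 1/r dispersion only).  An event counts as detected when
        at least `min_sensors` sensors receive an amplitude above
        `threshold`, as location solutions need multiple arrivals.
        """
        rng = random.Random(seed)
        xmin, xmax, ymin, ymax = region
        detected = 0
        for _ in range(n_events):
            x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
            hits = sum(
                1 for sx, sy in sensors
                if source_amp / max(math.hypot(x - sx, y - sy), 1.0) > threshold
            )
            if hits >= min_sensors:
                detected += 1
        return detected / n_events
    ```

    Mapping this efficiency over a grid of sub-regions, instead of averaging over the whole area, would give the DE-corrected strike density maps the report refers to.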

  6. Implementing community-based provider participation in research: an empirical study

    PubMed Central

    2012-01-01

    Background Since 2003, the United States National Institutes of Health (NIH) has sought to restructure the clinical research enterprise in the United States by promoting collaborative research partnerships between academically-based investigators and community-based physicians. By increasing community-based provider participation in research (CBPPR), the NIH seeks to advance the science of discovery by conducting research in clinical settings where most people get their care, and accelerate the translation of research results into everyday clinical practice. Although CBPPR is seen as a promising strategy for promoting the use of evidence-based clinical services in community practice settings, few empirical studies have examined the organizational factors that facilitate or hinder the implementation of CBPPR. The purpose of this study is to explore the organizational start-up and early implementation of CBPPR in community-based practice. Methods We used longitudinal, case study research methods and an organizational model of innovation implementation to theoretically guide our study. Our sample consisted of three community practice settings that recently joined the National Cancer Institute’s (NCI) Community Clinical Oncology Program (CCOP) in the United States. Data were gathered through site visits, telephone interviews, and archival documents from January 2008 to May 2011. Results The organizational model for innovation implementation was useful in identifying and investigating the organizational factors influencing start-up and early implementation of CBPPR in CCOP organizations. In general, the three CCOP organizations varied in the extent to which they achieved consistency in CBPPR over time and across physicians. All three CCOP organizations demonstrated mixed levels of organizational readiness for change. Hospital management support and resource availability were limited across CCOP organizations early on, although they improved in one CCOP organization

  7. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1995-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from the environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques, PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. When there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.

  8. Towards an Empirically Based Parametric Explosion Spectral Model

    SciTech Connect

    Ford, S R; Walter, W R; Ruppert, S; Matzel, E; Hauk, T; Gok, R

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.

  9. Effect of balancing selection on spatial genetic structure within populations: theoretical investigations on the self-incompatibility locus and empirical studies in Arabidopsis halleri

    PubMed Central

    Leducq, J-B; Llaurens, V; Castric, V; Saumitou-Laprade, P; Hardy, O J; Vekemans, X

    2011-01-01

    The effect of selection on patterns of genetic structure within and between populations may be studied by contrasting observed patterns at the genes targeted by selection with those of unlinked neutral marker loci. Local directional selection on target genes will produce stronger population genetic structure than at neutral loci, whereas the reverse is expected for balancing selection. However, theoretical predictions on the intensity of this signal under precise models of balancing selection are still lacking. Using negative frequency-dependent selection acting on self-incompatibility systems in plants as a model of balancing selection, we investigated the effect of such selection on patterns of spatial genetic structure within a continuous population. Using numerical simulations, we tested the effect of the type of self-incompatibility system, the number of alleles at the self-incompatibility locus and the dominance interactions among them, the extent of gene dispersal, and the immigration rate on spatial genetic structure at the selected locus and at unlinked neutral loci. We confirm that frequency-dependent selection is expected to reduce the extent of spatial genetic structure as compared to neutral loci, particularly in situations with low number of alleles at the self-incompatibility locus, high frequency of codominant interactions among alleles, restricted gene dispersal and restricted immigration from outside populations. Hence the signature of selection on spatial genetic structure is expected to vary across species and populations, and we show that empirical data from the literature as well as data reported here on three natural populations of the herb Arabidopsis halleri confirm these theoretical results. PMID:20531450
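    The qualitative effect described here, negative frequency-dependent selection pushing allele frequencies toward uniformity, can be seen in a minimal deterministic recursion. The linear fitness function below is a generic NFDS toy model, not the sporophytic/gametophytic self-incompatibility dynamics with dominance interactions simulated in the study:

    ```python
    def nfds_step(freqs, s=0.5):
        """One generation of negative frequency-dependent selection.

        Fitness of allele i is taken as w_i = 1 + s * (1/k - p_i), so
        alleles rarer than the uniform frequency 1/k gain an advantage.
        Frequencies are renormalized by the mean fitness.
        """
        k = len(freqs)
        w = [1 + s * (1.0 / k - p) for p in freqs]
        wbar = sum(p * wi for p, wi in zip(freqs, w))
        return [p * wi / wbar for p, wi in zip(freqs, w)]

    def iterate(freqs, generations=200, s=0.5):
        """Run the recursion; frequencies converge toward 1/k."""
        for _ in range(generations):
            freqs = nfds_step(freqs, s)
        return freqs
    ```

    Because rare alleles are protected rather than lost, neighboring demes retain overlapping allele sets longer than at neutral loci, which is the mechanism behind the reduced spatial genetic structure reported above.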

  10. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  11. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    EPA Science Inventory

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  12. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…

  13. An Empirically-Based Statewide System for Identifying Quality Pre-Kindergarten Programs

    ERIC Educational Resources Information Center

    Williams, Jeffrey M.; Landry, Susan H.; Anthony, Jason L.; Swank, Paul R.; Crawford, April D.

    2012-01-01

    This study presents an empirically-based statewide system that links information about pre-kindergarten programs with children's school readiness scores to certify pre-kindergarten classrooms as promoting school readiness. Over 8,000 children from 1,255 pre-kindergarten classrooms were followed longitudinally for one year. Pre-kindergarten quality…

  14. Empirically Derived Consequences: A Data-Based Method for Prescribing Treatments for Destructive Behavior.

    ERIC Educational Resources Information Center

    Fisher, Wayne; And Others

    1994-01-01

    This study used a data-based assessment to identify reinforcers and punishers for successful treatment of two children with severe destructive behaviors. Results suggested that empirically derived consequences may be useful in decreasing destructive behavior when a functional assessment is inconclusive or suggests that internal stimuli are…

  15. Feasibility of an Empirically Based Program for Parents of Preschoolers with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…

  16. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via a MST model) and non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  17. Development of an Empirically Based Questionnaire to Investigate Young Students' Ideas about Nature of Science

    ERIC Educational Resources Information Center

    Chen, Sufen; Chang, Wen-Hua; Lieu, Sang-Chong; Kao, Huey-Lien; Huang, Mao-Tsai; Lin, Shu-Fen

    2013-01-01

    This study developed an empirically based questionnaire to monitor young learners' conceptions of nature of science (NOS). The questionnaire, entitled Students' Ideas about Nature of Science (SINOS), measured views on theory-ladenness, use of creativity and imagination, tentativeness of scientific knowledge, durability of scientific knowledge,…

  18. Implementing Evidence-Based Practice: A Review of the Empirical Research Literature

    ERIC Educational Resources Information Center

    Gray, Mel; Joy, Elyssa; Plath, Debbie; Webb, Stephen A.

    2013-01-01

    The article reports on the findings of a review of empirical studies examining the implementation of evidence-based practice (EBP) in the human services. Eleven studies were located that defined EBP as a research-informed, clinical decision-making process and identified barriers and facilitators to EBP implementation. A thematic analysis of the…

  19. Training Community-Based Professionals to Implement an Empirically Supported Parenting Program

    ERIC Educational Resources Information Center

    Fox, Robert A.; Duffy, Kathleen M.; Keller, Kathryn M.

    2006-01-01

    Professionals representing 14 community-based organizations were trained at three different sites serving urban and rural families to implement an empirically supported parenting program for families of young children with challenging behaviors. Of the 44 practitioners trained, 23 successfully completed the program, which involved passing a…

  20. School-Based Management and Paradigm Shift in Education an Empirical Study

    ERIC Educational Resources Information Center

    Cheng, Yin Cheong; Mok, Magdalena Mo Ching

    2007-01-01

    Purpose: This paper aims to report empirical research investigating how school-based management (SBM) and paradigm shift (PS) in education are closely related to teachers' student-centered teaching and students' active learning in a sample of Hong Kong secondary schools. Design/methodology/approach: It is a cross-sectional survey research…

  1. Empirically-Based Predictions and Their Generalizability for Decision-Making in Occupational Education

    ERIC Educational Resources Information Center

    Passmore, David L.; Irvin, Donald E.

    1974-01-01

    The generation of empirically-based equations is within the grasp of most researchers in occupational education. This paper was designed to develop a sensitivity to the need for cross-validation in these circumstances. Program CROSVAL is a means of bridging the gap between the need for, and mechanization of, cross validation. (Author/DS)

  2. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  3. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations shows general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.

  4. Attachment-Based Family Therapy: A Review of the Empirical Support.

    PubMed

    Diamond, Guy; Russon, Jody; Levy, Suzanne

    2016-09-01

    Attachment-based family therapy (ABFT) is an empirically supported treatment designed to capitalize on the innate, biological desire for meaningful and secure relationships. The therapy is grounded in attachment theory and provides an interpersonal, process-oriented, trauma-focused approach to treating adolescent depression, suicidality, and trauma. Although a process-oriented therapy, ABFT offers a clear structure and road map to help therapists quickly address attachment ruptures that lie at the core of family conflict. Several clinical trials and process studies have demonstrated empirical support for the model and its proposed mechanism of change. This article provides an overview of the clinical model and the existing empirical support for ABFT. PMID:27541199

  5. Theoretical Investigations of Plasma-Based Accelerators and Other Advanced Accelerator Concepts

    SciTech Connect

    Shuets, G.

    2004-05-21

    The focus of this work was the development of plasma-based and structure-based accelerating concepts, including laser-plasma, plasma-channel, and microwave-driven plasma accelerators.

  6. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    SciTech Connect

    Chen, C.; Wolf, R.A.; Harel, M.; Karty, J.L.

    1982-08-01

    Using substorm currents derived from the Rice computer simulation of the substorm event of September 19, 1976, we have computed theoretical magnetograms as a function of universal time for various stations. A theoretical Dst has also been computed. Our computed magnetograms were obtained by integrating the Biot-Savart law over a maze of approximately 2700 wires and bands that carry the ring currents, the Birkeland currents, and the horizontal ionospheric currents. Ground currents and dynamo currents were neglected. Computed contributions to the magnetic field perturbation from eleven different kinds of currents are displayed (e.g., ring currents, northern hemisphere Birkeland currents). First, overall agreement of theory and data is generally satisfactory, especially for stations at high and mid-magnetic latitudes. Second, model results suggest that the ground magnetic field perturbations arise from very complicated combinations of different kinds of currents and that the magnetic field disturbances due to different but related currents often cancel each other, despite the fact that complicated inhomogeneous conductivities in our model prevent rigorous application of Fukushima's theorem. Third, both the theoretical and observed Dst decrease during the expansion phase of the substorm, but data indicate that Dst relaxes back toward its initial value within about an hour after the peak of the substorm. Fourth, the dawn-dusk asymmetry in the horizontal component of magnetic field disturbance at low latitudes in a substorm is essentially due to a net downward Birkeland current at noon, net upward current at midnight, and generally antisunward flowing electrojets; it is not due to a physical partial ring current injected into the duskside of the inner magnetosphere.
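
    The Biot-Savart summation described above can be sketched as a discrete sum over straight wire segments. The geometry and current in this minimal sketch are invented for a sanity check and are not taken from the Rice simulation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart(vertices, current, obs):
    """Magnetic field (tesla) at `obs` from a polyline wire carrying
    `current` (A), summed segment by segment with the midpoint rule."""
    dl = np.diff(vertices, axis=0)               # segment vectors
    mid = 0.5 * (vertices[:-1] + vertices[1:])   # segment midpoints
    r = obs - mid                                # midpoint -> observer
    r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
    dB = MU0 / (4.0 * np.pi) * current * np.cross(dl, r) / r3
    return dB.sum(axis=0)

# Sanity check: a long straight wire along z, observed 1 m away, should give
# nearly the infinite-wire field mu0*I/(2*pi*d) = 2e-7 T for I = 1 A, d = 1 m.
z = np.linspace(-1000.0, 1000.0, 40001)
wire = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
B = biot_savart(wire, 1.0, np.array([1.0, 0.0, 0.0]))
```

    A magnetogram model on the scale described above would accumulate such sums over roughly 2700 wire and band elements at each station and universal time step.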

  7. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit versus implicit discretization of the PDE in time, and approaches to the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competing techniques.
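
    The Perona-Malik scheme referenced above can be sketched in a few lines. This toy version uses an explicit time discretization with periodic boundaries and the exponential diffusivity; the parameter values are illustrative, not those tuned in the study:

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=0.1, dt=0.2):
    """Explicit-time non-linear (Perona-Malik) diffusion.

    The diffusivity g(s) = exp(-(s/kappa)^2) shrinks near strong
    gradients, so edges are smoothed far less than flat regions.
    """
    def g(d):
        return np.exp(-(d / kappa) ** 2)

    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic boundaries via roll)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Noisy vertical step edge: diffusion should suppress the noise
# while leaving the step largely intact.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = perona_malik(noisy)
```

    With dt = 0.2 the explicit scheme stays stable (the four-neighbour update requires dt <= 0.25 when the diffusivity is at most 1).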

  8. Effectiveness of a theoretically-based judgment and decision making intervention for adolescents.

    PubMed

    Knight, Danica K; Dansereau, Donald F; Becan, Jennifer E; Rowan, Grace A; Flynn, Patrick M

    2015-05-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37 % female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one's own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors.

  9. Effectiveness of a Theoretically-Based Judgment and Decision Making Intervention for Adolescents

    PubMed Central

    Knight, Danica K.; Dansereau, Donald F.; Becan, Jennifer E.; Rowan, Grace A.; Flynn, Patrick M.

    2014-01-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37% female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one’s own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors. PMID:24760288

  10. TheoReTS - An information system for theoretical spectra based on variational predictions from molecular potential energy and dipole moment surfaces

    NASA Astrophysics Data System (ADS)

    Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.

    2016-09-01

    Knowledge of intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres, brown dwarfs and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based, rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, with all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six atomic molecules, including phosphine, methane, ethylene, silane, methyl-fluoride, and their isotopic species 13CH4, 12CH3D, 12CH2D2, 12CD4, 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface following Model-View-Controller architectural principles. The full-featured web application is written in PHP using the Yii framework and C++ software modules. For very large high-temperature line lists, data compression is implemented for fast interactive spectra simulations of the quasi-continual absorption due to the high line density. Applications for the TheoReTS may
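
    The line-shape convolution underlying such spectra simulations can be illustrated with a toy absorption-coefficient calculation. The line list below is invented, and only Gaussian and Lorentzian shapes are shown (the system described above also supports Voigt profiles and several apparatus functions):

```python
import numpy as np

def gaussian(nu, nu0, hwhm):
    """Area-normalized Gaussian line shape on a wavenumber grid (cm^-1)."""
    sigma = hwhm / np.sqrt(2.0 * np.log(2.0))
    return np.exp(-0.5 * ((nu - nu0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def lorentzian(nu, nu0, hwhm):
    """Area-normalized Lorentzian line shape (cm^-1)."""
    return (hwhm / np.pi) / ((nu - nu0) ** 2 + hwhm ** 2)

def absorption_coefficient(nu, lines, shape, hwhm=0.05):
    """Sum intensity-weighted line profiles over a (toy) line list."""
    k = np.zeros_like(nu)
    for nu0, intensity in lines:
        k += intensity * shape(nu, nu0, hwhm)
    return k

nu = np.linspace(2990.0, 3010.0, 20001)   # wavenumber grid, cm^-1
lines = [(2998.0, 1.0), (3001.5, 0.4)]    # invented (position, intensity) pairs
k_gauss = absorption_coefficient(nu, lines, gaussian)
k_lorentz = absorption_coefficient(nu, lines, lorentzian)
```

    Because each profile is area-normalized, the integrated absorption recovers the summed line intensities, which is a convenient check on any line-shape implementation.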

  11. [Empirically based early intervention programs for children with autistic disorders - a selective literature review].

    PubMed

    Freitag, Christine M

    2010-07-01

    Autistic Disorders (AD) are characterized by impairments in social interaction and communication, as well as by stereotyped behaviors and interests. Early intervention programs in AD aim to improve several aspects of the child's abilities: joint attention, play abilities, language development, and especially social interaction and communication. In this review, based on a selective literature search, the best empirically supported early intervention programs are discussed, with a focus on their proven efficacy.

  12. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  13. A theoretical drought classification method for the multivariate drought index based on distribution properties of standardized drought indices

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Xia, Youlong; Ouyang, Wei; Shen, Xinyi

    2016-06-01

    Drought indices have been commonly used to characterize different properties of drought and the need to combine multiple drought indices for accurate drought monitoring has been well recognized. Based on linear combinations of multiple drought indices, a variety of multivariate drought indices have recently been developed for comprehensive drought monitoring to integrate drought information from various sources. For operational drought management, it is generally required to determine thresholds of drought severity for drought classification to trigger a mitigation response during a drought event to aid stakeholders and policy makers in decision making. Though the classification of drought categories based on univariate drought indices has been well studied, the drought classification method for the multivariate drought index has been less explored, mainly due to the lack of information about its distribution property. In this study, a theoretical drought classification method is proposed for the multivariate drought index, based on a linear combination of multiple indices. Based on the distribution property of the standardized drought index, a theoretical distribution of the linear combined index (LDI) is derived, which can be used for classifying drought with the percentile approach. Application of the proposed method for drought classification of LDI, based on the standardized precipitation index (SPI), standardized soil moisture index (SSI), and standardized runoff index (SRI), is illustrated with climate division data from California, United States. Results from comparison with empirical methods show satisfactory performance of the proposed method for drought classification.
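
    The linear-combination idea can be sketched as follows: since each standardized index is marginally standard normal, a weighted combination is normal with variance w'Σw, so its percentile follows from the normal CDF. The weights, correlations, and class cut-offs below are illustrative, not those of the paper:

```python
import math
import numpy as np

def ldi_percentile(indices, weights, corr):
    """Percentile of a linear combination of standardized drought indices.

    Each index is marginally N(0,1), so w'x is normal with variance
    w'Σw, and its percentile follows from the standard normal CDF."""
    w = np.asarray(weights, dtype=float)
    x = float(w @ np.asarray(indices, dtype=float))
    var = float(w @ np.asarray(corr, dtype=float) @ w)
    z = x / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)

def classify(p):
    """Illustrative percentile cut-offs for drought categories."""
    for label, cut in [("D4 exceptional", 0.02), ("D3 extreme", 0.05),
                       ("D2 severe", 0.10), ("D1 moderate", 0.20),
                       ("D0 abnormally dry", 0.30)]:
        if p <= cut:
            return label
    return "no drought"

# Assumed SPI/SSI/SRI correlations and equal weights (both invented)
corr = np.array([[1.0, 0.6, 0.5],
                 [0.6, 1.0, 0.7],
                 [0.5, 0.7, 1.0]])
p = ldi_percentile([-1.8, -1.2, -1.5], [1/3, 1/3, 1/3], corr)
category = classify(p)
```

    Without the re-standardization by w'Σw, a mean of correlated standardized indices would have variance below one and the percentile thresholds would misclassify drought severity.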

  14. The Theoretical Astrophysical Observatory: Cloud-based Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this, TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO's features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  15. Empirically Based Psychosocial Therapies for Schizophrenia: The Disconnection between Science and Practice.

    PubMed

    Shean, Glenn D

    2013-01-01

    Empirically validated psychosocial therapies for individuals diagnosed with schizophrenia were described in the report of the Schizophrenia Patient Outcomes Research Team (PORT, 2009). The PORT team identified eight psychosocial treatments: assertive community treatment, supported employment, cognitive behavioral therapy, family-based services, token economy, skills training, psychosocial interventions for alcohol and substance use disorders, and psychosocial interventions for weight management. PORT listings of empirically validated psychosocial therapies provide a useful template for the design of effective recovery-oriented mental health care systems. Unfortunately, surveys indicate that PORT listings have not been implemented in clinical settings. Obstacles to the implementation of PORT psychosocial therapy listings and suggestions for changes needed to foster implementation are discussed. Limitations of PORT therapy listings that are based on therapy outcome efficacy studies are discussed, and cross-cultural and course and outcome studies of correlates of recovery are summarized. PMID:23738068

  16. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  17. Measuring microscopic evolution processes of complex networks based on empirical data

    NASA Astrophysics Data System (ADS)

    Chi, Liping

    2015-04-01

    Aiming at understanding the microscopic mechanism of complex systems in the real world, we perform measurements that characterize the evolution properties of two empirical data sets. In the Autonomous Systems Internet data, the network size keeps growing although the system suffers a high rate of node deletion (r = 0.4) and link deletion (q = 0.81). However, the average degree remains almost unchanged over the whole time range. At each time step the external links attached to a new node number about c = 1.1 and the internal links added between existing nodes approximately m = 8. The Scientific Collaboration data are the cumulative result of all the authors from 1893 up to the considered year. There is no deletion of nodes or links, r = q = 0. The external and internal links at each time step are c = 1.04 and m = 0, correspondingly. The exponents of the degree distribution p(k) ∼ k^(-γ) of these two empirical data sets, γ_data, are in good agreement with those obtained theoretically, γ_theory. The results indicate that these evolution quantities may provide an insight into capturing the microscopic dynamical processes that govern the network topology.
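
    The comparison of empirical and theoretical degree-distribution exponents can be illustrated with a continuous maximum-likelihood estimate, γ̂ = 1 + n / Σ ln(k_i / k_min) (a Hill-style estimator). The synthetic sample below stands in for real degree data:

```python
import numpy as np

def fit_power_law_exponent(k, k_min):
    """Continuous MLE for p(k) ~ k^-gamma, valid for k >= k_min."""
    k = np.asarray(k, dtype=float)
    k = k[k >= k_min]
    return 1.0 + k.size / np.log(k / k_min).sum()

# Synthetic degrees drawn from p(k) ~ k^-2.5 by inverse-transform sampling:
# P(K > k) = (k / k_min)^-(gamma - 1)  =>  k = k_min * (1 - u)^(-1/(gamma - 1))
rng = np.random.default_rng(42)
gamma_true, k_min = 2.5, 1.0
u = rng.random(100_000)
k = k_min * (1.0 - u) ** (-1.0 / (gamma_true - 1.0))
gamma_hat = fit_power_law_exponent(k, k_min)
```

    With 100,000 samples the standard error of this estimator is roughly (γ - 1)/sqrt(n) ≈ 0.005, so the recovered exponent sits very close to the true value.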

  18. Why Problem-Based Learning Works: Theoretical Foundations

    ERIC Educational Resources Information Center

    Marra, Rose M.; Jonassen, David H.; Palmer, Betsy; Luft, Steve

    2014-01-01

    Problem-based learning (PBL) is an instructional method where student learning occurs in the context of solving an authentic problem. PBL was initially developed out of an instructional need to help medical school students learn their basic sciences knowledge in a way that would be more lasting while helping to develop clinical skills…

  19. Theoretical Foundations of "Competitive Team-Based Learning"

    ERIC Educational Resources Information Center

    Hosseini, Seyed Mohammad Hassan

    2010-01-01

    This paper serves as a platform to precisely substantiate the success of "Competitive Team-Based Learning" (CTBL) as an effective and rational educational approach. To that end, it brings to the fore part of the (didactic) theories and hypotheses which in one way or another delineate and confirm the mechanisms under which successful…

  20. Flavor symmetry based MSSM: Theoretical models and phenomenological analysis

    NASA Astrophysics Data System (ADS)

    Babu, K. S.; Gogoladze, Ilia; Raza, Shabbar; Shafi, Qaisar

    2014-09-01

    We present a class of supersymmetric models in which symmetry considerations alone dictate the form of the soft SUSY breaking Lagrangian. We develop a class of minimal models, denoted as sMSSM—for flavor symmetry-based minimal supersymmetric standard model—that respect a grand unified symmetry such as SO(10) and a non-Abelian flavor symmetry H which suppresses SUSY-induced flavor violation. Explicit examples are constructed with the flavor symmetry being gauged SU(2)H and SO(3)H with the three families transforming as 2+1 and 3 representations, respectively. A simple solution is found in the case of SU(2)H for suppressing the flavor violating D-terms based on an exchange symmetry. Explicit models based on SO(3)H without the D-term problem are developed. In addition, models based on discrete non-Abelian flavor groups are presented which are automatically free from D-term issues. The permutation group S3 with a 2+1 family assignment, as well as the tetrahedral group A4 with a 3 assignment are studied. In all cases, a simple solution to the SUSY CP problem is found, based on spontaneous CP violation leading to a complex quark mixing matrix. We develop the phenomenology of the resulting sMSSM, which is controlled by seven soft SUSY breaking parameters for both the 2+1 assignment and the 3 assignment of fermion families. These models are special cases of the phenomenological MSSM (pMSSM), but with symmetry restrictions. We discuss the parameter space of sMSSM compatible with LHC searches, B-physics constraints and dark matter relic abundance. Fine-tuning in these models is relatively mild, since all SUSY particles can have masses below about 3 TeV.

  1. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    NASA Astrophysics Data System (ADS)

    Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.

    2016-08-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to run on computing clusters. This method significantly reduces the demands on computing resources compared to traditional approaches based on ab initio techniques, while yielding adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars.
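
    In its simplest form, the empirical pseudopotential method reduces to diagonalizing a plane-wave Hamiltonian whose off-diagonal entries are fitted potential Fourier coefficients. A one-dimensional toy model (all numbers invented; GaSe structures need a full 3D treatment) shows the machinery:

```python
import numpy as np

def bands_1d(k, a=1.0, v_coeffs=None, n_g=8):
    """Eigenvalues of H_{G,G'} = (k+G)^2/2 * delta_{GG'} + V(|G-G'|)
    in a plane-wave basis (atomic-style units, free-electron mass)."""
    if v_coeffs is None:
        v_coeffs = {1: -0.3, 2: 0.05}   # invented pseudopotential form factors
    g = 2.0 * np.pi / a * np.arange(-n_g, n_g + 1)   # reciprocal lattice vectors
    H = np.zeros((g.size, g.size))
    for i in range(g.size):
        H[i, i] = 0.5 * (k + g[i]) ** 2              # kinetic term
        for j in range(g.size):
            m = abs(i - j)
            if m in v_coeffs:
                H[i, j] += v_coeffs[m]               # empirical potential term
    return np.linalg.eigvalsh(H)                     # sorted ascending

# A non-zero V_1 opens a gap of roughly 2|V_1| at the zone boundary k = pi/a.
e = bands_1d(np.pi)
gap = e[1] - e[0]
```

    In a real empirical pseudopotential calculation the form factors are fitted to reproduce measured gaps and optical transitions, which is what keeps the basis small compared with ab initio approaches.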

  2. Theoretically predicted Fox-7 based new high energy density molecules

    NASA Astrophysics Data System (ADS)

    Ghanta, Susanta

    2016-08-01

    Computationally designed CHNO-based high energy density molecules (HEDM) are investigated, built on the FOX-7 (1,1-dinitro-2,2-diaminoethylene) skeleton. We report structures, stability and detonation properties of these new molecules. A systematic analysis is presented of the crystal density, the activation energy for nitro-to-nitrite isomerisation and the C-NO2 bond dissociation energy of these molecules. Atoms in Molecules (AIM) calculations have been performed to interpret the intramolecular weak H-bonding interactions and the stability of the C-NO2 bonds. The structure optimization, frequency and bond dissociation energy calculations were performed at the B3LYP level of theory using the G03 quantum chemistry package. Some of the designed molecules are found to be more promising HEDMs than FOX-7 and are proposed as candidates for synthesis.

  3. HIRS-AMTS satellite sounding system test - Theoretical and empirical vertical resolving power. [High resolution Infrared Radiation Sounder - Advanced Moisture and Temperature Sounder

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    The present investigation is concerned with the vertical resolving power of satellite-borne temperature sounding instruments. Information is presented on the capabilities of the High Resolution Infrared Radiation Sounder (HIRS) and a proposed sounding instrument called the Advanced Moisture and Temperature Sounder (AMTS). Two quite different methods for assessing the vertical resolving power of satellite sounders are discussed. The first is the theoretical method of Conrath (1972), which was patterned after the work of Backus and Gilbert (1968). The Backus-Gilbert-Conrath (BGC) approach includes a formalism for deriving a retrieval algorithm that optimizes the vertical resolving power. However, a retrieval algorithm constructed in the BGC optimal fashion is not necessarily optimal as far as actual temperature retrievals are concerned. Thus, an independent criterion for vertical resolving power is discussed. The criterion is based on actual retrievals of signal structure in the temperature field.

  4. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  5. Fault Diagnosis of Rotating Machinery Based on an Adaptive Ensemble Empirical Mode Decomposition

    PubMed Central

    Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe

    2013-01-01

    Vibration-based signal processing is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery. However, it suffers from mode mixing when decomposing signals. To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed. EEMD is able to reduce the mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most of the studies on EEMD, the parameters were selected artificially and subjectively. To solve the problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. The simulation, the experimental and the application results demonstrate that the adaptive EEMD provides improved results compared with the original EEMD in diagnosing rotating machinery. PMID:24351666

  6. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.
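
    The forward-model-then-invert strategy can be illustrated with a toy readout model in which each pixel transfer traps a fixed fraction of charge and releases it exponentially into the trailing pixels. The trap fraction and release scale below are invented, and the actual ACS model is far more detailed:

```python
import numpy as np

def add_trails(column, trap_frac=0.1, release_scale=3.0, n_trail=70):
    """Toy forward readout model: each pixel loses trap_frac of its charge
    into an exponentially decaying trail over the next n_trail pixels."""
    t = np.arange(1, n_trail + 1)
    kernel = np.exp(-t / release_scale)
    kernel = trap_frac * kernel / kernel.sum()   # trail carries trap_frac of the flux
    out = (1.0 - trap_frac) * column.astype(float)
    for shift, k in enumerate(kernel, start=1):
        out[shift:] += k * column[:-shift]
    return out

def remove_trails(observed, n_iter=30, **model_kw):
    """Invert the forward model by fixed-point iteration,
    x <- x + (observed - F(x)), starting from x = observed."""
    x = observed.astype(float).copy()
    for _ in range(n_iter):
        x += observed - add_trails(x, **model_kw)
    return x

true = np.zeros(200)
true[50], true[120] = 1000.0, 250.0      # two point sources ("warm pixels")
observed = add_trails(true)
restored = remove_trails(observed)
```

    Because the toy trail redistributes rather than destroys charge, total flux is conserved, which mirrors the paper's finding that the observed trails contain essentially all of the flux lost to imperfect CTE.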

  7. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations based on (a) accommodating awareness modeling and updating and (b) collaborative search and tracking of space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risks. Fourth, if all explicitly specified requirements are satisfied and sensing resources remain available, we assign the additional resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.

  8. A theoretically based determination of Bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.
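
    The flux partitioning that underlies the Bowen-ratio method can be sketched in a few lines. The psychrometric constant and the sample numbers below are illustrative assumptions, not values from the field experiment:

```python
GAMMA = 0.066  # psychrometric constant, kPa/degC (approximate, near sea level)

def bowen_fluxes(dT, de, Rn, G, gamma=GAMMA):
    """Partition available energy (Rn - G, W/m^2) into sensible (H) and
    latent (LE) heat fluxes from the temperature (dT, degC) and vapor
    pressure (de, kPa) differences between the two sensor heights."""
    beta = gamma * dT / de          # Bowen ratio, H/LE
    LE = (Rn - G) / (1.0 + beta)    # latent heat flux
    H = Rn - G - LE                 # sensible heat flux (energy balance)
    return beta, H, LE

beta, H, LE = bowen_fluxes(dT=1.2, de=0.4, Rn=500.0, G=50.0)
print(round(beta, 3), round(H, 1), round(LE, 1))  # → 0.198 74.4 375.6
```

    Because both gradients are measured between the same two heights, errors in fetch affect dT and de together, which is why the method's fetch requirements differ from those of eddy correlation.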

  9. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    SciTech Connect

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Second, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede hydrogen extraction.
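
    The empirical-model step, fitting scattered rate data and reading off kinetics parameters, can be sketched with a simple Arrhenius fit. The rate law, the parameter values, and the synthetic "scattered" data below are assumptions for illustration, not the study's compiled dataset:

```python
import numpy as np

# Synthetic (T, k) rate data standing in for scattered literature values,
# generated from k = A * exp(-Ea/(R*T)) with multiplicative noise:
rng = np.random.default_rng(0)
T = np.array([450.0, 500.0, 550.0, 600.0, 650.0])               # K
k = 1e7 * np.exp(-15000.0 / T) * np.exp(rng.normal(0.0, 0.1, 5))

# Linearize as ln k = ln A - (Ea/R) * (1/T) and fit a straight line:
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_over_R = -slope
resid = np.log(k) - (slope / T + intercept)
print(Ea_over_R, resid.std())  # recovered Ea/R (~15000 K) and ln-k scatter
```

    The residual scatter is the raw material for the uncertainty quantification step: it bounds how well any single Arrhenius law can represent the compiled data.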

  10. Consistent climate-driven spatial patterns of terrestrial ecosystem carbon fluxes in the northern hemisphere: a theoretical framework and synthesis of empirical evidence

    NASA Astrophysics Data System (ADS)

    Yu, G.; Niu, S.; Chen, Z.; Zhu, X.

    2013-12-01

    A predictive understanding of terrestrial ecosystem carbon fluxes has developed slowly, largely owing to a lack of broad generalizations, a theoretical framework, and clearly defined hypotheses. We synthesized eddy flux data from different regions of the northern hemisphere and previously published papers, developed a framework for the climate controls on the geoecological patterns of terrestrial ecosystem C fluxes, and proposed the underlying mechanisms. Based on the case studies and synthesis, we found that the spatial patterns of ecosystem C fluxes in China, in Asia, and across three continents of the northern hemisphere follow a general pattern: they are predominantly controlled by temperature and precipitation, supporting and further developing the traditional theory of 'climate controls on the spatial patterns of ecosystem productivity' embodied in the Miami and related models. Five hypotheses were proposed to explain the ecological mechanisms and processes that give rise to the climate-driven spatial patterns of C fluxes. (1) The two key processes determining gross primary productivity (GPP), growing season length and carbon uptake capacity, are jointly controlled by temperature and precipitation; (2) ecosystem respiration (ER) is also predominantly determined by temperature and precipitation, as well as by substrate supply; (3) the components of ecosystem C fluxes are closely coupled with each other in their response to climate change; (4) vegetation types and soil nutrients in a particular area are fundamentally determined by environmental factors, and may affect C fluxes within a certain range, but cannot change the climate-driven pattern of C fluxes at large scales; (5) land use changes only the magnitude of C fluxes, not their spatial patterns or climate dependence. All of these hypotheses were well supported by the evidence of the data synthesis, which could provide the foundation for a theoretical framework for better understanding and predicting geoecological

  11. A theoretical model of drumlin formation based on observations at Múlajökull, Iceland

    NASA Astrophysics Data System (ADS)

    Iverson, Neal R.; McCracken, Reba; Zoet, Lucas; Benediktsson, Ívar; Schomacker, Anders; Johnson, Mark; Finlayson, Andrew; Phillips, Emrys; Everest, Jeremy

    2016-04-01

    Theoretical models of drumlin formation have generally been developed in isolation from observations in modern drumlin-forming environments, a major limitation on the empiricism necessary to confidently formulate models and test them. Observations at a rare modern drumlin field exposed by the recession of the Icelandic surge-type glacier Múlajökull allow an empirically grounded and physically based model of drumlin formation to be formulated and tested. Till fabrics based on anisotropy of magnetic susceptibility and clast orientations, along with stratigraphic observations and results of ground-penetrating radar, indicate that drumlin relief results from basal till deposition on drumlins and erosion between them. These data also indicate that surges cause till deposition both on and between drumlins, and they provide no evidence of the longitudinally compressive or extensional strain in till that would be expected if flux divergence in a deforming bed were significant. Over 2000 measurements of till density, together with consolidation tests on the till, indicate that effective stresses on the bed were higher between drumlins than within them. This observation agrees with evidence that subglacial water drainage during normal flow of the glacier is through channels in low areas between drumlins and that crevasse swarms, which reduce total normal stresses on the bed, are coincident with drumlins. In the new model, slip of ice over a bed with a sinusoidal perturbation, crevasse swarms, and flow of subglacial water toward R-channels that bound the bed undulation during periods of normal flow result in effective stresses that increase toward the channels and decrease from the stoss to the lee side of the undulation. This effective-stress pattern causes till entrainment and erosion by regelation infiltration (Rempel, 2008, JGR, 113) that peaks at the heads of incipient drumlins and near R-channels, while bed shear is inhibited by effective stresses too high to allow

  12. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high-resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target moves along its trajectory while its solar panel substrate changes orientation toward the sun to obtain energy. To address this imaging problem, a signal separation and imaging approach based on empirical mode decomposition (EMD) theory is proposed. The approach can separate the signals of the two parts of the satellite target, the main body and the solar panel substrate, and image the target. Simulation experiments demonstrate the validity of the proposed method.
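
    The core sifting loop of EMD, which signal-separation approaches like this build on, can be sketched as follows. This is a minimal fixed-count sifting with spline envelopes, not the authors' implementation; edge handling is deliberately naive (real implementations mirror the signal ends):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower
    envelopes, each interpolated through local extrema with a cubic
    spline (no edge padding)."""
    n = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema: x is a residual trend
    upper = CubicSpline(maxima, x[maxima])(n)
    lower = CubicSpline(minima, x[minima])(n)
    return x - 0.5 * (upper + lower)

def extract_imf(x, n_sifts=10):
    """Extract the fastest intrinsic mode function by repeated sifting
    (fixed sift count; real implementations use a stopping criterion)."""
    h = x.copy()
    for _ in range(n_sifts):
        s = sift_once(h)
        if s is None:
            break
        h = s
    return h

t = np.linspace(0, 1, 1000)
fast = np.sin(2 * np.pi * 25 * t)          # "main body" style component
slow = 0.5 * np.sin(2 * np.pi * 3 * t)     # slower overlapping component
imf1 = extract_imf(fast + slow)
# Away from the (unpadded) edges, the first IMF should track the fast
# component far more closely than the raw mixture does:
err = np.abs(imf1 - fast)[100:-100].mean()
print(err < np.abs(slow)[100:-100].mean())
```

    Subtracting the extracted IMF from the mixture leaves the slow component, which is the separation idea the imaging method relies on.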

  13. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi R.

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
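
    The invert-the-readout idea can be illustrated with a toy one-dimensional model. The exponential trail kernel and its parameters below are assumptions for the sketch, not the fitted ACS trail profile:

```python
import numpy as np

def add_trail(column, frac=0.1, scale=3.0, length=20):
    """Forward readout model (illustrative, not the ACS model): each pixel
    leaks a fraction `frac` of its charge into an exponential trail of
    up to `length` pixels behind it."""
    kernel = np.exp(-np.arange(1, length + 1) / scale)
    kernel = frac * kernel / kernel.sum()
    out = column * (1.0 - frac)
    for i, c in enumerate(column):
        j = slice(i + 1, min(i + 1 + length, len(column)))
        out[j] += c * kernel[: j.stop - j.start]
    return out

def restore(observed, n_iter=30, **kw):
    """Invert the forward model by fixed-point iteration: adjust the
    estimate until pushing it through the readout model reproduces
    the observed column."""
    est = observed.copy()
    for _ in range(n_iter):
        est = est + (observed - add_trail(est, **kw))
    return est

truth = np.zeros(50)
truth[10] = 100.0                # a single warm pixel
observed = add_trail(truth)      # warm pixel plus its CTE trail
print(np.abs(restore(observed) - truth).max() < 1e-6)  # → True
```

    Because the trail carries essentially all the lost flux, inverting the forward model recovers the original pixel values, which is the same logic the paper applies with its measured trail profiles.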

  14. Advances on Empirical Mode Decomposition-based Time-Frequency Analysis Methods in Hydrocarbon Detection

    NASA Astrophysics Data System (ADS)

    Chen, H. X.; Xue, Y. J.; Cao, J.

    2015-12-01

    Empirical mode decomposition (EMD), a data-driven adaptive decomposition method that is not limited by time-frequency uncertainty spreading, has proved to be well suited to seismic signals, which are nonlinear and non-stationary. Compared with Fourier-based and wavelet-based time-frequency methods, EMD-based methods have higher temporal and spatial resolution and yield hydrocarbon interpretations with more statistical significance. The empirical mode decomposition algorithm has evolved from EMD to ensemble EMD (EEMD) to complete ensemble EMD (CEEMD). Although EMD-based time-frequency methods offer many promising features for analyzing and processing geophysical data, they also have limitations and defects. This presentation gives a comparative study of hydrocarbon detection using seven EMD-based time-frequency analysis methods: (1) EMD combined with the Hilbert transform (HT) as a time-frequency analysis method; (2) the normalized Hilbert transform (NHT) and the HU method, each combined with HT, as improved time-frequency analysis methods; (3) EMD combined with Teager-Kaiser energy (EMD/TK); (4) EMD combined with the wavelet transform (EMDWave) as a seismic attenuation estimation method; and (5) EEMD- and CEEMD-based time-frequency analysis methods used as highlight-volume technology. The differences between these methods in hydrocarbon detection are discussed, along with the question of obtaining a meaningful instantaneous frequency from HT and the mode-mixing issues in EMD. The work was supported by NSFC under grant Nos. 41430323, 41404102 and 41274128.
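
    The first of these methods, EMD followed by the Hilbert transform, reduces each mode to an instantaneous frequency track. A minimal sketch for a single clean mode, using scipy (the 80 Hz test tone is an illustrative stand-in for an IMF):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                         # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.cos(2 * np.pi * 80.0 * t)  # stand-in for one IMF

# Analytic signal -> unwrapped phase -> instantaneous frequency:
analytic = hilbert(sig)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz

# Away from the edges the estimate should sit near 80 Hz:
print(float(np.median(inst_freq[50:-50])))
```

    The "meaningful instantaneous frequency" question raised above arises precisely when a mode is not this clean: mode mixing puts multiple oscillation scales into one IMF and the phase derivative loses its simple interpretation.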

  15. Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2016-07-01

    The seislet transform has been demonstrated to have better compression performance for seismic data than other well-known sparsity-promoting transforms; it can therefore be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on precise local slope estimation, the seislet transform usually suffers from a low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. To remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive: one only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of my proposed approach over the traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can also be extended to all those methods that require slope information.

  16. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints on improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determining the knock characteristics in SI engines. By adding uniformly distributed, finite white Gaussian noise, the EEMD can preserve signal continuity across scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine, via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head, is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.

  17. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    NASA Astrophysics Data System (ADS)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of empirical data and agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, such as blogs and Diggs. We consider a dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text to determine the positive or negative valence (attractiveness or aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. An agent-based model is then introduced to simulate the dynamics and to capture the emergence of emotional behaviors and communities. The agents are linked to posts on a bipartite network whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. Through an agent's action on a post, its current emotions are transferred to the post. The model rules and key parameters are inferred from the empirical data to ensure realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agents' actions. The simulations are performed for the case of a constant flux of agents, and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure that are comparable with those in the empirical system of popular posts. In view of purely emotion-driven agent actions, such comparisons provide a quantitative measure of the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate post popularity to the emotion dynamics and the prevalence of negative

  18. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  19. Polarizable Empirical Force Field for Hexopyranose Monosaccharides Based on the Classical Drude Oscillator

    PubMed Central

    2015-01-01

    A polarizable empirical force field based on the classical Drude oscillator is presented for the hexopyranose form of selected monosaccharides. Parameter optimization targeted quantum mechanical (QM) dipole moments, solute–water interaction energies, vibrational frequencies, and conformational energies. Validation of the model was based on experimental data on crystals, densities of aqueous sugar solutions, diffusion constants of glucose, and rotational preferences of the exocyclic hydroxymethyl of d-glucose and d-galactose in aqueous solution, as well as additional QM data. Notably, the final model involves a single electrostatic model for all sixteen diastereomers of the monosaccharides, indicating the transferability of the polarizable model. The presented parameters are anticipated to lay the foundation for a comprehensive polarizable force field for saccharides that will be compatible with the polarizable Drude parameters for lipids and proteins, allowing for simulations of glycolipids and glycoproteins. PMID:24564643

  20. STEAM: a software tool based on empirical analysis for micro electro mechanical systems

    NASA Astrophysics Data System (ADS)

    Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat

    2006-03-01

    In this research, a generalized software framework that enables accurate computer-aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin-film materials. This survey indicates that present-day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers, as it develops empirical models that relate the loading parameters, material properties, and geometry of the microstructures to their performance characteristics. This process involves learning the relationships between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimation of that property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.

  1. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  2. An empirical Bayesian approach for model-based inference of cellular signaling networks

    PubMed Central

    2009-01-01

    Background A common challenge in systems biology is to infer mechanistic descriptions of biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements. PMID:19900289
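
    The Gelman-Rubin potential scale reduction factor used here as the convergence criterion can be computed directly from a set of chains. A minimal sketch on synthetic chains (the EGF signaling model itself is not reproduced; the Gaussian samples stand in for MCMC draws):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for m chains of length n
    (rows = chains). Values near 1 indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 5000))     # chains on the same target
unmixed = mixed + np.arange(4)[:, None] * 2.0    # chains stuck at offsets
print(gelman_rubin(mixed), gelman_rubin(unmixed))
```

    Well-mixed chains give a factor very close to 1, while chains exploring different regions of parameter space inflate the between-chain variance and push the factor well above 1.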

  3. An Empirical Evaluation of Puzzle-Based Learning as an Interest Approach for Teaching Introductory Computer Science

    ERIC Educational Resources Information Center

    Merrick, K. E.

    2010-01-01

    This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course…

  4. Empirically based Suggested Insights into the Concept of False-Self Defense: Contributions From a Study on Normalization of Children With Disabilities.

    PubMed

    Eichengreen, Adva; Hoofien, Dan; Bachar, Eytan

    2016-02-01

    The concept of the false self has been used widely in psychoanalytic theory and practice but seldom in empirical research. In this empirically based study, elevated features of false-self defense were hypothetically associated with risk factors attendant on processes of rehabilitation and integration of children with disabilities, processes that encourage adaptation of the child to the able-bodied environment. Self-report questionnaires and in-depth interviews were conducted with 88 deaf and hard-of-hearing students and a comparison group of 88 hearing counterparts. Results demonstrate that despite the important contribution of rehabilitation and integration to the well-being of these children, these efforts may put the child at risk of increased use of the false-self defense. The empirical findings suggest two general theoretical conclusions: (1) The Winnicottian concept of the environment, usually confined to the parent-child relationship, can be understood more broadly as including cultural, social, and rehabilitational variables that both influence the parent-child relationship and operate independently of it. (2) The monolithic conceptualization of the false self may be more accurately unpacked to reveal two distinct subtypes: the compliant and the split false self.

  5. A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhao, Zhidong; Yang, Lei; Chen, Diandian; Luo, Yi

    2013-01-01

    In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs) and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier tool. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness. PMID:23698274
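
    The spectral-feature matching stage (Welch spectrum plus nearest-neighbor classification) can be sketched as follows. The synthetic "heartbeats" and the sampling rate are illustrative stand-ins; the real system works on EEMD-derived IMFs of measured ECG and uses PCA before K-NN:

```python
import numpy as np
from scipy.signal import welch

def welch_features(beat, fs=250.0):
    """Normalized Welch power spectrum of a heartbeat segment,
    used here as the identification feature vector."""
    f, pxx = welch(beat, fs=fs, nperseg=64)
    return pxx / pxx.sum()  # scale-invariant

def identify(query, gallery):
    """1-nearest-neighbor match of a query beat against enrolled
    subjects (dict: name -> feature vector)."""
    qf = welch_features(query)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - qf))

rng = np.random.default_rng(2)
t = np.arange(0, 1.0, 1.0 / 250.0)
subj_a = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
subj_b = np.sin(2 * np.pi * 5 * t) + 0.8 * np.sin(2 * np.pi * 30 * t)
gallery = {"A": welch_features(subj_a), "B": welch_features(subj_b)}

query = subj_a + 0.1 * rng.normal(size=t.size)  # noisy beat from subject A
print(identify(query, gallery))  # → A
```

    Normalizing the spectrum removes amplitude differences between recordings, which mirrors the heartbeat-normalization step in the proposed preprocessing.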

  6. Orientation-Independent Empirical Mode Decomposition for Images Based on Unconstrained Optimization.

    PubMed

    Colominas, Marcelo A; Humeau-Heurtier, Anne; Schlotthauer, Gastón

    2016-05-01

    This paper introduces a 2D extension of the empirical mode decomposition (EMD) through a novel approach based on unconstrained optimization. EMD is a fully data-driven method that locally separates, in a completely unsupervised manner, signals into fast and slow oscillations. The present proposal implements the method in a very simple and fast way. It is compared with state-of-the-art methods, showing the advantages of being computationally efficient and orientation-independent, and leading to better performance for the decomposition of amplitude-modulated-frequency-modulated (AM-FM) images. The resulting genuine 2D method is successfully tested on artificial AM-FM images and its capabilities are illustrated on a biomedical example. The proposed framework leaves room for an nD extension (n > 2). PMID:26992022

  7. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis; it describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right-censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  9. The application of Kriging and empirical Kriging based on the variables selected by SCAD.

    PubMed

    Peng, Xiao-Ling; Yin, Hong; Li, Runze; Fang, Kai-Tai

    2006-09-25

    The commonly used approach for building a structure-activity/property relationship consists of three steps. First, one determines the descriptors for the molecular structure, then builds a metamodel using appropriate mathematical methods, and finally evaluates the metamodel. Some existing methods can only select important variables from the candidates, while most metamodels only explore linear relationships between inputs and outputs. Other techniques can build more complicated relationships, but they may not be able to select important variables from a large number of candidates. In this paper, we propose to screen important variables with the smoothly clipped absolute deviation (SCAD) variable selection procedure, and then apply the Kriging model and empirical Kriging model for quantitative structure-activity/property relationship (QSAR/QSPR) research based on the selected important variables. We demonstrate that the proposed procedure retains the virtues of both variable selection and the Kriging model. PMID:17723710
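
As a rough illustration of the Kriging metamodel step (not the paper's SCAD-plus-empirical-Kriging pipeline), here is a minimal dependency-free 1D simple-kriging sketch with an assumed Gaussian covariance and toy data:

```python
import math

def gauss_cov(x1, x2, theta=1.0):
    """Assumed Gaussian covariance kernel (unit process variance)."""
    return math.exp(-theta * (x1 - x2) ** 2)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, ys, x_new):
    """Simple-kriging predictor with zero mean: weights solve K w = k."""
    K = [[gauss_cov(a, b) for b in xs] for a in xs]
    k = [gauss_cov(a, x_new) for a in xs]
    w = solve(K, k)
    return sum(wi * yi for wi, yi in zip(w, ys))

xs = [0.0, 1.0, 2.5]   # toy descriptor values
ys = [0.2, 0.9, -0.3]  # toy responses
# A nugget-free kriging predictor interpolates the training points:
print(round(krige(xs, ys, 1.0), 6))  # 0.9
```

The interpolating property shown in the last line is one reason Kriging is attractive as a QSAR/QSPR metamodel once the important descriptors have been selected.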

  10. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical supports and inspirations for BE instructors to develop BE curricula for business contexts. It discusses how the theory of needs analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  11. Effects of a Theoretically Based Large-Scale Reading Intervention in a Multicultural Urban School District

    ERIC Educational Resources Information Center

    Sadoski, Mark; Willson, Victor L.

    2006-01-01

    In 1997, Lindamood-Bell Learning Processes partnered with Pueblo School District 60 (PSD60), a heavily minority urban district with many Title I schools, to implement a theoretically based initiative designed to improve Colorado Student Assessment Program reading scores. In this study, the authors examined achievement in Grades 3-5 during the…

  12. BODIPY based colorimetric fluorescent probe for selective thiophenol detection: theoretical and experimental studies.

    PubMed

    Kand, Dnyaneshwar; Mishra, Pratyush Kumar; Saha, Tanmoy; Lahiri, Mayurika; Talukdar, Pinaki

    2012-09-01

    A BODIPY-based selective thiophenol probe capable of discriminating aliphatic thiols is reported. The fluorescence off-on effect upon reaction with thiol is elucidated with theoretical calculations. The sensing of thiophenol is associated with a color change from red to yellow and 63-fold enhancement in green fluorescence. Application of the probe for selective thiophenol detection is demonstrated by live cell imaging.

  13. Web-based application for Data INterpolation Empirical Orthogonal Functions (DINEOF) analysis

    NASA Astrophysics Data System (ADS)

    Tomazic, Igor; Alvera-Azcarate, Aida; Barth, Alexander; Beckers, Jean-Marie

    2014-05-01

    DINEOF (Data INterpolating Empirical Orthogonal Functions) is a powerful tool based on EOF decomposition developed at the University of Liege/GHER for the reconstruction of missing data in satellite datasets, as well as for the reduction of noise and detection of outliers. DINEOF is openly available as a series of Fortran routines to be compiled by the user, and as binaries (that can be run directly without any compilation) for both Windows and Linux platforms. In order to facilitate the use of DINEOF and increase the number of interested users, we developed a web-based interface for DINEOF with the necessary parameters available to run a high-quality DINEOF analysis. This includes choosing a variable within the selected dataset, defining a domain and time range, setting filtering criteria based on available variables in the dataset (e.g. quality flag, satellite zenith angle …) and defining the necessary DINEOF parameters. Results, including reconstructed data and calculated EOF modes, will be disseminated in NetCDF format using OpenDAP and a WMS server, allowing easy visualisation and analysis. First, we will include several satellite datasets of sea surface temperature and chlorophyll concentration obtained from the MyOcean data centre and already remapped to a regular grid (L3C). Later, based on users' requests, we plan to extend the number of datasets available for reconstruction.

  14. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, in respect of participants, process and products, the present empirical study was conducted to find the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interview and the analysis of both the CET-4 teaching and testing…

  15. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    ERIC Educational Resources Information Center

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  16. Empirical source strength correlations for rans-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate

  17. Synthesis, characterization, theoretical prediction of activities and evaluation of biological activities of some sulfacetamide based hydroxytriazenes.

    PubMed

    Agarwal, Shilpa; Baroliya, Prabhat K; Bhargava, Amit; Tripathi, I P; Goswami, A K

    2016-06-15

    Six new N-[(4-aminophenyl)sulfonyl]acetamide-based hydroxytriazenes have been synthesized and characterized using elemental analysis, IR, 1H NMR, 13C NMR and mass spectral analysis. Further, theoretical predictions of their probable activities were obtained using PASS (Prediction of Activity Spectra for Substances). Although a number of activities were predicted, the anti-inflammatory, antiradical and anti-diabetic activities in particular have been experimentally validated, which shows that the theoretical predictions agree with the experimental results. The object of the Letter is to establish Computer Aided Drug Design (CADD) using our compounds. PMID:27136718

  18. Written institutional ethics policies on euthanasia: an empirical-based organizational-ethical framework.

    PubMed

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Schotsmans, Paul; Gastmans, Chris

    2014-05-01

    As euthanasia has become a widely debated issue in many Western countries, hospitals and nursing homes especially are increasingly being confronted with this ethically sensitive societal issue. The focus of this paper is how healthcare institutions can deal with euthanasia requests on an organizational level by means of a written institutional ethics policy. The general aim is to make a critical analysis of whether these policies can be considered organizational-ethical instruments that support healthcare institutions in taking institutional responsibility for dealing with euthanasia requests. By means of an interpretative analysis, we conducted a process of reinterpretation of the results of former Belgian empirical studies on written institutional ethics policies on euthanasia in dialogue with the existing international literature. The study findings revealed that legal regulations and ethical and care-oriented aspects strongly affected the development, the content, and the impact of written institutional ethics policies on euthanasia. Hence, these three cornerstones (law, care, and ethics) constituted the basis for the empirically based organizational-ethical framework for written institutional ethics policies on euthanasia that is presented in this paper. However, having a euthanasia policy does not automatically lead to more legal transparency, or to a more professional and ethical care practice. The study findings suggest that the development and implementation of an ethics policy on euthanasia as an organizational-ethical instrument should be considered a dynamic process. Administrators and ethics committees must take responsibility to actively create an ethical climate supporting care providers who have to deal with ethical dilemmas in their practice.

  19. The Role of Social Network Technologies in Online Health Promotion: A Narrative Review of Theoretical and Empirical Factors Influencing Intervention Effectiveness

    PubMed Central

    Kennedy, Catriona M; Buchan, Iain; Powell, John; Ainsworth, John

    2015-01-01

    Background Social network technologies have become part of health education and wider health promotion—either by design or happenstance. Social support, peer pressure, and information sharing in online communities may affect health behaviors. If there are positive and sustained effects, then social network technologies could increase the effectiveness and efficiency of many public health campaigns. Social media alone, however, may be insufficient to promote health. Furthermore, there may be unintended and potentially harmful consequences of inaccurate or misleading health information. Given these uncertainties, there is a need to understand and synthesize the evidence base for the use of online social networking as part of health promoting interventions to inform future research and practice. Objective Our aim was to review the research on the integration of expert-led health promotion interventions with online social networking in order to determine the extent to which the complementary benefits of each are understood and used. We asked, in particular, (1) How is effectiveness being measured and what are the specific problems in effecting health behavior change?, and (2) To what extent is the designated role of social networking grounded in theory? Methods The narrative synthesis approach to literature review was used to analyze the existing evidence. We searched the indexed scientific literature using keywords associated with health promotion and social networking. The papers included were only those making substantial study of both social networking and health promotion—either reporting the results of the intervention or detailing evidence-based plans. General papers about social networking and health were not included. Results The search identified 162 potentially relevant documents after review of titles and abstracts. Of these, 42 satisfied the inclusion criteria after full-text review. Six studies described randomized controlled trials (RCTs) evaluating

  20. Computing theoretical rates of part C eligibility based on developmental delays.

    PubMed

    Rosenberg, Steven A; Ellison, Misoo C; Fast, Bruce; Robinson, Cordelia C; Lazar, Radu

    2013-02-01

    Part C early intervention is a nationwide program that serves infants and toddlers who have developmental delays. This article presents a methodology for computing a theoretical estimate of the proportion of children who are likely to be eligible for Part C services based on delays in any of the 5 developmental domains (cognitive, motor, communication, social-emotional and adaptive) that are assessed to determine eligibility. Rates of developmental delays were estimated from a multivariate normal cumulative distribution function. This approach calculates theoretical rates of occurrence for conditions that are defined in terms of standard deviations from the mean on several variables that are approximately normally distributed. Evidence is presented to suggest that the procedures described produce accurate estimates of rates of child developmental delays. The methodology used in this study provides a useful tool for computing theoretical rates of occurrence of developmental delays that make children candidates for early intervention.
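
The calculation described above, the theoretical rate of falling below a cutoff in at least one of several correlated, approximately normal domains, can be sketched as follows. The 0.5 inter-domain correlation and the -2 SD cutoff are assumed illustrative values, and Monte Carlo simulation stands in for the multivariate normal cumulative distribution function used in the article:

```python
import math
import random

def eligibility_rate(n_domains=5, cutoff=-2.0, rho=0.5,
                     n_sims=200_000, seed=1):
    """Fraction of simulated children below the cutoff in >= 1 domain.

    Equicorrelated standard-normal domain scores are generated with a
    shared-factor construction: X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i.
    """
    random.seed(seed)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    delayed = 0
    for _ in range(n_sims):
        shared = random.gauss(0.0, 1.0)  # common factor -> correlation rho
        if any(a * shared + b * random.gauss(0.0, 1.0) < cutoff
               for _ in range(n_domains)):
            delayed += 1
    return delayed / n_sims

rate = eligibility_rate()
# The union rate exceeds the single-domain rate (~2.3% below -2 SD)
# but stays well below 5 x 2.3% because the domains are correlated.
print(rate)
```

With a closed-form multivariate normal CDF (e.g. from a statistics library), the simulation loop would be replaced by a single function evaluation.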

  1. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  2. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    PubMed

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing the arterial stiffness and systemic resistance of the cardiac system suffer from problems with nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the properties of arterial stiffness and systemic resistance for a pig's cardiovascular system via ECG and blood pressure (BP). This experiment simulated a sequence of continuous changes of blood pressure, from a steady condition to high blood pressure by clamping the artery and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of the EEMD for signal analysis.

  4. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    NASA Astrophysics Data System (ADS)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

  5. Empirical Evaluation Indicators in Thai Higher Education: Theory-Based Multidimensional Learners' Assessment

    ERIC Educational Resources Information Center

    Sritanyarat, Dawisa; Russ-Eft, Darlene

    2016-01-01

    This study proposed empirical indicators which can be validated and adopted in higher education institutions to evaluate the quality of teaching and learning, and to serve as evaluation criteria for the human resource management and development of higher education institutions in Thailand. The main purpose of this study was to develop empirical indicators of a…

  6. Impact of Inadequate Empirical Therapy on the Mortality of Patients with Bloodstream Infections: a Propensity Score-Based Analysis

    PubMed Central

    Retamar, Pilar; Portillo, María M.; López-Prieto, María Dolores; Rodríguez-López, Fernando; de Cueto, Marina; García, María V.; Gómez, María J.; del Arco, Alfonso; Muñoz, Angel; Sánchez-Porto, Antonio; Torres-Tortosa, Manuel; Martín-Aspas, Andrés; Arroyo, Ascensión; García-Figueras, Carolina; Acosta, Federico; Corzo, Juan E.; León-Ruiz, Laura; Escobar-Lara, Trinidad

    2012-01-01

    The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort including all BSI episodes in adult patients was performed in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented. PMID:22005999

  7. Is Project Based Learning More Effective than Direct Instruction in School Science Classrooms? An Analysis of the Empirical Research Evidence

    NASA Astrophysics Data System (ADS)

    Dann, Clifford

    An increasingly loud call by parents, school administrators, teachers, and even business leaders for "authentic learning", emphasizing both group-work and problem solving, has led to growing enthusiasm for inquiry-based learning over the past decade. Although "inquiry" can be defined in many ways, a curriculum called "project-based learning" has recently emerged as the inquiry practice-of-choice with roots in the educational constructivism that emerged in the mid-twentieth century. Often, project-based learning is framed as an alternative instructional strategy to direct instruction for maximizing student content knowledge. This study investigates the empirical evidence for such a comparison while also evaluating the overall quality of the available studies in the light of accepted standards for educational research. Specifically, this thesis investigates what the body of quantitative research says about the efficacy of project-based learning vs. direct instruction when considering student acquisition of content knowledge in science classrooms. Further, existing limitations of the research pertaining to project based learning and secondary school education are explored. The thesis concludes with a discussion of where and how we should focus our empirical efforts in the future. The research revealed that the available empirical research contains flaws in both design and instrumentation. In particular, randomization is poor amongst all the studies considered. The empirical evidence indicates that project-based learning curricula improved student content knowledge but that, while the results were statistically significant, increases in raw test scores were marginal.

  8. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, air pollution mitigation, etc. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications, and thus carry inherent assumptions that may only rarely be applicable to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as a critical, yet poorly studied component of urban green space performance prediction and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small-scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the 500 m2 green roof of the building, with a roof membrane, drainage layer, 10 cm media depth, and planted with a variety of Sedum species. Soil moisture sensors and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass balance approach during periods absent of precipitation and drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have
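
The mass-balance step described above reduces to a one-line conversion: with no precipitation or drainage during the interval, the lysimeter's mass loss is evapotranspired water, and 1 kg of water over 1 m² corresponds to a 1 mm depth. The numbers below are illustrative, not measurements from the study:

```python
def aet_mm(mass_start_kg, mass_end_kg, area_m2):
    """Actual evapotranspiration depth (mm) from lysimeter mass loss.

    1 kg water / m^2 = 1 mm of water depth, so no unit constant is needed.
    """
    return (mass_start_kg - mass_end_kg) / area_m2

area = 0.6 * 1.2  # lysimeter footprint from the abstract, m^2
# Hypothetical reading: 1.44 kg lost over 0.72 m^2 -> approximately 2.0 mm
print(aet_mm(101.44, 100.0, area))
```

During periods with precipitation P or drainage D, the same balance generalizes to AET = P - D - ΔS, which is why the study restricts the direct calculation to dry, drainage-free intervals.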

  9. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
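
The Ebers-Moll transistor picture underlying this model can be sketched in its generic textbook npn form (this is not the paper's extended TJC formulation, and the saturation currents and alphas below are illustrative values only):

```python
import math

VT = 0.025852  # thermal voltage kT/q at ~300 K, in volts

def ebers_moll(v_be, v_bc, i_es=1e-14, i_cs=2e-14,
               alpha_f=0.99, alpha_r=0.495):
    """Return (I_E, I_C) for an npn transistor, in amperes.

    Classic coupled-diode form; note the reciprocity relation
    alpha_f * I_ES == alpha_r * I_CS holds for these parameters.
    """
    d_e = math.exp(v_be / VT) - 1.0  # emitter-junction diode term
    d_c = math.exp(v_bc / VT) - 1.0  # collector-junction diode term
    i_e = i_es * d_e - alpha_r * i_cs * d_c
    i_c = alpha_f * i_es * d_e - i_cs * d_c
    return i_e, i_c

# Forward-active bias: V_BE = 0.6 V, V_BC = -5 V (reverse biased)
i_e, i_c = ebers_moll(0.6, -5.0)
print(i_c / i_e)  # ~ alpha_f = 0.99
```

Treating the illuminated TJC as an npn transistor lets each terminal current be written in this Ebers-Moll form, with photogenerated terms added, which is what enables the closed-form expressions for spectral response, I(sc), V(oc), FF and eta mentioned above.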

  10. Constructing a Theoretically-Based Set of Measures for Liver Cancer Control Research Studies

    PubMed Central

    Maxwell, Annette E.; Bastani, Roshan; Chen, Moon S.; Nguyen, Tung T.; Stewart, Susan L.; Taylor, Vicky M.

    2009-01-01

    Objective Measurement tools such as surveys assessing knowledge, attitudes and behaviors need to be theoretically consistent with interventions. The purpose of this paper is to describe the first steps in the process of constructing a theoretically-based set of measures that is currently used in three trials to reduce liver cancer disparities. Methods Guided by a common theoretical formulation - the Health Behavior Framework - we identified constructs relevant for liver cancer control research, compiled items from previous studies and constructed new items, and translated and pilot tested items in collaboration with members of the Vietnamese, Korean, and Hmong communities. Results We constructed three questionnaires in Vietnamese, Hmong and Korean language that are slightly different due to cultural and language nuances, but contain a core set of measures assessing identical constructs of the Health Behavior Framework. Initial research demonstrates that items are easily understood and that they are generally related to hepatitis B screening as expected. Conclusions Researchers are encouraged to follow a similar process for creating theory-based assessment tools. Measuring common theoretical constructs can advance liver cancer control and other health research by facilitating a more systematic comparison of findings across different populations and intervention strategies. PMID:19883680

  11. Feasibility of an empirically based program for parents of preschoolers with autism spectrum disorder.

    PubMed

    Dababnah, Sarah; Parish, Susan L

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, The Incredible Years, tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the intervention. Quantitative assessments of the program revealed fidelity was generally maintained, with the exception of program-specific videos. Qualitative data from individual post-intervention interviews reported parents benefited most from child emotion regulation strategies, play-based child behavior skills, parent stress management, social support, and visual resources. More work is needed to further refine the program to address parent self-care, partner relationships, and the diverse behavioral and communication challenges of children across the autism spectrum. Furthermore, parent access and retention could potentially be increased by providing in-home childcare vouchers and a range of times and locations in which to offer the program. The findings suggest The Incredible Years is a feasible intervention for parents seeking additional support for child- and family-related challenges and offers guidance to those communities currently using The Incredible Years or other related parenting programs with families of children with autism spectrum disorder.

  12. An empirically based steady state friction law and implications for fault stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.

    2016-04-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
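The baseline that the proposed MFL modifies is the classical Dieterich-Ruina steady-state rate-and-state law. A minimal numerical sketch follows; the parameter values (mu0, a, b, V0, D_c, normal stress) are illustrative, not taken from the paper:

```python
import math

# Classical steady-state rate-and-state friction (Dieterich-Ruina form):
#   mu_ss(V) = mu0 + (a - b) * ln(V / V0)
# Velocity weakening (a - b < 0) is a necessary condition for instability.
def mu_ss(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    return mu0 + (a - b) * math.log(v / v0)

# Critical stiffness of a spring-slider system: sliding is potentially
# unstable when the loading stiffness k falls below k_c = (b - a)*sigma_n/D_c.
def critical_stiffness(sigma_n, a=0.010, b=0.015, d_c=1e-5):
    return (b - a) * sigma_n / d_c

if __name__ == "__main__":
    for v in (1e-6, 1e-4, 1e-2, 1.0):
        print(f"V = {v:8.0e} m/s  ->  mu_ss = {mu_ss(v):.4f}")
    print(f"k_c at 100 MPa normal stress: {critical_stiffness(100e6):.3e} Pa/m")
```

With a - b < 0, steady-state friction decreases with slip rate; the paper's MFL adds a pronounced weakening step in the 1-20 cm/s range on top of this logarithmic baseline, which is what raises the critical stiffness in that velocity regime.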

  13. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    NASA Astrophysics Data System (ADS)

    Li, Chengwei; Zhan, Liwei

    2015-12-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
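The NIMF construction can be sketched with a toy example. This is a sketch under stated assumptions: the "IMFs" below are synthetic stand-ins (real ones would come from an EMD/CEEMDAN sifting implementation), and Pearson correlation stands in for the paper's modified Hausdorff distance as the similarity measure:

```python
import math
import random

def corr(x, y):
    """Pearson correlation, used here as a simple stand-in for the
    modified Hausdorff distance in the paper."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(0)
t = [i / 200.0 for i in range(400)]
# Synthetic stand-ins for IMFs, ordered high- to low-frequency as EMD would give.
imfs = [
    [random.gauss(0, 0.4) for _ in t],            # mode 1: high-frequency noise
    [math.sin(2 * math.pi * 5 * x) for x in t],   # mode 2: oscillatory signal
    [0.5 * x for x in t],                         # mode 3: slow trend
]
clean = [imfs[1][i] + imfs[2][i] for i in range(len(t))]
signal = [imfs[0][i] + clean[i] for i in range(len(t))]

# NIMF_k = noisy signal minus IMF_k; mode selection compares the first NIMF
# (signal with the noisiest mode removed) against the remaining NIMFs.
nimfs = [[signal[i] - m[i] for i in range(len(t))] for m in imfs]
similarities = [corr(nimfs[0], nk) for nk in nimfs[1:]]

# Reconstruct by dropping the first (noise-dominated) mode.
denoised = [signal[i] - imfs[0][i] for i in range(len(t))]
```

On this toy data the denoised series is much closer to the clean components than the raw signal is, which is the effect the hybrid filter aims for on real friction signals.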

  15. Going Global: A Model for Evaluating Empirically Supported Family-Based Interventions in New Contexts.

    PubMed

    Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W

    2014-06-01

    The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it is comprised of both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third explanation is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that might affect the effects of ESIs when transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts.

  16. An empirically based steady state friction law and implications for fault stability

    PubMed Central

    Nielsen, S.; Violay, M.; Di Toro, G.

    2016-01-01

    Empirically based rate‐and‐state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1–20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest. PMID:27667875

  17. Pseudopotential-based electron quantum transport: Theoretical formulation and application to nanometer-scale silicon nanowire transistors

    NASA Astrophysics Data System (ADS)

    Fang, Jingtian; Vandenberghe, William G.; Fu, Bo; Fischetti, Massimo V.

    2016-01-01

    We present a formalism to treat quantum electronic transport at the nanometer scale based on empirical pseudopotentials. This formalism offers explicit atomistic wavefunctions and an accurate band structure, enabling a detailed study of the characteristics of devices with a nanometer-scale channel and body. Assuming externally applied potentials that change slowly along the electron-transport direction, we invoke the envelope-wavefunction approximation to apply the open boundary conditions and to develop the transport equations. We construct the full-band open boundary conditions (self-energies of device contacts) from the complex band structure of the contacts. We solve the transport equations and present the expressions required to calculate the device characteristics, such as device current and charge density. We apply this formalism to study ballistic transport in a gate-all-around (GAA) silicon nanowire field-effect transistor with a body size of 0.39 nm, a gate length of 6.52 nm, and an effective oxide thickness of 0.43 nm. Simulation results show that this device exhibits a subthreshold slope (SS) of ˜66 mV/decade and a drain-induced barrier lowering of ˜2.5 mV/V. Our theoretical calculations predict that low-dimensionality channels in a 3D GAA architecture are able to meet the performance requirements of future devices in terms of SS and electrostatic control.

  18. How much does participatory flood management contribute to stakeholders' social capacity building? Empirical findings based on a triangulation of three evaluation approaches

    NASA Astrophysics Data System (ADS)

    Buchecker, M.; Menzel, S.; Home, R.

    2013-06-01

    Recent literature suggests that dialogic forms of risk communication are more effective at building stakeholders' hazard-related social capacities. In spite of the high theoretical expectations, there is a lack of univocal empirical evidence on the relevance of these effects, mainly due to the methodological limitations of existing evaluation approaches. In this paper we aim to elicit the contribution of participatory river revitalisation projects to stakeholders' social capacity building by triangulating the findings of three evaluation studies that were based on different approaches: a field-experimental approach, a qualitative long-term ex-post approach, and a cross-sectional household survey. The results revealed that social learning and avoiding the loss of trust were more relevant benefits of participatory flood management than acceptance building. The results suggest that stakeholder involvement should be more explicitly designed as a tool for long-term social learning.

  19. An Empirical Introduction to the Concept of Chemical Element Based on Van Hiele's Theory of Level Transitions

    ERIC Educational Resources Information Center

    Vogelezang, Michiel; Van Berkel, Berry; Verdonk, Adri

    2015-01-01

    Between 1970 and 1990, the Dutch working group "Empirical Introduction to Chemistry" developed a secondary school chemistry education curriculum based on the educational vision of the mathematicians van Hiele and van Hiele-Geldof. This approach viewed learning as a process in which students must go through discontinuous level transitions…

  20. Empirical Differences in Omission Tendency and Reading Ability in PISA: An Application of Tree-Based Item Response Models

    ERIC Educational Resources Information Center

    Okumura, Taichi

    2014-01-01

    This study examined the empirical differences between the tendency to omit items and reading ability by applying tree-based item response (IRTree) models to the Japanese data of the Programme for International Student Assessment (PISA) held in 2009. For this purpose, existing IRTree models were expanded to contain predictors and to handle…

  1. BODIPY based colorimetric fluorescent probe for selective thiophenol detection: theoretical and experimental studies.

    PubMed

    Kand, Dnyaneshwar; Mishra, Pratyush Kumar; Saha, Tanmoy; Lahiri, Mayurika; Talukdar, Pinaki

    2012-09-01

    A BODIPY-based selective thiophenol probe capable of discriminating aliphatic thiols is reported. The fluorescence off-on effect upon reaction with thiol is elucidated with theoretical calculations. The sensing of thiophenol is associated with a color change from red to yellow and 63-fold enhancement in green fluorescence. Application of the probe for selective thiophenol detection is demonstrated by live cell imaging. PMID:22751002

  2. Empirical likelihood-based confidence intervals for length-biased data

    PubMed Central

    Ning, J.; Qin, J.; Asgharian, M.; Shen, Y.

    2013-01-01

    Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of error of commonly used summary statistics. We fill this gap using empirical likelihood-based confidence intervals, adapting the method to right-censored length-biased survival data. Both large- and small-sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. PMID:23027662
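The core empirical likelihood machinery can be sketched for the simplest case: an Owen-style confidence interval for a mean from uncensored, unbiased data (the paper's contribution is the extension to right-censored length-biased data, which is not reproduced here). The sample data, grid bounds, and the hard-coded 3.841 chi-square(1) 95% critical value are illustrative:

```python
import math
import random

def el_log_ratio(x, mu):
    """-2 * log empirical likelihood ratio for the mean mu.
    Weights w_i = 1 / (n * (1 + lam * (x_i - mu))), where lam solves
    sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0 (found by bisection)."""
    d = [xi - mu for xi in x]
    if min(d) >= 0 or max(d) <= 0:
        return float("inf")  # mu outside the convex hull of the data
    eps = 1e-10
    lo = -1.0 / max(d) + eps   # keep all 1 + lam*d_i > 0
    hi = -1.0 / min(d) - eps
    g = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    for _ in range(200):       # g is strictly decreasing on (lo, hi)
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

# 95% CI: all mu whose -2 log ratio is below the chi2(1) critical value.
random.seed(1)
x = [random.expovariate(1.0) for _ in range(80)]   # illustrative sample
grid = [i * 0.01 for i in range(20, 200)]
ci = [mu for mu in grid if el_log_ratio(x, mu) <= 3.841]
print(f"95% EL confidence interval: [{min(ci):.2f}, {max(ci):.2f}]")
```

The statistic is zero at the sample mean (lam = 0 there) and grows as mu moves away, so the interval is found by a simple grid scan; no variance estimate is needed, which is part of the appeal of the approach the abstract describes.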

  3. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.

  4. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    NASA Astrophysics Data System (ADS)

    Le, Chengfeng; Lehrter, John C.; Hu, Chuanmin; Obenour, Daniel R.

    2016-03-01

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L-1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and volume were related to Moderate Resolution Imaging Spectroradiometer-derived monthly estimates of river plume area (km2) and average, inner shelf chlorophyll a concentration (Chl a, mg m-3). River plume area in June was negatively related with midsummer hypoxic area (km2) and volume (km3), while July inner shelf Chl a was positively related to hypoxic area and volume. Multiple regression models using river plume area and Chl a as independent variables accounted for most of the variability in hypoxic area (R2 = 0.92) or volume (R2 = 0.89). These models explain more variation in hypoxic area than models using Mississippi River nutrient loads as independent variables. The results here also support a hypothesis that confinement of the river plume to the inner shelf is an important mechanism controlling hypoxia area and volume in this region.
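The two-predictor regression structure described above (hypoxic area as a function of June plume area, negatively, and July inner-shelf Chl a, positively) can be illustrated with a tiny least-squares fit. A minimal sketch with synthetic numbers, not the paper's satellite data:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.
    Each row of X should include a leading 1.0 for the intercept."""
    n, p = len(X), len(X[0])
    a = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    for c in range(p):  # Gaussian elimination with partial pivoting
        piv = max(range(c, p), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = a[r][c] / a[c][c]
            for k in range(c, p):
                a[r][k] -= f * a[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):
        beta[c] = (b[c] - sum(a[c][k] * beta[k]
                              for k in range(c + 1, p))) / a[c][c]
    return beta

# Synthetic yearly observations: plume area (negative effect) and Chl a
# (positive effect) on midsummer hypoxic area; coefficients are invented.
plume = [10.0, 12.0, 9.0, 15.0, 11.0, 8.0, 14.0, 13.0]
chla = [3.0, 2.5, 4.0, 2.0, 3.5, 4.5, 2.2, 3.1]
area = [25.0 - 0.8 * p_ + 3.0 * c_ for p_, c_ in zip(plume, chla)]
X = [[1.0, p_, c_] for p_, c_ in zip(plume, chla)]
beta = ols(X, area)  # recovers [25.0, -0.8, 3.0] on this noise-free data
```

On real data the fit would not be exact; the abstract reports R² = 0.92 (area) and 0.89 (volume) for the actual models.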

  5. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  6. Empirical prediction of Indian summer monsoon rainfall with different lead periods based on global SST anomalies

    NASA Astrophysics Data System (ADS)

    Pai, D. S.; Rajeevan, M.

    2006-02-01

    The main objective of this study was to develop empirical models with different seasonal lead time periods for the long range prediction of seasonal (June to September) Indian summer monsoon rainfall (ISMR). For this purpose, 13 predictors having significant and stable relationships with ISMR were derived by the correlation analysis of global grid point seasonal Sea-Surface Temperature (SST) anomalies and the tendency in the SST anomalies. The time lags of the seasonal SST anomalies were varied from 1 season to 4 years behind the reference monsoon season. The basic SST data set used was the monthly NOAA Extended Reconstructed Global SST (ERSST) data at a 2° × 2° spatial grid for the period 1951–2003. The time lags of the 13 predictors derived from various areas of all three tropical ocean basins (Indian, Pacific and Atlantic Oceans) varied from 1 season to 3 years. Based on these inter-correlated predictors, 3 predictor subsets A, B and C were formed with prediction lead time periods of 0, 1 and 2 seasons, respectively, from the beginning of the monsoon season. The selected principal components (PCs) of these predictor sets were used as the input parameters for models A, B and C, respectively. The model development period was 1955–1984. The correct model size was derived using the all-possible-regressions procedure and Mallows' Cp statistic.
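The model-size selection step can be sketched with the Mallows' Cp criterion mentioned above. The SSE values and sample size below are hypothetical, chosen only to show how the statistic flags the right subset size:

```python
def mallows_cp(sse_p, mse_full, n, p):
    """Mallows' Cp = SSE_p / MSE_full - (n - 2p), where p counts fitted
    parameters (including the intercept). A well-specified subset has
    Cp close to p; large Cp signals omitted-variable bias."""
    return sse_p / mse_full - (n - 2 * p)

# Hypothetical screening of candidate predictor subsets over n = 30 seasons
# with full-model MSE = 2.0: pick the subset whose Cp is closest to p.
n, mse_full = 30, 2.0
candidates = {  # p -> SSE of the best subset of that size (illustrative)
    2: 140.0,
    3: 75.0,
    4: 52.0,
    5: 54.0,
}
cp = {p: mallows_cp(sse, mse_full, n, p) for p, sse in candidates.items()}
best = min(cp, key=lambda p: abs(cp[p] - p))
print(f"Cp by size: {cp}; chosen model size: {best}")
```

Here the four-parameter subset gives Cp = 4.0, exactly matching p, so the screening would stop there rather than add a fifth predictor.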

  7. Ship classification using nonlinear features of radiated sound: an approach based on empirical mode decomposition.

    PubMed

    Bao, Fei; Li, Chen; Wang, Xinlong; Wang, Qingfu; Du, Shuanping

    2010-07-01

    Classification of ship-radiated underwater sound is one of the most important and challenging subjects in underwater acoustical signal processing. An approach to ship classification is proposed in this work based on analysis of ship-radiated acoustical noise in subspaces of intrinsic mode functions attained via the ensemble empirical mode decomposition. It is shown that detection and acquisition of stable and reliable nonlinear features become practically feasible by nonlinear analysis of the time series of individual decomposed components, each of which is simple enough and well represents an oscillatory mode of ship dynamics. Surrogate and nonlinear predictability analyses are conducted to probe and measure the nonlinearity and regularity. The results of both methods, which verify each other, substantiate that ship-radiated noises contain components with deterministic nonlinear features that serve well for efficient classification of ships. The approach may open an alternative avenue toward object classification and identification, and may also offer a new view of signals as complex as ship-radiated sound.

  9. Seismic facies analysis based on self-organizing map and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian

    2015-01-01

    Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time windows has an obvious effect on the validity of classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are used for validation. The application results show that seismic facies analysis can be improved and can better support interpretation. Its strong noise tolerance makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
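The 1D-grid SOM clustering step can be sketched in miniature. This is a toy version: the "waveforms" are short synthetic vectors rather than de-noised seismic traces, and the node count, learning-rate schedule, and neighborhood widths are illustrative choices, not values from the paper:

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit (nearest node) for sample x."""
    return min(range(len(weights)),
               key=lambda j: sum((weights[j][k] - x[k]) ** 2
                                 for k in range(len(x))))

def train_som_1d(data, n_nodes=5, epochs=200, seed=42):
    """Train a 1D-grid self-organizing map with a Gaussian neighborhood
    and linearly decaying learning rate / neighborhood width."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = 0.5 * (1.0 - frac) + 0.01 * frac
        sigma = 1.5 * (1.0 - frac) + 0.3 * frac
        for x in data:
            win = bmu(weights, x)
            for j, w in enumerate(weights):
                h = math.exp(-((j - win) ** 2) / (2.0 * sigma ** 2))
                for k in range(dim):
                    w[k] += lr * h * (x[k] - w[k])
    return weights

def quantization_error(weights, data):
    return sum(min(sum((w[k] - x[k]) ** 2 for k in range(len(x)))
                   for w in weights) for x in data) / len(data)

rng = random.Random(0)
proto_a, proto_b = [1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]
data = [[v + rng.gauss(0, 0.05) for v in (proto_a if i % 2 else proto_b)]
        for i in range(40)]
som = train_som_1d(data)
```

After training, samples from the two "facies" map to different grid nodes, which is the behavior the de-noised waveform classification relies on; EMD pre-filtering matters because noisy inputs blur exactly this separation.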

  10. Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out

    PubMed Central

    Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary

    2014-01-01

    In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer’s disease study. PMID:25587200

  11. Interdigitated silver-polymer-based antibacterial surface system activated by oligodynamic iontophoresis - an empirical characterization study.

    PubMed

    Shirwaiker, Rohan A; Wysk, Richard A; Kariyawasam, Subhashinie; Voigt, Robert C; Carrion, Hector; Nembhard, Harriet Black

    2014-02-01

    There is a pressing need to control the occurrences of nosocomial infections due to their detrimental effects on patient well-being and the rising treatment costs. To prevent the contact transmission of such infections via health-critical surfaces, a prophylactic surface system that consists of an interdigitated array of oppositely charged silver electrodes with polymer separations and utilizes oligodynamic iontophoresis has been recently developed. This paper presents a systematic study that empirically characterizes the effects of the surface system parameters on its antibacterial efficacy, and validates the system's effectiveness. In the first part of the study, a fractional factorial design of experiments (DOE) was conducted to identify the statistically significant system parameters. The data were used to develop a first-order response surface model to predict the system's antibacterial efficacy based on the input parameters. In the second part of the study, the effectiveness of the surface system was validated by evaluating it against four bacterial species responsible for several nosocomial infections - Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa, and Enterococcus faecalis - alongside non-antibacterial polymer (acrylic) control surfaces. The system demonstrated statistically significant efficacy against all four bacteria. The results indicate that given a constant total effective surface area, the system designed with micro-scale features (minimum feature width: 20 μm) and activated by 15 μA direct current will provide the most effective antibacterial prophylaxis.

  12. Literature review of theory-based empirical studies examining adolescent tanning practices.

    PubMed

    Reynolds, Diane

    2007-10-01

    Lifetime exposure to ultraviolet radiation is a major risk factor for all types of skin cancer. The purpose of this manuscript is to examine theory-guided empirical studies examining adolescent tanning practices.

  13. A method of 3D reconstruction from multiple views based on graph theoretic segmentation

    NASA Astrophysics Data System (ADS)

    Li, Yi; Hong, Hanyu; Zhang, Xiuhua; Bai, Haoyu

    2013-10-01

    During three-dimensional vision inspection of products, the target objects in a complex scene are usually stationary, yet the desired three-dimensional reconstruction results cannot be obtained because the targets are difficult to extract from images with complicated and diverse backgrounds. To address this problem, a method of three-dimensional reconstruction from multiple views based on graph-theoretic segmentation is proposed in this paper. Firstly, the target objects are segmented from the acquired multi-view images by a graph-theoretic segmentation method, and the parameters of all cameras, arranged in a linear array, are obtained by Zhengyou Zhang's calibration method. Then, combined with Harris corner detection and the Difference of Gaussians detection algorithm, the feature points of the images are detected. Finally, after matching the feature points by the triangle method, the surface of the object is reconstructed by Poisson surface reconstruction. The experimental results show that the proposed algorithm segments the target objects in a complex scene accurately and stably. Moreover, the graph-theoretic segmentation solves the problem of object extraction in complex scenes, and the static object surface is reconstructed precisely. The proposed algorithm also provides a crucial technology for three-dimensional vision inspection and other practical applications.

  14. Theoretical Determination of the pK a Values of Betalamic Acid Related to the Free Radical Scavenger Capacity: Comparison Between Empirical and Quantum Chemical Methods.

    PubMed

    Tutone, Marco; Lauria, Antonino; Almerico, Anna Maria

    2016-06-01

    Health benefits of dietary phytochemicals have been suggested in recent years. Among thousands of different compounds, Betalains, which occur in vegetables of the Caryophyllales order (cactus pear fruits and red beet), have been considered because of their reducing power and potential to affect redox-modulated cellular processes. The antioxidant power of Betalains is strictly due to the dissociation rate of the acid moieties present in all molecules of this family of phytochemicals. Experimentally, only the pKa values of betanin were determined. Recently, it was shown that acid dissociation at different environmental pHs affects its electron-donating capacity, and hence its free radical scavenging power. The same correlation was studied for another compound of the Betalain family, Betalamic Acid. Experimental evidence showed that the free radical scavenging capacity of this compound decreases drastically at pH > 5, but its pKa values were not measured experimentally. With the aim of explaining the behavior of Betalamic Acid as a free radical scavenger, in this paper we predict its pKa values in silico by means of different approaches. Starting from the known experimental pKas of acid compounds, both phytochemical and small organic, two empirical approaches and quantum-mechanical calculation were compared to give a reliable prediction of the pKas of Betalamic Acid. The results of these computational approaches are consistent with the experimental evidence. As shown herein, in silico, the totally dissociated species is predominant at pH > 5 in solution, exploiting the higher electron-donating capability (HOMO energy). Therefore, the computationally estimated pKa values of Betalamic Acid are very reliable. PMID:26253717
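The link between pKa, pH, and the dominant species follows from the Henderson-Hasselbalch relation. A minimal sketch for a monoprotic acid; the pKa value of 4.0 used below is hypothetical, for illustration only, not the paper's predicted value for Betalamic Acid:

```python
def fraction_deprotonated(ph, pka):
    """Fraction of a monoprotic acid present as the dissociated (A-) form:
    f = 1 / (1 + 10**(pKa - pH))  (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# With an illustrative pKa of 4.0, dissociation is nearly complete above
# pH 5, consistent with the sharp change in scavenging capacity around
# that pH described in the abstract.
for ph in (2.0, 4.0, 5.0, 7.0):
    print(f"pH {ph}: {fraction_deprotonated(ph, 4.0):.3f} dissociated")
```

The fraction shifts by an order of magnitude per pH unit away from the pKa, which is why the electron-donating (HOMO-energy) behavior of the fully dissociated species dominates at higher pH.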

  15. Theoretical study of the adsorption of aromatic units on carbon allotropes including explicit (empirical) DFT dispersion corrections and implicitly dispersion-corrected functionals: the pyridine case.

    PubMed

    Ramos-Berdullas, Nicolás; Pérez-Juste, Ignacio; Van Alsenoy, Christian; Mandado, Marcos

    2015-01-01

    The suitability of implicitly dispersion-corrected functionals, namely M06-2X, for the determination of interaction energies and electron polarization densities in adsorption studies of aromatic molecules on carbon allotrope surfaces is analysed by comparing the results with those obtained using explicit dispersion through Grimme's empirical corrections. Several models of increasing size for the graphene sheet, together with one-dimensional curved carbon structures ((5,5), (6,6) and (7,7) armchair single-walled nanotubes) and a two-dimensional curved carbon structure (the C60 fullerene), have been considered as substrates in this work, whereas pyridine has been chosen as the adsorbed aromatic molecule. Comparison with recent experimental estimates of the adsorption energy, and with calculations using periodic boundary conditions on a supercell of 72 carbon atoms, indicates that a finite model containing ninety-six carbon atoms (C96) approximates adsorption on a graphene sheet quite well. Analysis of the interaction energy components reveals that the M06-2X functional accounts for most of the dispersion energy implicitly, followed at a distance by wB97X and B3LYP, whereas B97 and BLYP do not differ much from HF. It has been found that M06-2X corrects only the energy component associated with dispersion and leaves the rest (electrostatic, Pauli and induction) "unaltered" with respect to the other DFT functionals investigated. Moreover, only the M06-2X functional reflects the effect of dispersion on the electron polarization density, whereas for the remaining functionals the polarization density does not differ much from the HF density. This makes the former functional a priori more suitable for the calculation of electron-density-related properties in these adsorption complexes.

  16. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    PubMed Central

    Henggeler, Scott W.; Sheidow, Ashli J.

    2011-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief strategic family therapy. In addition to summarizing the theoretical and clinical bases of these treatments, their results in efficacy and effectiveness trials are examined, with particular emphasis on any demonstrated capacity to achieve favorable outcomes when implemented by real-world practitioners in community practice settings. Special attention is also devoted to research on purported mechanisms of change as well as the long-term sustainability of outcomes achieved by these treatment models. Importantly, we note that the developers of each model have built quality assurance systems to support treatment fidelity and youth and family outcomes, and have formed purveyor organizations to facilitate the large-scale transport of their respective treatments to community settings nationally and internationally. PMID:22283380

  17. Empirical population and public health ethics: A review and critical analysis to advance robust empirical-normative inquiry.

    PubMed

    Knight, Rod

    2016-05-01

    The field of population and public health ethics (PPHE) has yet to fully embrace the generation of evidence as an important project. This article reviews the philosophical debates related to the 'empirical turn' in clinical bioethics, and critically analyses how PPHE has engaged, and can engage, with the philosophical implications of generating empirical data within the task of normative inquiry. A set of five unresolved conceptual and theoretical issues pertaining to population health that could potentially benefit from empirical PPHE approaches to normative inquiry is discussed. Each issue differs from traditional empirical bioethical approaches in that it emphasizes (1) concerns related to the population, (2) 'upstream' policy-relevant health interventions - within and outside of the health care system - and (3) the prevention of illness and disease. Within each theoretical issue, a conceptual example from population and public health approaches to HIV prevention and health promotion is interrogated. Based on the review and critical analysis, this article concludes that empirical-normative approaches to population and public health ethics would be most usefully pursued as an iterative (rather than linear) project, in which the normative informs the empirical questions to be asked and new empirical evidence constantly directs conceptualizations of what constitutes morally robust public health practices. Finally, a conceptualization of an empirical population and public health ethics is advanced in order to open up new interdisciplinary 'spaces', in which empirical and normative approaches to ethical inquiry are transparently (and ethically) integrated.

  18. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  19. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results of game-theoretic analysis and then used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios. The strategy for the game was developed by analyzing five representative electric sector failure scenarios from the AMI functional domain. We characterized these five scenarios into three threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
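The attacker-defender structure described above can be sketched as a small matrix game. The strategy labels and payoff numbers below are hypothetical stand-ins for the rationalized NESCOR-derived rules, not values from the study:

```python
# Minimal attacker-defender matrix game over the three CIA threat
# categories. Payoffs are invented for illustration (zero-sum for
# simplicity); payoff[(a, d)] = (attacker_gain, defender_gain).

attacker_strategies = ["tamper_meter", "spoof_headend", "flood_network"]
defender_strategies = ["harden_meter", "authenticate", "rate_limit"]

payoff = {
    ("tamper_meter", "harden_meter"):  (-2, 2),
    ("tamper_meter", "authenticate"):  (3, -3),
    ("tamper_meter", "rate_limit"):    (3, -3),
    ("spoof_headend", "harden_meter"): (4, -4),
    ("spoof_headend", "authenticate"): (-1, 1),
    ("spoof_headend", "rate_limit"):   (4, -4),
    ("flood_network", "harden_meter"): (2, -2),
    ("flood_network", "authenticate"): (2, -2),
    ("flood_network", "rate_limit"):   (-3, 3),
}

def pure_equilibria():
    """Find pure-strategy Nash equilibria by a mutual best-response check."""
    found = []
    for a in attacker_strategies:
        for d in defender_strategies:
            a_best = all(payoff[(a, d)][0] >= payoff[(a2, d)][0]
                         for a2 in attacker_strategies)
            d_best = all(payoff[(a, d)][1] >= payoff[(a, d2)][1]
                         for d2 in defender_strategies)
            if a_best and d_best:
                found.append((a, d))
    return found

print(pure_equilibria())  # → []
```

This particular toy game has no pure-strategy equilibrium, so optimal play is mixed; exploring such dynamics across many interacting attackers, defenders, and assets is exactly the role of an agent-based simulation.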

  20. Theoretical model and optimization of a novel temperature sensor based on quartz tuning fork resonators

    NASA Astrophysics Data System (ADS)

    Jun, Xu; Bo, You; Xin, Li; Juan, Cui

    2007-12-01

    To accurately measure temperatures, a novel temperature sensor based on a quartz tuning fork resonator has been designed. Its principle is that the resonant frequency of the quartz resonator changes with temperature. The tuning fork resonator has been designed with a new doubly rotated cut and operates in a flexural vibration mode as a temperature sensor. The characteristics of the sensor were evaluated, and the results fully met the development targets. A theoretical model for temperature sensing has been developed, and the sensor structure was analysed by the finite element method (FEM) and optimized, including the tuning fork geometry, the tine electrode pattern and the size of the sensor's elements. The performance curve of output versus measured temperature is given. The results of theoretical analysis and experiments indicate that the sensor's sensitivity can reach 60 ppm °C-1 over a measured temperature range of 0 to 100 °C.
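A frequency-to-temperature conversion with the reported ~60 ppm °C-1 sensitivity can be sketched as follows. The nominal frequency, reference temperature, and the linear model itself are assumptions for this sketch; the paper derives the full model via FEM:

```python
# Illustrative readout for a quartz tuning fork temperature sensor.
# F0_HZ and T0_C are assumed values (32.768 kHz is a typical tuning fork
# frequency); SENSITIVITY is the 60 ppm/degC figure from the abstract.

F0_HZ = 32768.0       # assumed nominal resonant frequency at T0
T0_C = 0.0            # assumed reference temperature
SENSITIVITY = 60e-6   # fractional frequency change per degC

def frequency_from_temperature(t_c: float) -> float:
    """Linear model f = f0 * (1 + S * (T - T0))."""
    return F0_HZ * (1.0 + SENSITIVITY * (t_c - T0_C))

def temperature_from_frequency(f_hz: float) -> float:
    """Invert the linear model for T."""
    return T0_C + (f_hz / F0_HZ - 1.0) / SENSITIVITY

# A 100 degC rise shifts a 32.768 kHz fork by 60 ppm * 100 ≈ 196.6 Hz
print(round(frequency_from_temperature(100.0) - F0_HZ, 1))  # → 196.6
```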

  1. A Model of the Regulation of Nitrogenase Electron Allocation in Legume Nodules (II. Comparison of Empirical and Theoretical Studies in Soybean).

    PubMed Central

    Moloney, A. H.; Guy, R. D.; Layzell, D. B.

    1994-01-01

    In N2-fixing legumes, the proportion of total electron flow through nitrogenase (total nitrogenase activity, TNA) that is used for N2 fixation is called the electron allocation coefficient (EAC). Previous studies have proposed that EAC is regulated by the competitive inhibition of N2 fixation by H2 and that the degree of H2 inhibition can be affected by a nodule's permeability to gas diffusion. To test this hypothesis, EAC was measured in soybean (Glycine max L. Merr.) nodules exposed to various partial pressures of H2 and N2, with or without changes in TNA or nodule permeability to gas diffusion, and the results were compared with the predictions of a mathematical model that combined equations for gas diffusion and competitive inhibition of N2 fixation (A. Moloney and D.B. Layzell [1993] Plant Physiol 103: 421-428). The empirical data clearly showed that decreases in EAC were associated with increases in external pH2, decreases in external pN2, and decreases in nodule permeability to O2 diffusion. The model predicted similar trends in EAC, and the small deviations that occurred between measured and predicted values could be readily accounted for by altering one or more of the following model assumptions: Ki(H2) of nitrogenase (range from 2-4% H2), Km(N2) of nitrogenase (range from 4-5% N2), the allocation of less than 100% of whole-nodule respiration to tissues within the diffusion barrier, and the presence of a diffusion pathway that is open pore versus closed pore. The differences in the open-pore and closed-pore versions of the model suggest that it may be possible to use EAC measurements as a tool for the study of legume nodule diffusion barrier structure and function. The ability of the model to predict EAC provided strong support for the hypothesis that H2 inhibition of N2 fixation plays a major role in the in vivo control of EAC and that the presence of a variable barrier to gas diffusion affects the H2 and N2 concentration in the infected cell and
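The competitive-inhibition picture described above can be sketched numerically: H2 competes with N2 at nitrogenase, so EAC falls as pH2 rises or pN2 falls. The rate form and constants below are illustrative only; the paper's full model additionally couples this kinetics to gas diffusion through the nodule:

```python
# Toy competitive-inhibition model of the electron allocation coefficient
# (EAC). Constants follow the ranges quoted in the abstract
# (Km(N2) ~4-5% N2, Ki(H2) ~2-4% H2); the functional form is a standard
# competitive-inhibition expression, assumed here for illustration.

def eac(p_n2: float, p_h2: float, km_n2: float = 4.5, ki_h2: float = 3.0) -> float:
    """
    EAC under simple competitive inhibition of N2 reduction by H2.
    Partial pressures and constants are in % atm. Electrons not used for
    N2 reduction go to H2 evolution; the obligatory H2 evolution of
    nitrogenase (8 e- per N2: 6 to N2, 2 to H2) caps EAC at 0.75.
    """
    n2_term = p_n2 / km_n2
    h2_term = p_h2 / ki_h2
    occupancy = n2_term / (1.0 + n2_term + h2_term)
    return 0.75 * occupancy

# EAC falls as external pH2 rises, as the empirical data showed...
assert eac(80.0, 0.0) > eac(80.0, 10.0) > eac(80.0, 40.0)
# ...and rises with external pN2
assert eac(80.0, 5.0) > eac(40.0, 5.0)
print(round(eac(80.0, 5.0), 3))  # → 0.652
```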

  2. A comprehensive theoretical model for on-chip microring-based photonic fractional differentiators

    PubMed Central

    Jin, Boyuan; Yuan, Jinhui; Wang, Kuiru; Sang, Xinzhu; Yan, Binbin; Wu, Qiang; Li, Feng; Zhou, Xian; Zhou, Guiyao; Yu, Chongxiu; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.

    2015-01-01

    Microring-based photonic fractional differentiators play an important role in on-chip all-optical signal processing. Unfortunately, previous works do not consider the time-reversal and time-delay characteristics of the microring-based fractional differentiator, nor the effect of the input pulse width on the output. In particular, they cannot explain why the microring-based differentiator with differentiation order n > 1 has a larger output deviation than that with n < 1, or why the microring-based differentiator cannot reproduce the three-peak output waveform of an ideal differentiator with n > 1. In this paper, a comprehensive theoretical model is proposed. The critically coupled microring resonator is modeled as an ideal first-order differentiator, while the under-coupled and over-coupled resonators are modeled as time-reversed ideal fractional differentiators. Traditionally, over-coupled microring resonators are used to form differentiators with 1 < n < 2. However, we demonstrate that a smaller fitting error can be obtained if the over-coupled microring resonator is fitted by an ideal differentiator with n < 1. The time delay of the differentiator is also considered. Finally, the influences of some key factors on the output waveform and deviation are discussed. The proposed theoretical model is beneficial for the design and application of microring-based fractional differentiators. PMID:26381934
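The ideal fractional differentiator that the microring is fitted against applies H(ω) = (jω)^n to the input envelope in the frequency domain. A minimal numerical sketch, using a plain FFT on a Gaussian pulse and omitting the device-specific effects (time delay, time reversal, finite bandwidth) that the paper models:

```python
# Ideal fractional differentiation of a pulse envelope via the spectral
# filter (j*omega)**n. The Gaussian test pulse and sampling grid are
# arbitrary choices for this sketch.

import numpy as np

def fractional_derivative(signal: np.ndarray, dt: float, n: float) -> np.ndarray:
    """Apply (j*omega)**n to the spectrum of `signal` sampled at step dt."""
    omega = 2.0 * np.pi * np.fft.fftfreq(signal.size, d=dt)
    h = np.empty(omega.size, dtype=complex)
    nz = omega != 0
    h[nz] = (1j * omega[nz]) ** n
    h[~nz] = 0.0 if n > 0 else 1.0   # define the DC response explicitly
    return np.fft.ifft(np.fft.fft(signal) * h)

t = np.linspace(-20.0, 20.0, 4096)
dt = t[1] - t[0]
pulse = np.exp(-t**2 / 2.0)          # Gaussian input envelope

# n = 1 reproduces the ordinary derivative d/dt exp(-t^2/2) = -t exp(-t^2/2)
d1 = fractional_derivative(pulse, dt, 1.0).real
print(np.max(np.abs(d1 - (-t * pulse))) < 1e-6)  # → True
```

Applying the n = 0.5 filter twice recovers the first-order result, which is the composition property fractional orders are expected to obey.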

  3. Shape of the self-concept clarity change during group psychotherapy predicts the outcome: an empirical validation of the theoretical model of the self-concept change

    PubMed Central

    Styła, Rafał

    2015-01-01

    Background: Self-Concept Clarity (SCC) describes the extent to which the schemas of the self are internally integrated, well defined, and temporally stable. This article presents a theoretical model that describes how different shapes of SCC change (especially a stable increase and a “V” shape) observed in the course of psychotherapy are related to the therapy outcome. Linking the concepts of Jean Piaget and dynamic systems theory, the study postulates that a stable SCC increase is needed for participants with a relatively healthy personality structure, while SCC change characterized by a “V” shape or fluctuations is optimal for more disturbed patients. Method: A correlational study in a naturalistic setting with repeated measurements (M = 5.8) was conducted on a sample of 85 patients diagnosed with neurosis and personality disorders receiving intensive eclectic group psychotherapy under routine inpatient conditions. Participants filled in the Self-Concept Clarity Scale (SCCS), the Symptoms' Questionnaire KS-II, and the Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. The SCCS was also administered every 2 weeks during psychotherapy. Results: As hypothesized, among the relatively healthiest group of patients a stable SCC increase was related to positive treatment outcome, while more disturbed patients benefited from fluctuations and a “V” shape of SCC change. Conclusions: The findings support the idea that, depending on personality disposition, either a monotonic increase or a transient destabilization of SCC is a sign of good treatment prognosis. PMID:26579001

  4. Depressive symptoms in the Netherlands 1975-1996: a theoretical framework and an empirical analysis of socio-demographic characteristics, gender differences and changes over time.

    PubMed

    Meertens, Vivian; Scheepers, Peer; Tax, Bert

    2003-03-01

    This article examines the longitudinal trend of depressive symptoms in the Netherlands, using large-scale national data recorded over the period 1975-1996. Our analyses showed fluctuations in the overall longitudinal trend. On the basis of a general theoretical framework, we formulated hypotheses concerning which socio-demographic characteristics determine the likelihood of suffering from depressive symptoms and how these associations might have changed over time. Our results revealed that people on low incomes, unemployed people, unmarried people and those who had given up their church membership were associated with depressive symptoms. Some associations between socio-demographic categories and depressive symptoms have changed over time. Divorced people have become progressively less likely to suffer from depressive symptoms compared with married people, whereas the reverse holds for those who were never married. People on low incomes have become more likely to suffer from depressive symptoms over time in comparison to people with the highest incomes. Gender differences in these associations were also found: educational level and church attendance were more beneficial to women in protecting them from depressive symptoms than they were to men.

  5. The development of a fast radiative transfer model based on an empirical orthogonal functions (EOF) technique

    NASA Astrophysics Data System (ADS)

    Havemann, Stephan

    2006-12-01

    Remote sensing with the new generation of highly spectrally resolving instruments, like the Atmospheric Research Interferometer Evaluation System (ARIES), or the assimilation of highly resolved satellite spectra into Numerical Weather Prediction (NWP) systems requires radiative transfer computations that deliver results essentially instantaneously. This paper reports on the development of such a new fast radiative transfer model. The model is based on an Empirical Orthogonal Functions (EOF) technique. It can simulate sensors with different characteristics and in different spectral ranges, from the solar to the infrared. For the purpose of airborne remote sensing, the fast model has been designed to work at any altitude and along slant paths, looking down or up. The fast model works for situations with diverse temperature and humidity profiles to an accuracy of better than 0.01 K for most instrument channels. It works for clear-sky atmospheres and is applicable to atmospheres with scattering layers of aerosols or clouds. The fast model is trained with a large set of diverse atmospheric profiles, for which corresponding high-resolution spectra are obtained by forward calculations. An EOF analysis is performed on these spectra, and only the leading EOFs are retained (data compression). When the fast model is applied to a new independent profile, only the weights of the EOFs need to be calculated (i.e., predicted). Monochromatic radiances at suitable frequencies are used as predictors. The frequency selection is done by a cluster algorithm, which sorts frequencies with similar characteristics into clusters.

  6. Assessment of diffuse trace metal inputs into surface waters - Combining empirical estimates with process based simulations

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Steinz, André; Schmidt, Jürgen

    2015-04-01

    As a result of mining activities since the 13th century, surface waters of the German Mulde catchment suffer from deleterious dissolved and sediment-attached lead (Pb) and zinc (Zn) inputs. The leaching rate of trace metals with drainage water is a significant criterion for assessing trace metal concentrations of soils and the associated risks of groundwater pollution. However, the vertical transport rates of trace metals in soils are difficult to quantify. Monitoring is restricted to small lysimeter plots, which limits the transferability of results. Additionally, the solid-liquid transfer conditions in soils are highly variable, primarily due to the fluctuating retention time of percolating soil water. In contrast, lateral sediment-attached trace metal inputs are mostly associated with soil erosion and the resulting sediment inputs into surface waters. Since soil erosion by water is related to rare single events, monitoring and empirical estimates reveal visible shortcomings. This gap in knowledge can only be closed by process-based model calculations. Concerning these calculations, it has to be considered that Pb and Zn are predominantly attached to the fine-grained soil particles (<0.063 mm). The selective nature of soil erosion causes a preferential transport of these fine particles, while less contaminated larger particles remain on site. Consequently, trace metals are enriched in the eroded sediment compared to the source soil. This paper aims to introduce both a new method that allows the assessment of trace metal leaching rates from contaminated top soils under standardised transfer conditions and a process-based modelling approach for sediment-attached trace metal inputs into surface waters. Pb and Zn leaching rates amount to 20 Mg ha-1 yr-1 and 114 Mg ha-1 yr-1, respectively. Deviations from the observed dissolved trace metal yields at the Bad Düben gauging station are caused by plant uptake and subsoil retention. Sediment-attached Pb and Zn input rates amount to 114 Mg ha-1 yr
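The enrichment effect described above — erosion preferentially moves the fine (<0.063 mm) fraction that carries most of the Pb and Zn, so eroded sediment is enriched relative to the source soil — can be sketched as a simple load calculation. All numbers are illustrative, not values from the Mulde catchment study:

```python
# Toy sediment-attached metal load calculation with an enrichment ratio.

def sediment_metal_load(sediment_yield_t: float, soil_conc_mg_kg: float,
                        fines_soil: float, fines_sediment: float) -> float:
    """
    Sediment-attached metal load (kg) for a given sediment yield (t),
    assuming (for this sketch) that the metal travels entirely with the
    fine fraction. The enrichment ratio is the fines share in the
    transported sediment over that in the source soil.
    """
    enrichment = fines_sediment / fines_soil
    # t -> kg of sediment, mg/kg -> kg/kg of metal, times enrichment
    return sediment_yield_t * 1000.0 * soil_conc_mg_kg * 1e-6 * enrichment

# 50 t of sediment from a soil at 300 mg/kg Pb, fines share rising
# from 30% in the soil to 75% in the eroded sediment
load_kg = sediment_metal_load(50.0, 300.0, 0.30, 0.75)
print(round(load_kg, 1))  # → 37.5
```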

  7. A theoretical analysis model of realizing wavelength converter based on saturable absorber

    NASA Astrophysics Data System (ADS)

    Zhao, Tonggang; Ren, Jianhua; Zhao, Ronghua; Wang, Lili; Rao, Lan; Lin, Jintong

    2005-02-01

    As a key device, the all-optical wavelength converter (AOWC) will play an important role in future optical communication and optical signal processing systems. In this paper, the switching characteristics of a wavelength converter based on a saturable absorber in semiconductor lasers are researched. This kind of conversion mechanism possesses several advantages, such as a simple structure, low cost and high stability. This paper is organized as follows. Firstly, utilizing rate equations, a new theoretical model of wavelength conversion based on a saturable absorber is put forward. Next, the frequency-modulation response of the wavelength conversion is discussed under small-signal analysis based on the theoretical model. Lastly, numerical solution results are given for the case where an external signal light is injected into the saturable absorber region of the semiconductor laser. The characteristics of the wavelength conversion are simulated for different optical parameters, including the injection current, the input signal optical power and the bit rate. These results are useful for the realization and optimal design of wavelength converters based on saturable absorbers.

  8. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    NASA Astrophysics Data System (ADS)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-07-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.

  9. Mismatch between electrophysiologically defined and ventriculography based theoretical targets for posteroventral pallidotomy in Parkinson's disease

    PubMed Central

    Merello, M; Cammarota, A; Cerquetti, D; Leiguarda, R

    2000-01-01

    OBJECTIVES—Over the past few years many reports have shown that posteroventral pallidotomy is an effective method for treating advanced cases of Parkinson's disease. The main differences with earlier descriptions were the use of standardised evaluation with new high resolution MRI studies and of single cell microrecording which can electrophysiologically define the sensorimotor portion of the internal globus pallidus (GPi). The present study was performed on a consecutive series of 40 patients with Parkinson's disease who underwent posteroventral pallidotomy to determine localisation discrepancies between the ventriculography based theoretical and the electrophysiologically defined target for posteroventral pallidotomy.
METHODS—The tentative location of the posteroventral GPi portion was defined according to the proportional Talairach system. Single cell recording was performed in all patients. The definitive target was chosen according to the feasibility of recording single cells with GPi cell features, including the presence of motor drive and correct identification of the internal capsule and of the optic tract by activity recording and microstimulation.
RESULTS—In all 40 patients the electrophysiologically defined sensorimotor portion of the GPi was lesioned, with significantly improved cardinal Parkinson's disease symptoms as well as levodopa induced dyskinesias, without damage to the internal capsule or optic tract. Significant differences between the localisation of the ventriculography based theoretical versus electrophysiological target were found in depth (p<0.0008) and posteriority (p<0.04). No significant differences were found in laterality between both approaches. Difference ranges were 8 mm for laterality, 6.5 mm for depth, and 10 mm for posteriority.
CONCLUSIONS—Electrophysiologically defined lesion of GPi for posteroventral pallidotomy, shown to be effective for treating Parkinson's disease, is located at a significantly different

  10. The Hedonic Wage Technique as a Tool for Estimating the Costs of School Personnel: A Theoretical Exposition with Implications for Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    Present systems for the apportionment of grants from the state or federal level to local public school districts are based primarily on measures of district wealth as modified by weightings for the characteristics of the student population. Until recently little attention has been given to differences among districts in the costs of providing…

  11. An Empirically-based Steady-state Friction Law and its Implications for Fault Stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S. B.; Di Toro, G.; Violay, M.

    2015-12-01

    Empirically-based rate-and-state friction laws (RSFL) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters (e.g. A-B = dτ/dlog(V), with τ and V the shear stress and slip rate, respectively) allow us to define the stability conditions of a fault. According to the RSFL, if A-B > 0, τ increases with V (rate-hardening behavior), resulting in unconditionally stable behavior; if A-B < 0, τ decreases with V (rate-weakening behavior), potentially resulting in unstable behavior leading to dynamic runaway. Given that τ at steady-state conditions also allows us to define a critical fault stiffness, the RSFL determine a condition of stability for faults as their stiffness approaches the critical conditions. However, the conditions of fault stability, determined by the critical stiffness under the assumption of either rate-weakening or rate-hardening behavior, might be restrictive given that frictional properties change appreciably as a function of slip or slip rate. Moreover, the RSFL were determined from experiments conducted at sub-seismic slip rates (< 1 cm/s), and their extrapolation to earthquake deformation conditions remains questionable in view of the experimental evidence of large dynamic weakening at seismic slip rates and the plethora of slip events which characterize the seismic cycle. Here, we propose a modified RSFL based on the review of a large published and unpublished dataset of rock-friction experiments performed with different testing machines (rotary shear, bi-axial, tri-axial). The modified RSFL is valid at steady-state conditions from sub-seismic to seismic slip rates (0.1 μm/s
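The stability criterion described above can be sketched numerically: estimate (A - B) = dτ/d(ln V) as the slope of steady-state shear stress against the logarithm of slip rate, then classify the behaviour by its sign. The data points below are invented for illustration; the modified RSFL in the paper is fitted to a large experimental dataset instead:

```python
# Estimate (A - B) from hypothetical steady-state friction measurements
# and classify rate-weakening vs rate-hardening behaviour.

import math

def a_minus_b(slip_rates, shear_stresses):
    """Least-squares slope of shear stress against ln(slip rate)."""
    x = [math.log(v) for v in slip_rates]
    n = len(x)
    mx = sum(x) / n
    my = sum(shear_stresses) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, shear_stresses))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Invented steady-state data: friction drops with slip rate
v = [1e-6, 1e-5, 1e-4, 1e-3]   # slip rate, m/s
tau = [6.0, 5.8, 5.6, 5.4]     # steady-state shear stress, MPa

slope = a_minus_b(v, tau)
print("rate-weakening (potentially unstable)" if slope < 0
      else "rate-hardening (stable)")  # → rate-weakening (potentially unstable)
```

A negative slope (A-B < 0) flags the rate-weakening regime in which instability becomes possible once the fault stiffness falls below the critical value.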

  12. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  13. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    PubMed Central

    Lüdecke, Daniel

    2014-01-01

    Introduction Health care providers seek to improve patient-centred care. Due to the fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms which ensure sufficient consideration of patient centredness. Theory and methods Seventeen qualitative interviews were conducted in hospitals in metropolitan areas of northern Germany. The documentary method, embedded in a systems theoretical framework, was used to describe and analyse the data and to provide insight into the specific perception of organisational behaviour in integrated care. Results The findings suggest that integrated care partnerships rely on networks based on professional autonomy in a context of reliability. The relationships of network partners are heavily based on informality. This corresponds with a systems theoretical conception of organisations, which are assumed to be autonomous in their decision-making. Conclusion and discussion Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to the missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended. PMID:25411573

  14. The neural mediators of kindness-based meditation: a theoretical model

    PubMed Central

    Mascaro, Jennifer S.; Darcher, Alana; Negi, Lobsang T.; Raison, Charles L.

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work. PMID:25729374

  15. The neural mediators of kindness-based meditation: a theoretical model.

    PubMed

    Mascaro, Jennifer S; Darcher, Alana; Negi, Lobsang T; Raison, Charles L

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another's affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  16. Universal Design for Instruction in Postsecondary Education: A Systematic Review of Empirically Based Articles

    ERIC Educational Resources Information Center

    Roberts, Kelly D.; Park, Hye Jin; Brown, Steven; Cook, Bryan

    2011-01-01

    Universal Design for Instruction (UDI) in postsecondary education is a relatively new concept/framework that has generated significant support. The purpose of this literature review was to examine existing empirical research, including qualitative, quantitative, and mixed methods, on the use of UDI (and related terms) in postsecondary education.…

  17. Comparisons of experiment with cellulose models based on electronic structure and empirical force field theories

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Studies of cellobiose conformations with HF/6-31G* and B3LYP/6-31+G* quantum theory [1] gave a reference for studies with the much faster empirical methods such as MM3, MM4, CHARMM and AMBER. The quantum studies also enable a substantial reduction in the number of exo-cyclic group orientations that...

  18. Monitoring of Qualifications and Employment in Austria: An Empirical Approach Based on the Labour Force Survey

    ERIC Educational Resources Information Center

    Lassnigg, Lorenz; Vogtenhuber, Stefan

    2011-01-01

    The empirical approach referred to in this article describes the relationship between education and training (ET) supply and employment in Austria; the use of the new ISCED (International Standard Classification of Education) fields of study variable makes this approach applicable abroad. The purpose is to explore a system that produces timely…

  19. Model Selection for Equating Testlet-Based Tests in the NEAT Design: An Empirical Study

    ERIC Educational Resources Information Center

    He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia

    2012-01-01

    For tests solely composed of testlets, the local item independence assumption tends to be violated. This study, using empirical data from a large-scale state assessment program, investigated the effects of using different models on equating results under the non-equivalent group anchor-test (NEAT) design. Specifically, the…

  20. Understanding Transactional Distance in Web-Based Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Huang, Xiaoxia; Chandra, Aruna; DePaolo, Concetta A.; Simmons, Lakisha L.

    2016-01-01

    Transactional distance is an important pedagogical theory in distance education that calls for more empirical support. The purpose of this study was to verify the theory by operationalizing and examining the relationship of (1) dialogue, structure and learner autonomy to transactional distance, and (2) environmental factors and learner demographic…

  1. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second

  2. Empirical model of equatorial electrojet based on ground-based magnetometer data during solar minimum in fall

    NASA Astrophysics Data System (ADS)

    Hamid, Nurul Shazana Abdul; Liu, Huixin; Uozumi, Teiji; Yoshikawa, Akimasa

    2015-12-01

    In this study, we constructed an empirical model of the equatorial electrojet (EEJ), including local time and longitudinal dependence, based on simultaneous data from 12 magnetometer stations located in six longitude sectors. An analysis was carried out using the equatorial electrojet index, EUEL, calculated from the geomagnetic northward H component. The magnetic EEJ strength is calculated as the difference between the normalized EUEL index of the magnetic dip equator station and the normalized EUEL index of the off-dip equator station located beyond the EEJ band. Analysis showed that this current is always strongest in the South American sector, regardless of local time (LT), and weakest in the Indian sector between 0900 and 1000 LT, shifting to the African sector from 1100 to 1400 LT. These longitude variations of the EEJ roughly follow variations of the inverse main field strength along the dip equator, except for the Indian and Southeast Asian sectors. The results showed that the EEJ component derived from the model exhibits a pattern similar to the EEJ measured from ground data during noontime, mainly before 1300 LT.
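    The EEJ strength computation described above, the difference of normalized EUEL indices from a dip-equator station and an off-dip station outside the EEJ band, can be sketched as follows. Normalizing each series by its absolute maximum is an illustrative assumption; the abstract does not specify the normalization used.

    ```python
    import numpy as np

    def eej_strength(euel_dip, euel_off_dip):
        """EEJ strength as the difference between normalized EUEL indices of a
        magnetic dip-equator station and an off-dip-equator station located
        beyond the EEJ band. Normalization by the absolute maximum of each
        series is an assumption for illustration only."""
        dip = np.asarray(euel_dip, dtype=float)
        off = np.asarray(euel_off_dip, dtype=float)
        return dip / np.max(np.abs(dip)) - off / np.max(np.abs(off))
    ```

    A positive value then indicates an enhancement at the dip equator relative to the background Sq variation recorded off the equator.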

  3. Developing Empirically-Based Ground Truth Criteria for Varying Geological Complexity using Regional Seismic Networks

    NASA Astrophysics Data System (ADS)

    Brazier, R. A.; O'Donnell, J.; Boomer, K.; Nyblade, A.; Kokoska, J.; Liu, S.

    2011-12-01

    We have extended the approaches of Bondár et al. (2004), Bondár and McLaughlin (2009), and Boomer et al. (2010) by developing new empirically based ground truth (EBGT) local criteria for a variety of geologic settings for which data sets containing GT0 events (explosions and mine tremors) are available, local crustal structure is well known, and hand-picked arrival times have been obtained. Boomer et al. (2010) describes the development of local criteria for the simple structure of the Archean Kaapvaal Craton in southern Africa. Continuing the development of local criteria in regions of varying geologic complexity, we now have criteria for the Main Ethiopian Rift and for the Tibetan plateau. In the geologically complex region of the Main Ethiopian Rift, we use the 2003 Ethiopia-Afar Geoscientific Lithosphere Experiment (EAGLE; Maguire et al., 2003) data to obtain EBGT5 95% criteria. Four of the 25 large refraction line shots were used as reference events to develop the criteria; the remainder of the shots are used for verification. We require an event to be recorded on at least 8 stations within the Pg/Pn crossover distance and a network quality metric (Bondár and McLaughlin, 2009) less than 0.43 for an event to be classified as EBGT5 95%. Using these criteria to identify GT events within the Ethiopian Broadband Seismic Experiment, we have identified ten events to add to the NNSA knowledge database. There are also 196 potential events from the EAGLE dataset from which we expect to yield an additional 20 GT events. The crust and upper mantle structure of the Tibetan plateau is arguably more complicated than for the Kaapvaal Craton and yet less complicated than the Main Ethiopian Rift, and includes a number of prominent suture zones. Five of the 11 larger shots from the International Deep Profiling of Tibet and the Himalaya (INDEPTH III) refraction line were used to develop the criteria. The remaining 6 shots are used to validate the criteria. The criteria for Tibet
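    The station-count and network-metric thresholds quoted above for the Main Ethiopian Rift reduce to a simple two-condition check. The following helper is purely illustrative; the function and argument names are not from the paper.

    ```python
    def meets_ebgt_criteria(n_stations_in_crossover, network_metric,
                            min_stations=8, metric_threshold=0.43):
        """Check the Main Ethiopian Rift local ground-truth criteria quoted
        above: at least 8 recording stations within the Pg/Pn crossover
        distance and a network quality metric (Bondar and McLaughlin, 2009)
        below 0.43. Names and signature are illustrative assumptions."""
        return (n_stations_in_crossover >= min_stations
                and network_metric < metric_threshold)
    ```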

  4. Theoretical studies on CO2 capture behavior of quaternary ammonium-based polymeric ionic liquids.

    PubMed

    Wang, Tao; Ge, Kun; Chen, Kexian; Hou, Chenglong; Fang, Mengxiang

    2016-05-14

    Quaternary ammonium-based polymeric ionic liquids (PILs) are novel CO2 sorbents as they have high capacity, high stability and high binding energy. Moreover, the binding energy of ionic pairs to CO2 is tunable by changing the hydration state, so that the sorbent can be regenerated through humidity adjustment. In this study, theoretical calculations were conducted to reveal the mechanism of the humidity swing CO2 adsorption, based on model compounds of the quaternary ammonium cation and carbonate anions. The electrostatic potential map demonstrates that the anion, rather than the cation, is the chemically preferred site for CO2 adsorption. Further, the proton transfer process from water to carbonate at the sorbent interface is successfully depicted with an intermediate which has a higher energy state. By determining the CO2 adsorption energy and activation energy at different hydration states, it is discovered that water can promote CO2 adsorption by reducing the energy barrier of proton transfer. The adsorption/desorption equilibrium shifts toward desorption on adding water, which constitutes the theoretical basis for humidity swing. By analyzing the hydrogen bonding and structure of the water molecules, it is interesting to find that CO2 adsorption weakens the hydrophilicity of the sorbent and results in a release of water. The requirement of latent heat for the phase change of water could significantly reduce the heat of adsorption. This special "self-cooling" effect during gas adsorption can lower the temperature of the sorbent and benefit the adsorption isotherms.

  5. Estimation of daily global solar radiation using wavelet regression, ANN, GEP and empirical models: A comparative study of selected temperature-based approaches

    NASA Astrophysics Data System (ADS)

    Sharifi, Sayed Saber; Rezaverdinejad, Vahid; Nourani, Vahid

    2016-11-01

    Although sunshine-based models generally perform better than temperature-based models for estimating solar radiation, the limited availability of sunshine duration records makes the development of temperature-based methods inevitable. This paper presents a comparative study between Artificial Neural Networks (ANNs), Gene Expression Programming (GEP), Wavelet Regression (WR) and 5 selected temperature-based empirical models for estimating the daily global solar radiation. A new combination of inputs including four readily accessible parameters has been employed: daily mean clearness index (KT), temperature range (ΔT), theoretical sunshine duration (N) and extraterrestrial radiation (Ra). Ten statistical indicators, combined into a Global Performance Indicator (GPI), are used to ascertain the suitability of the models. The performance of the selected models across the range of solar radiation values was depicted by quantile-quantile (Q-Q) plots. Comparing these plots makes it evident that ANNs can cover a broader range of solar radiation values. The results indicate that the performance of the ANN model was clearly superior to that of the other models. The findings also demonstrate that the WR model performed well and gave highly accurate estimates of daily global solar radiation.
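    As a concrete example of the kind of temperature-based empirical model compared above, the well-known Hargreaves-Samani formula estimates daily global radiation from the temperature range ΔT and the extraterrestrial radiation Ra. Whether this particular formula was among the five models tested is not stated in the abstract, so treat the sketch as illustrative only.

    ```python
    import math

    def hargreaves_samani(t_max, t_min, ra, k_rs=0.17):
        """Estimate daily global solar radiation (same units as ra) from the
        daily temperature range (t_max - t_min, in deg C) and extraterrestrial
        radiation ra. k_rs is an empirical coefficient, typically around 0.16
        for interior sites and 0.19 for coastal sites."""
        return k_rs * math.sqrt(t_max - t_min) * ra
    ```

    For example, with t_max = 30, t_min = 18 and Ra = 35 MJ m-2 day-1, the estimate is roughly 20.6 MJ m-2 day-1.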

  6. Comparing and combining physically-based and empirically-based approaches for estimating the hydrology of ungauged catchments

    NASA Astrophysics Data System (ADS)

    Booker, D. J.; Woods, R. A.

    2014-01-01

    Methods for estimating various hydrological indices at ungauged sites were compared. Methods included a TopNet rainfall-runoff model and a Random Forest empirical model. TopNet estimates were improved through correction using Random Forest estimates. Random Forests provided the best estimates of all indices except mean flow. Mean flow was best estimated using an already published empirical method.

  7. A Reliability Test of a Complex System Based on Empirical Likelihood

    PubMed Central

    Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic for the complex system and derive the limit distribution of the test statistic. We can therefore obtain confidence intervals for the reliability and make statistical inferences. Simulation studies also verify the theoretical results. PMID:27760130

  8. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility mediates the relationship between organizational learning capability and business model innovation; and interactions among strategic flexibility, explorative learning and exploitative learning play significant roles in both radical and incremental business model innovation.

  9. Deep in Data: Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository: Preprint

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    An opportunity is available to use home energy consumption and building description data to develop a standardized accuracy test for residential energy analysis tools, that is, to test the ability of uncalibrated simulations to match real utility bills. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This may make it possible to model many homes expediently, and thus to implement software accuracy test cases by applying the translated data. This paper describes progress toward, and issues related to, developing a usable, standardized, empirical-data-based software accuracy test suite.

  10. (E)-2-[(2-hydroxybenzylidene)amino]phenylarsonic acid Schiff base: Synthesis, characterization and theoretical studies

    NASA Astrophysics Data System (ADS)

    Judith Percino, M.; Cerón, Margarita; Castro, María Eugenia; Ramírez, Ricardo; Soriano, Guillermo; Chapela, Víctor M.

    2015-02-01

    The structure of the Schiff base (E)-2-[(2-hydroxybenzylidene)amino]phenylarsonic [(E)-HBAPhAA], synthesized from salicylaldehyde and o-aminophenylarsonic acid in the presence of HCl, was characterized by FTIR, 1H NMR, EI-MS, UV-Vis spectroscopy, and X-ray crystallography. The crystal belonged to the monoclinic space group P21/c. Two molecules formed a dimer via intermolecular interactions due to the attachment of H atoms to O1, O3 and O4, with O–H bond distances within reasonable ranges, ca. 0.84(3) Å. The structure also showed two intramolecular interactions of 2.634(2) and 3.053(2) Å for N–H⋯O hydrogen bonds, which caused the structures to be almost planar. We performed a theoretical analysis using DFT at the B3LYP/6-31+G(d,p) level to determine the stability of the E and Z conformers. The geometry analysis of the E- and Z-isomers revealed an interconversion energy barrier between the E/Z isomers of 22.72 kcal mol-1. We also theoretically analyzed the keto form of the E-isomer and observed a small energy barrier for the tautomerization of 6.17 kcal mol-1.

  11. Inclusion of persistence length-based secondary structure in replica field theoretic models of heteropolymer freezing

    NASA Astrophysics Data System (ADS)

    Weber, Jeffrey K.; Pande, Vijay S.

    2013-09-01

    The protein folding problem has long represented a "holy grail" in statistical physics due to its physical complexity and its relevance to many human diseases. While past theoretical work has yielded apt descriptions of protein folding landscapes, recent large-scale simulations have provided insights into protein folding that were impractical to obtain from early theories. In particular, the role that non-native contacts play in protein folding, and their relation to the existence of misfolded, β-sheet rich trap states on folding landscapes, has emerged as a topic of interest in the field. In this paper, we present a modified model of heteropolymer freezing that includes explicit secondary structural characteristics which allow observations of "intramolecular amyloid" states to be probed from a theoretical perspective. We introduce a variable persistence length-based energy penalty to a model Hamiltonian, and we illustrate how this modification alters the phase transitions present in the theory. We find, in particular, that inclusion of this variable persistence length increases both generic freezing and folding temperatures in the model, allowing both folding and glass transitions to occur in a more highly optimized fashion. We go on to discuss how these changes might relate to protein evolution, misfolding, and the emergence of intramolecular amyloid states.

  12. Metamaterial-based theoretical description of light scattering by metallic nano-hole array structures

    SciTech Connect

    Singh, Mahi R.; Najiminaini, Mohamadreza; Carson, Jeffrey J. L.; Balakrishnan, Shankar

    2015-05-14

    We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in the second quantized form. Working with the Hamiltonian, a theory of scattering cross section was developed based on the quantum scattering theory and Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.

  13. Information theoretic discrepancy-based iterative reconstruction (IDIR) algorithm for limited angle tomography

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Lee, Jongha; Lee, Kangui; Sung, Younghun; Lee, SeungDeok

    2012-03-01

    X-ray tomosynthesis, which measures several low-dose projections over a limited angular range, has been investigated as an alternative to X-ray mammography for breast cancer screening. Extending the scan coverage increases the vertical resolution by mitigating interplane blurring. The implementation of wide-angle tomosynthesis equipment, however, may not be straightforward, mainly due to image deterioration from statistical noise in the exterior projections. In this paper, we adopt a voltage modulation scheme to enlarge the coverage of the tomosynthesis scan. Higher tube voltages are used at outer angles, which offers sufficient penetrating power for outlying frames in which the pathway of X-ray photons is elongated. To reconstruct 3D information from voltage-modulated projections, we propose a novel algorithm, named the information theoretic discrepancy based iterative reconstruction (IDIR) algorithm, which accounts for the polychromatic acquisition model. The generalized information theoretic discrepancy (GID) is newly employed as the objective function. Using particular features of the GID, the cost function is derived in terms of imaginary variables with energy dependency, which leads to a tractable optimization problem without using the monochromatic approximation. In preliminary experiments using simulated and experimental equipment, the proposed imaging architecture and IDIR algorithm showed superior performance over conventional approaches.

  14. A theoretical and experimental evaluation of imidazolium-based ionic liquids for atmospheric mercury capture.

    PubMed

    Iuga, Cristina; Solís, Corina; Alvarez-Idaboy, J Raúl; Martínez, Miguel Angel; Mondragón, Ma Antonieta; Vivier-Bunge, Annik

    2014-05-01

    In this work, the capacity of three different imidazolium-based ionic liquids (ILs) for atmospheric mercury capture has been evaluated. Theoretical calculations using monomer and dimer models of ILs showed that [BMIM]⁺[SCN]⁻ and [BMIM]⁺[Cl]⁻ ionic liquids capture gaseous Hg⁰, while [BMIM]⁺[PF₆]⁻ shows no ability for this purpose. These findings are supported by experimental data obtained using particle induced X-ray emission (PIXE) trace element analysis. Experimental and theoretical infrared data of the ILs were obtained before and after exposure to Hg. In all cases, no displacement of the bands was observed, indicating that the interaction does not significantly affect the force constants of substrate bonds. This suggests that van der Waals forces are the main forces responsible for mercury capture. Since the anion-absorbate is the driving force of the interaction, the largest charge-volume ratio of [Cl]⁻ could explain the higher affinity for mercury sequestration of the [BMIM]⁺[Cl]⁻ salt. PMID:24781855

  15. Smartphone-Based, Self-Administered Intervention System for Alcohol Use Disorders: Theory and Empirical Evidence Basis

    PubMed Central

    Dulin, Patrick L.; Gonzalez, Vivian M.; King, Diane K.; Giroux, Danielle; Bacon, Samantha

    2013-01-01

    Advances in mobile technology provide an opportunity to deliver in-the-moment interventions to individuals with alcohol use disorders, yet availability of effective “apps” that deliver evidence-based interventions is scarce. We developed an immediately available, portable, smartphone-based intervention system whose purpose is to provide stand-alone, self-administered assessment and intervention. In this paper, we describe how theory and empirical evidence, combined with smartphone functionality contributed to the construction of a user-friendly, engaging alcohol intervention. With translation in mind, we discuss how we selected appropriate intervention components including assessments, feedback and tools, that work together to produce the hypothesized outcomes. PMID:24347811

  17. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare 2 hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelisation of the empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a clear improvement in the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
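    The "empirical approach" described above, dressing a forecast with the statistics of past model errors stratified by streamflow quantile class and lead time, can be sketched as follows. The stratification is assumed to have been done by the caller; the helper below only applies the empirical error quantiles, and its name and signature are illustrative, not from the paper.

    ```python
    import numpy as np

    def dress_forecast(forecast, past_errors, probs=(0.1, 0.25, 0.5, 0.75, 0.9)):
        """Dress a deterministic streamflow forecast with empirical quantiles of
        archived model errors (observed minus forecast). past_errors should come
        from the sub-sample matching this forecast's streamflow quantile class
        and lead time; selecting that sub-sample is left to the caller."""
        errors = np.asarray(past_errors, dtype=float)
        return forecast + np.quantile(errors, probs)
    ```

    The result is a small ensemble of plausible streamflow values around the deterministic forecast, whose spread reflects the model's historical error for similar flows.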

  18. Theoretical study of the nontraditional enol-based photoacidity of firefly oxyluciferin.

    PubMed

    Pinto da Silva, Luís; Esteves da Silva, Joaquim C G

    2015-02-01

    A theoretical analysis of the enol-based photoacidity of oxyluciferin in water is presented. The basis for this phenomenon is found to be the hydrogen-bonding network that involves the conjugated photobase of oxyluciferin. The hydrogen-bonding network involving the enolate thiazole moiety is stronger than that of the benzothiazole phenolate moiety. Therefore, enolate oxyluciferin should be stabilized versus the phenolate anion. This difference in strength is attributed to the fact that the thiazole moiety has more potential hydrogen-bond acceptors near the proton donor atom than the benzothiazole moiety. Moreover, the phenol-based excited-state proton transfer leads to a decrease in the hydrogen-bond acceptor potential of the thiazole atoms. The ground-state enol-based acidity of oxyluciferin is also studied. This phenomenon can be explained by stabilization of the enolate anion through strengthening of a bond between water and the nitrogen atom of the thiazole ring, in an enol-based proton-transfer-dependent way. PMID:25404255

  20. Boron based two-dimensional crystals: theoretical design, realization proposal and applications.

    PubMed

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-12-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest for two-dimensional (2D) layered materials. Besides graphene, many other 2D materials have been successfully produced by experiment, such as silicene, monolayer MoS2, few-layer black phosphorus and so on. As a neighbor of carbon in the periodic table, element boron is interesting and many researchers have contributed their efforts to realize boron related 2D structures. These structures may be significant both in fundamental science and future technical applications in nanoelectronics and nanodevices. In this review, we summarize the recent developments of 2D boron based materials. The theoretical design, possible experimental realization strategies and their potential technical applications are presented and discussed. Also, the current challenges and prospects of this area are discussed. PMID:26523799

  1. Theoretical studies of d(A:T)-based parallel-stranded DNA duplexes.

    PubMed

    Cubero, E; Luque, F J; Orozco, M

    2001-12-01

    Poly d(A:T) parallel-stranded DNA duplexes based on Hoogsteen and reverse Watson-Crick hydrogen bond pairing are studied by means of extensive molecular dynamics (MD) simulations and molecular mechanics coupled to Poisson-Boltzmann (MM-PB/SA) calculations. The structural, flexibility, and reactivity characteristics of the Hoogsteen and reverse Watson-Crick parallel duplexes are described from the analysis of the trajectories. Theoretical calculations show that the two parallel duplexes are less stable than the antiparallel Watson-Crick duplex. The difference in stability between antiparallel and parallel duplexes increases steadily as the length of the duplex increases. The reverse Watson-Crick arrangement is slightly more stable than the Hoogsteen duplex, with the difference also increasing linearly with the length of the duplex. A subtle balance of intramolecular and solvation terms is responsible for the preference for a given helical structure.

  2. Boron based two-dimensional crystals: theoretical design, realization proposal and applications.

    PubMed

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-12-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest in two-dimensional (2D) layered materials. Besides graphene, many other 2D materials, such as silicene, monolayer MoS2 and few-layer black phosphorus, have been produced experimentally. As a neighbor of carbon in the periodic table, boron is of particular interest, and many researchers have worked to realize boron-related 2D structures. Such structures may be significant both in fundamental science and in future technical applications such as nanoelectronics and nanodevices. In this review, we summarize recent developments in 2D boron-based materials. The theoretical design, possible experimental realization strategies and potential technical applications are presented and discussed, along with the current challenges and prospects of this area.

  3. Boron based two-dimensional crystals: theoretical design, realization proposal and applications

    NASA Astrophysics Data System (ADS)

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-11-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest in two-dimensional (2D) layered materials. Besides graphene, many other 2D materials, such as silicene, monolayer MoS2 and few-layer black phosphorus, have been produced experimentally. As a neighbor of carbon in the periodic table, boron is of particular interest, and many researchers have worked to realize boron-related 2D structures. Such structures may be significant both in fundamental science and in future technical applications such as nanoelectronics and nanodevices. In this review, we summarize recent developments in 2D boron-based materials. The theoretical design, possible experimental realization strategies and potential technical applications are presented and discussed, along with the current challenges and prospects of this area.

  4. Nanoscale deflection detection of a cantilever-based biosensor using MOSFET structure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Paryavi, Mohsen; Montazeri, Abbas; Tekieh, Tahereh; Sasanpour, Pezhman

    2016-10-01

    A novel method for detecting biological species based on measurement of cantilever deflection is proposed and numerically evaluated. Employing the cantilever as the moving gate of a MOSFET structure, its deflection can be analyzed through the current characteristics of the MOSFET. With the cantilever located as a suspended gate above the oxide layer, the distance between the cantilever and the oxide changes the carrier concentration. Accordingly, this results in different current-voltage characteristics of the device, which can be easily measured with simple apparatus. To verify the proposed method, the performance of the system was analyzed theoretically using the COMSOL platform. The simulation results confirm the performance and sensitivity of the proposed method.

  5. Interracial Marriages: Empirical and Theoretical Considerations

    ERIC Educational Resources Information Center

    Aldridge, Delores P.

    1978-01-01

    This paper summarizes the research which has been done on interracial marriages in areas such as incidence of interracial marriages, causal factors, sociopsychological characteristics, and the problems encountered by the marriage partners and their children. (Author/AM)

  6. Teacher Authenticity: A Theoretical and Empirical Investigation

    ERIC Educational Resources Information Center

    Akoury, Paul N.

    2013-01-01

    This study builds on a small, under-acknowledged body of educational works that speak to the problem of an overly technical focus on teaching, which negates a more authentic consideration of what it means to teach, including an exploration of the spiritual and moral dimensions. A need for educational change and the teacher's authentic way of…

  7. "vocd": A Theoretical and Empirical Evaluation

    ERIC Educational Resources Information Center

    McCarthy, Philip M.; Jarvis, Scott

    2007-01-01

    A reliable index of lexical diversity (LD) has remained stubbornly elusive for over 60 years. Meanwhile, researchers in fields as varied as "stylistics," "neuropathology," "language acquisition," and even "forensics" continue to use flawed LD indices--often ignorant that their results are questionable and in some cases potentially dangerous.…

  8. A new high frequency Earth rotation model based on an empirical ocean tide model from satellite altimetry

    NASA Astrophysics Data System (ADS)

    Madzak, Matthias; Böhm, Sigrid; Böhm, Johannes; Bosch, Wolfgang; Hagedoorn, Jan; Schuh, Harald

    2014-05-01

    A new model for Earth rotation variations based on ocean tide models is highly desirable in order to close the gap between geophysical Earth rotation models and geodetic observations. The current high-frequency Earth rotation model given in the IERS Conventions 2010, and thus used by most analysis institutions, was developed in 1994. Since then, several satellite missions have collected large amounts of altimetry data, which have been used to derive new ocean tide models. Owing to the increased accuracy and resolution of these models, we are developing an improved Earth rotation model for (sub-)daily periods. To reduce (hydrodynamic) modeling effects, we use the empirical ocean tide model EOT11a, provided by DGFI, Munich. Global oceanic currents, which are required for ocean tidal angular momentum but not included in empirical models, are obtained using a linearized and simplified Navier-Stokes equation (Ray, 2001). We compare the new model with the model from the IERS Conventions 2010 as well as with an empirical Earth rotation model (Artz et al., 2011) and show the expected differences in the analysis of VLBI observations. For this purpose we use the Vienna VLBI Software (VieVS).

  9. [DGRW-update: neurology--from empirical strategies towards evidence based interventions].

    PubMed

    Schupp, W

    2011-12-01

    Stroke, multiple sclerosis (MS), traumatic brain injury (TBI) and neuropathies are the most important diseases in neurological rehabilitation financed by the German Pension Insurance. The primary goal is vocational (re)integration. Driven by multiple findings of neuroscience research, the traditional holistic approach with mainly empirically derived strategies has been developed further and improved by new evidence-based interventions. This process had been, and continues to be, necessary to meet the health-economic pressures for ever shorter and more efficient rehab measures. Evidence-based interventions refer to symptom-oriented measures, team-management concepts, and education and psychosocial interventions. Drug therapy and/or neurophysiological measures can be added to increase neuroregeneration and neuroplasticity. Evidence-based aftercare concepts support the sustainability and steadiness of rehab results. Mirror therapy, robot-assisted training, mental training, task-specific training, and above all constraint-induced movement therapy (CIMT) can restore motor arm and hand functions. Treadmill training and robot-assisted training improve stance and gait. Botulinum toxin injections in combination with physical and redressing methods are superior in managing spasticity. Guideline-oriented management of associated pain syndromes (myofascial, neuropathic, complex-regional=dystrophic) improves primary outcome and quality of life. Drug therapy with so-called co-analgesics and physical therapy play an important role in pain management. Swallowing disorders lead to higher mortality and morbidity in the acute phase; stepwise diagnostics (screening, endoscopy, radiology) and specific swallowing therapy can reduce these risks and frequently restore normal eating and drinking. In our modern industrial societies, communicative and cognitive disturbances are more impairing than the above-mentioned disorders. Speech and language therapy (SLT) is dominant in

  11. An Empirical Framework for Implementing Lifelong Learning Systems.

    ERIC Educational Resources Information Center

    Law, Song Seng; Low, Sock Hwee

    Based on a literature review of factors that affect the provision of learning opportunities for adults and the experiences of Singapore's Institute of Technical Education (ITE), this paper proposes an empirical framework for developing and implementing lifelong learning systems. Following an introduction, the theoretical foundation for the…

  12. DNA bases assembled on the Au(110)/electrolyte interface: a combined experimental and theoretical study.

    PubMed

    Salvatore, Princia; Nazmutdinov, Renat R; Ulstrup, Jens; Zhang, Jingdong

    2015-02-19

    Among the low-index single-crystal gold surfaces, the Au(110) surface is the most active toward molecular adsorption and the one with the fewest electrochemical adsorption data reported. Cyclic voltammetry (CV), electrochemically controlled scanning tunneling microscopy (EC-STM), and density functional theory (DFT) calculations have been employed in the present study to address the adsorption of the four nucleobases adenine (A), cytosine (C), guanine (G), and thymine (T) on the Au(110)-electrode surface. Au(110) undergoes reconstruction to the (1 × 3) surface in an electrochemical environment, accompanied by a pair of strong voltammetry peaks in the double-layer region in acid solutions. Adsorption of the DNA bases gives featureless voltammograms with lower double-layer capacitance, suggesting that all the bases are chemisorbed on the Au(110) surface. Further investigation of the surface structures of the adlayers of the four DNA bases by EC-STM disclosed lifting of the Au(110) reconstruction, specific molecular packing in dense monolayers, and pH dependence of the A and G adsorption. DFT computations based on a cluster model of the Au(110) surface were performed to investigate the adsorption energies and geometries of the DNA bases in different adsorbate orientations. The optimized geometries were further used to compute model STM images, which are compared with the recorded STM images, providing insight into the physical nature of the adsorption. The specific orientations of A, C, G, and T on Au(110) and the nature of the physical adsorbate/surface interaction are proposed based on the combination of the experimental and theoretical studies, and differences from nucleobase adsorption on Au(111)- and Au(100)-electrode surfaces are discussed.

  14. Empirical formula for rates of hot pixel defects based on pixel size, sensor area, and ISO

    NASA Astrophysics Data System (ADS)

    Chapman, Glenn H.; Thomas, Rohit; Koren, Zahava; Koren, Israel

    2013-02-01

    Experimental measurements of image sensors show continuous development of in-field permanent hot pixel defects, increasing in number over time. In our tests we accumulated data on defects in cameras ranging from large-area (<300 sq mm) DSLRs, through medium-sized (~40 sq mm) point-and-shoot cameras, to small (20 sq mm) cell phone cameras. The results show that the defect rate depends on the technology (APS or CCD) and on design parameters such as imager area, pixel size (from 1.5 to 7 um), and gain (from ISO 100 to 1600). Comparing sensors of different sizes but similar pixel sizes shows that defect counts scale linearly with sensor area, suggesting the metric of defects/year/sq mm, which we call the defect density. We sought to model this defect density as a function of the two parameters pixel size and ISO. The best empirical fit was obtained with a power-law curve. For CCD imagers, the defect density is proportional to the pixel size to the power of -2.25 times the ISO to the power of 0.69. For APS (CMOS) sensors, the defect density is proportional to the pixel size to the power of -3.07 times the ISO to the power of 0.5. Extending our empirical formula to include ISO allows us to predict the expected defect development rate for a wide range of sensor parameters.
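
The power-law fit quoted in this abstract can be turned into a small calculator. The exponents come directly from the text above; the prefactor is a hypothetical calibration constant, since the paper's fitted constants are not reproduced here.

```python
def defect_density(pixel_size_um, iso, tech="CCD", prefactor=1.0):
    """Empirical hot-pixel defect density (defects/year/sq mm).

    Exponents are the fitted values quoted in the abstract; the
    prefactor is a hypothetical calibration constant.
    """
    if tech == "CCD":
        return prefactor * pixel_size_um ** -2.25 * iso ** 0.69
    if tech == "APS":
        return prefactor * pixel_size_um ** -3.07 * iso ** 0.5
    raise ValueError("tech must be 'CCD' or 'APS'")

# Smaller pixels and higher ISO both raise the predicted defect rate:
assert defect_density(2.0, 800) > defect_density(6.0, 800)
assert defect_density(2.0, 1600) > defect_density(2.0, 100)
```

Under this fit, halving the pixel size at fixed ISO raises the predicted CCD defect density by a factor of 2^2.25 ≈ 4.8.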

  15. The future scalability of pH-based genome sequencers: A theoretical perspective

    NASA Astrophysics Data System (ADS)

    Go, Jonghyun; Alam, Muhammad A.

    2013-10-01

    Sequencing of the human genome is an essential prerequisite for personalized medicine and early prognosis of various genetic diseases. The state-of-the-art, high-throughput genome sequencing technologies provide improved sequencing; however, their reliance on relatively expensive optical detection schemes has prevented widespread adoption of the technology in routine care. In contrast, the recently announced pH-based electronic genome sequencers achieve fast sequencing at low cost because of their compatibility with current microelectronics technology. While progress in technology development has been rapid, the physics of the sequencing chips and the potential for future scaling (and therefore cost reduction) remain unexplored. In this article, we develop a theoretical framework and a scaling theory to explain the principle of operation of pH-based sequencing chips, and use the framework to explore various perceived scaling limits of the technology related to signal-to-noise ratio, well-to-well crosstalk, and sequencing accuracy. We also address several limitations inherent to the key steps of pH-based genome sequencing, which are shared by many other sequencing platforms on the market but have so far not been properly explained.
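
As a point of reference for the signal side of such scaling arguments (a textbook relation, not a result from this abstract): an ISFET-style pH sensor's surface-potential response is bounded by the Nernst limit of roughly 59 mV per pH unit at room temperature, which caps the raw signal available per nucleotide-incorporation event.

```python
K_BOLTZMANN = 1.380649e-23    # Boltzmann constant, J/K
Q_ELEMENTARY = 1.602176634e-19  # elementary charge, C

def nernst_sensitivity_mV_per_pH(temperature_K=298.15):
    """Upper bound on pH-to-voltage conversion: 2.303*k*T/q per pH unit."""
    return 2.303 * K_BOLTZMANN * temperature_K / Q_ELEMENTARY * 1e3

print(round(nernst_sensitivity_mV_per_pH(), 1))  # → 59.2 mV/pH at 25 °C
```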

  16. Theoretical investigation of acoustic wave devices based on different piezoelectric films deposited on silicon carbide

    NASA Astrophysics Data System (ADS)

    Fan, Li; Zhang, Shu-yi; Ge, Huan; Zhang, Hui

    2013-07-01

    The performance of acoustic wave (AW) devices based on silicon carbide (SiC) substrates is theoretically studied, in which two types of piezoelectric films, ZnO and AlN, deposited on 4H-SiC and 3C-SiC substrates are adopted. The phase velocities (PV), electromechanical coupling coefficients (ECC), and temperature coefficients of frequency (TCF) for three AW modes often used in AW devices (the Rayleigh wave and the A0 and S0 modes of the Lamb wave) are calculated for four configurations of interdigital transducers (IDTs). It is found that the ZnO piezoelectric film is suitable for AW devices operating in the low-frequency range, because a high ECC can be realized using a thin ZnO film. The AlN piezoelectric film is suitable for devices operating in the high-frequency range by virtue of the high PV of AlN, which allows a larger finger width of the IDT. Generally, in low-frequency Lamb wave devices using ZnO piezoelectric films with small normalized film thicknesses hf/λ (film thickness to wavelength), thin SiC substrates increase the ECC but simultaneously induce high TCFs. In high-frequency devices with a large hf/λ, the S0 Lamb wave mode based on an AlN piezoelectric film deposited on a thick SiC substrate exhibits high performance when the PV, ECC, and TCF are considered together.
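
The remark that a high phase velocity allows a larger IDT finger width follows from f = v/λ together with the common single-electrode IDT layout in which the wavelength spans four finger widths (λ = 4w). Both that layout rule and the phase-velocity values below are illustrative assumptions, not parameters taken from the paper:

```python
def idt_finger_width_um(phase_velocity_m_per_s, frequency_hz):
    """Finger width of a standard single-electrode IDT, assuming the
    acoustic wavelength spans four finger widths (lambda = 4w)."""
    wavelength_m = phase_velocity_m_per_s / frequency_hz
    return wavelength_m / 4.0 * 1e6

# Illustrative phase velocities: a faster AlN/SiC stack permits wider,
# easier-to-pattern fingers at the same 2 GHz operating frequency.
w_slow = idt_finger_width_um(2700.0, 2e9)
w_fast = idt_finger_width_um(5600.0, 2e9)
assert w_fast > w_slow
```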

  17. Theoretical study of carbon-based tips for scanning tunnelling microscopy

    NASA Astrophysics Data System (ADS)

    González, C.; Abad, E.; Dappe, Y. J.; Cuevas, J. C.

    2016-03-01

    Motivated by recent experiments, we present here a detailed theoretical analysis of the use of carbon-based conductive tips in scanning tunnelling microscopy. In particular, we employ ab initio methods based on density functional theory to explore a graphitic, an amorphous carbon and two diamond-like tips for imaging with a scanning tunnelling microscope (STM), and we compare them with standard metallic tips made of gold and tungsten. We investigate the performance of these tips in terms of the corrugation of the STM images acquired when scanning a single graphene sheet. Moreover, we analyse the impact of the tip-sample distance and show that it plays a fundamental role in the resolution and symmetry of the STM images. We also explore in depth how the adsorption of single atoms and molecules in the tip apexes modifies the STM images and demonstrate that, in general, it leads to an improved image resolution. The ensemble of our results provides strong evidence that carbon-based tips can significantly improve the resolution of STM images, as compared to more standard metallic tips, which may open a new line of research in scanning tunnelling microscopy.
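
The strong sensitivity to tip-sample distance noted here reflects the textbook exponential decay of the vacuum tunnelling current, I ≈ I0·exp(-2κd) with κ ≈ 0.51·√φ[eV] per ångström. A sketch with an assumed work function (the φ = 5 eV default is illustrative, not a value from the paper):

```python
import math

def tunnel_current(gap_angstrom, work_function_eV=5.0, i0=1.0):
    """STM vacuum tunnelling: I = I0 * exp(-2*kappa*d), with
    kappa ~ 0.513*sqrt(phi[eV]) in 1/Angstrom (assumed phi = 5 eV)."""
    kappa = 0.513 * math.sqrt(work_function_eV)
    return i0 * math.exp(-2.0 * kappa * gap_angstrom)

# Increasing the gap by 1 Angstrom cuts the current roughly tenfold,
# which is why small distance changes dominate image corrugation:
ratio = tunnel_current(5.0) / tunnel_current(6.0)
assert 5.0 < ratio < 20.0
```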

  18. Theoretical investigation of all-metal-based mushroom plasmonic metamaterial absorbers at infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Ogawa, Shinpei; Fujisawa, Daisuke; Kimata, Masafumi

    2015-12-01

    High-performance wavelength-selective infrared (IR) sensors require small pixel structures, low thermal mass, and operation in the middle-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) regions for multicolor IR imaging. All-metal-based mushroom plasmonic metamaterial absorbers (MPMAs) were investigated theoretically and designed to enhance the performance of wavelength-selective uncooled IR sensors. All components of the MPMAs are based on thin layers of metals such as Au, without oxide insulators, for increased absorption. The absorption properties of the MPMAs were investigated by rigorous coupled-wave analysis. Strong wavelength-selective absorption is realized over a wide range of MWIR and LWIR wavelengths by the plasmonic resonance of the micropatch and the narrow-gap resonance, without disturbance from the intrinsic absorption of oxide insulators. The absorption wavelength is determined mainly by the micropatch size and is longer than its period. The metal post width has less impact on the absorption properties and can maintain single-mode operation. Through-holes can be formed in the plate area to reduce the thermal mass. A small pixel size with reduced thermal mass and wideband single-mode operation can be realized using all-metal-based MPMAs.

  19. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries.

    PubMed

    Dash, Ranjan; Pannala, Sreekanth

    2016-01-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si anode based LIBs (Si-LIBs) have been reported in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds on Si composition in a Si-carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB whose external anode dimensions are constrained during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The improvement in volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs. PMID:27311811
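
The abstract's central constraint (electrode porosity must absorb the lithiation swelling so the anode's outer dimensions stay fixed) can be sketched with a simple volume balance. The ~3.8x expansion factor for fully lithiated Si and the assumption of a non-expanding carbon matrix are illustrative literature values, not the paper's exact parameters:

```python
def min_porosity(si_vol_frac, expansion=3.8):
    """Minimum as-built porosity for a volume-constrained SCC anode.

    Volume balance: solids of unit volume grow to
    f*expansion + (1 - f) on lithiation (carbon treated as rigid),
    and that growth must fit within the original pore space.
    The 3.8x factor (~280% swelling of fully lithiated Si) is an
    illustrative assumption, not the paper's fitted parameter.
    """
    grown = si_vol_frac * expansion + (1.0 - si_vol_frac)
    return 1.0 - 1.0 / grown

# More Si demands more pore volume, eroding the density gain:
assert min_porosity(0.1) < min_porosity(0.3)
```

Under these assumptions, a 20% Si volume fraction already requires roughly 36% porosity, which illustrates why the net energy-density gain of a volume-constrained Si-carbon anode is modest.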

  20. Theoretical study of carbon-based tips for scanning tunnelling microscopy.

    PubMed

    González, C; Abad, E; Dappe, Y J; Cuevas, J C

    2016-03-11

    Motivated by recent experiments, we present here a detailed theoretical analysis of the use of carbon-based conductive tips in scanning tunnelling microscopy. In particular, we employ ab initio methods based on density functional theory to explore a graphitic, an amorphous carbon and two diamond-like tips for imaging with a scanning tunnelling microscope (STM), and we compare them with standard metallic tips made of gold and tungsten. We investigate the performance of these tips in terms of the corrugation of the STM images acquired when scanning a single graphene sheet. Moreover, we analyse the impact of the tip-sample distance and show that it plays a fundamental role in the resolution and symmetry of the STM images. We also explore in depth how the adsorption of single atoms and molecules in the tip apexes modifies the STM images and demonstrate that, in general, it leads to an improved image resolution. The ensemble of our results provides strong evidence that carbon-based tips can significantly improve the resolution of STM images, as compared to more standard metallic tips, which may open a new line of research in scanning tunnelling microscopy. PMID:26861537

  1. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries

    PubMed Central

    Dash, Ranjan; Pannala, Sreekanth

    2016-01-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si anode based LIBs (Si-LIBs) have been reported in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds on Si composition in a Si–carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB whose external anode dimensions are constrained during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The improvement in volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs. PMID:27311811

  2. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    SciTech Connect

    Joshi, Chan; Mori, W.

    2013-10-21

    This is the final report on DOE grant number DE-FG02-92ER40727, titled “Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators.” During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under the grant: plasma wakefield accelerator research at FACET, SLAC National Accelerator Laboratory; in-house research at UCLA’s Neptune and 20 TW laser laboratories; laser-wakefield acceleration (LWFA) in the self-guided regime, with experiments at the Callisto laser at LLNL; and theory and simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals and to the graduation and continued training of high-quality Ph.D. students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  4. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries

    NASA Astrophysics Data System (ADS)

    Dash, Ranjan; Pannala, Sreekanth

    2016-06-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si anode based LIBs (Si-LIBs) have been reported in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds on Si composition in a Si-carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB whose external anode dimensions are constrained during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The improvement in volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs.

  6. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, in which the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies of DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system on a real-world test bed, which consists of an Apache HTTP Server with GPAC, an MP4Client (GPAC) with an open HEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.

  7. Comparisons of ground motions from the 1999 Chi-Chi, earthquake with empirical predictions largely based on data from California

    USGS Publications Warehouse

    Boore, D.M.

    2001-01-01

    This article has the modest goal of comparing the ground motions recorded during the 1999 Chi-Chi, Taiwan, mainshock with predictions from four empirically based equations commonly used for western North America; these empirical predictions are largely based on data from California. Comparisons are made for peak acceleration and 5%-damped response spectra at periods between 0.1 and 4 sec. The general finding is that the Chi-Chi ground motions are smaller than those predicted from the empirically based equations for periods less than about 1 sec by factors averaging about 0.4 but as small as 0.26 (depending on period, on which equation is used, and on whether the sites are assumed to be rock or soil). There is a trend for the observed motions to approach or even exceed the predicted motions for longer periods. Motions at similar distances (30-60 km) to the east and to the west of the fault differ dramatically at periods between about 2 and 20 sec: Long-duration wave trains are present on the motions to the west, and when normalized to similar amplitudes at short periods, the response spectra of the motions at the western stations are as much as five times larger than those of motions from eastern stations. The explanation for the difference is probably related to site and propagation effects; the western stations are on the Coastal Plain, whereas the eastern stations are at the foot of young and steep mountains, either in the relatively narrow Longitudinal Valley or along the eastern coast; the sediments underlying the eastern stations are probably shallower and have higher velocity than those under the western stations.

  8. Experimentation and theoretic calculation of a BODIPY sensor based on photoinduced electron transfer for ions detection.

    PubMed

    Lu, Hua; Zhang, ShuShu; Liu, HanZhuang; Wang, YanWei; Shen, Zhen; Liu, ChunGen; You, XiaoZeng

    2009-12-24

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH(4). The sensing properties of compound 1 toward various metal ions are investigated via fluorometric titration in methanol, which shows a highly selective fluorescent turn-on response in the presence of Hg(2+) over other metal ions, such as Li(+), Na(+), K(+), Ca(2+), Mg(2+), Pb(2+), Fe(2+), Co(2+), Ni(2+), Cu(2+), Zn(2+), Cd(2+), Ag(+), and Mn(2+). A computational approach has been carried out to investigate why compound 1 provides different fluorescent signals for Hg(2+) and the other ions. Theoretic calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn(2+) or Cd(2+) ion leads to a significant decrease of both the HOMO and LUMO energy levels of the receptor, thus inhibiting the reductive PET process; instead, an oxidative PET from the excited-state fluorophore to the receptor occurs, which also quenches the fluorescence. However, for the 1-Hg(2+) complex, both the reductive and oxidative PETs are prohibited; therefore, strong fluorescence emission from the fluorophore can be observed experimentally. The agreement between the experimental results and theoretic calculations suggests that our calculation method can be applicable as guidance for the design of new chemosensors for other metal ions. PMID:19950967

  9. Experimentation and Theoretic Calculation of a BODIPY Sensor Based on Photoinduced Electron Transfer for Ions Detection

    NASA Astrophysics Data System (ADS)

    Lu, Hua; Zhang, Shushu; Liu, Hanzhuang; Wang, Yanwei; Shen, Zhen; Liu, Chungen; You, Xiaozeng

    2009-12-01

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH4. The sensing properties of compound 1 toward various metal ions are investigated via fluorometric titration in methanol, which shows a highly selective fluorescent turn-on response in the presence of Hg2+ over other metal ions, such as Li+, Na+, K+, Ca2+, Mg2+, Pb2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+, Cd2+, Ag+, and Mn2+. A computational approach has been carried out to investigate why compound 1 provides different fluorescent signals for Hg2+ and the other ions. Theoretic calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn2+ or Cd2+ ion leads to a significant decrease of both the HOMO and LUMO energy levels of the receptor, thus inhibiting the reductive PET process; instead, an oxidative PET from the excited-state fluorophore to the receptor occurs, which also quenches the fluorescence. However, for the 1-Hg2+ complex, both the reductive and oxidative PETs are prohibited; therefore, strong fluorescence emission from the fluorophore can be observed experimentally. The agreement between the experimental results and theoretic calculations suggests that our calculation method can be applicable as guidance for the design of new chemosensors for other metal ions.

  10. Providing a contextual base and a theoretical structure to guide the teaching of science from early years to senior years

    NASA Astrophysics Data System (ADS)

    Stinner, Arthur

    1996-07-01

    This paper addresses the need for and the problem of organizing a science curriculum around contextual settings and science stories that serve to involve and motivate students to develop a scientific understanding of the world (with emphasis on physical science). A program of activities placed around contextual settings, science stories and contemporary issues of interest is recommended in an attempt to move toward a slow and secure abolition of the gulf between scientific knowledge and ‘common sense’ beliefs. A conceptual development is described to guide the connection between theory and evidence on a level appropriate for children, from early years to senior years. For senior years it is also important to connect the activity of teaching to a sound theoretical structure. The theoretical structure must illuminate the status of theory in science, establish what counts as evidence, clarify the relationship between experiment and explanation, and make connections to the history of science. This paper concludes with a proposed program of activities in terms of a sequence of theoretical and empirical activities that involve contextual settings, science stories, large context problems, thematic teaching, and popular science literature teaching.

  11. What 'empirical turn in bioethics'?

    PubMed

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  12. A method for extracting human gait series from accelerometer signals based on the ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Fu, Mao-Jing; Zhuang, Jian-Jun; Hou, Feng-Zhen; Zhan, Qing-Bo; Shao, Yi; Ning, Xin-Bao

    2010-05-01

    In this paper, the ensemble empirical mode decomposition (EEMD) is applied to analyse accelerometer signals collected during normal human walking. First, the self-adaptive feature of EEMD is utilised to decompose the accelerometer signals, thus sifting out several intrinsic mode functions (IMFs) at disparate scales. Then, gait series can be extracted through peak detection from the eigen IMF that best represents gait rhythmicity. Compared with the method based on the empirical mode decomposition (EMD), the EEMD-based method has the following advantages: it remarkably improves the detection rate of peak values hidden in the original accelerometer signal, even when the signal is severely contaminated by intermittent noise, and it effectively prevents the mode mixing phenomenon found in the process of EMD. A reasonable selection of parameters for the stop-filtering criteria can also improve the calculation speed of the EEMD-based method. Meanwhile, the endpoint effect can be suppressed by using the auto-regressive and moving average model to extend a short time series in both directions. The results suggest that EEMD is a powerful tool for extraction of gait rhythmicity and it also provides valuable clues for extracting the eigen rhythm of other physiological signals.
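
The peak-detection step applied to the selected IMF can be sketched as follows (a simplified stand-in: the paper's exact detector and parameter choices are not specified in the abstract, and the sampling rate below is hypothetical):

```python
import math

def local_peaks(x, min_height=0.0, min_gap=1):
    """Indices of local maxima in an IMF-like series.

    A point counts as a peak if it exceeds its left neighbour, is at
    least as large as its right neighbour, clears min_height, and is
    at least min_gap samples after the previous accepted peak."""
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1] and x[i] >= min_height:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def stride_intervals(peaks, fs_hz):
    """Gait intervals in seconds from successive peak indices."""
    return [(b - a) / fs_hz for a, b in zip(peaks, peaks[1:])]
```

On a clean rhythmic component the detector recovers one peak per cycle, and the differences between peak times form the gait series analysed downstream.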

  13. Inferring causal molecular networks: empirical assessment through a community-based effort.

    PubMed

    Hill, Steven M; Heiser, Laura M; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K; Carlin, Daniel E; Zhang, Yang; Sokolov, Artem; Paull, Evan O; Wong, Chris K; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V; Favorov, Alexander V; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W; Long, Byron L; Noren, David P; Bisberg, Alexander J; Mills, Gordon B; Gray, Joe W; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A; Fertig, Elana J; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M; Spellman, Paul T; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-04-01

    It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense.

  14. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
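
A histogram-based estimate of the mutual information used as the similarity measure might look like this (an illustrative sketch; the paper's binning scheme and pose search are not described in the abstract):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Histogram estimate of mutual information (in nats) between two
    equally sized images flattened to lists of grayscale values.

    Both lists are quantized onto a shared set of equal-width bins,
    then MI = sum p(i,j) * log(p(i,j) / (p(i) * p(j)))."""
    lo = min(min(a), min(b))
    hi = max(max(a), max(b))
    width = (hi - lo) / bins or 1.0          # guard against a flat image
    qa = [min(int((v - lo) / width), bins - 1) for v in a]
    qb = [min(int((v - lo) / width), bins - 1) for v in b]
    n = len(a)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    return sum(c / n * math.log((c / n) / (pa[i] / n * pb[j] / n))
               for (i, j), c in pab.items())
```

Identical images maximize the score (it reduces to the histogram entropy), while an uninformative image scores near zero, which is the property the pose search exploits.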

  16. Theoretical analysis of transcranial Hall-effect stimulation based on passive cable model

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Li, Xiao-Li

    2015-12-01

    Transcranial Hall-effect stimulation (THS) is a new stimulation method in which an ultrasonic wave in a static magnetic field generates an electric field in an area of interest such as in the brain to modulate neuronal activities. However, the biophysical basis of stimulating the neurons remains unknown. To address this problem, we perform a theoretical analysis based on a passive cable model to investigate the THS mechanism of neurons. Nerve tissues are conductive; an ultrasonic wave can move ions embedded in the tissue in a static magnetic field to generate an electric field (due to the Lorentz force). In this study, a simulation model for an ultrasonically induced electric field in a static magnetic field is derived. Then, based on the passive cable model, the analytical solution for the voltage distribution in a nerve tissue is determined. The simulation results show that THS can generate a voltage to stimulate neurons. Because the THS method possesses a higher spatial resolution and a deeper penetration depth, it shows promise as a tool for treating or rehabilitating neuropsychiatric disorders. Project supported by the National Natural Science Foundation of China (Grant Nos. 61273063 and 61503321), the China Postdoctoral Science Foundation (Grant No. 2013M540215), the Natural Science Foundation of Hebei Province, China (Grant No. F2014203161), and the Youth Research Program of Yanshan University, China (Grant No. 02000134).
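
The passive cable model underlying the analysis predicts exponential attenuation of a steady injected voltage with distance along the fibre; a minimal sketch (the length-constant value is illustrative, not the paper's):

```python
import math

def steady_state_voltage(x_mm, v0_mv=1.0, lambda_mm=1.0):
    """Steady-state membrane voltage at distance x from the input site
    on an infinite passive cable: V(x) = V0 * exp(-|x| / lambda),
    where lambda = sqrt(r_m / r_i) is the length constant.

    v0_mv and lambda_mm are placeholder values for illustration."""
    return v0_mv * math.exp(-abs(x_mm) / lambda_mm)
```

At one length constant from the source the voltage has already fallen to about 37% of its peak, which is why the spatial profile of the induced field matters for whether THS reaches threshold.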

  17. Extended charge accumulation in ruthenium-4H-imidazole-based black absorbers: a theoretical design concept.

    PubMed

    Kupfer, Stephan

    2016-05-11

    A theoretically guided design concept aiming to achieve highly efficient unidirectional charge transfer and multi-charge separation upon successive photoexcitation for light-harvesting dyes in the scope of supramolecular photocatalysts is presented. Four 4H-imidazole-ruthenium(ii) complexes incorporating a biimidazole-based electron-donating ligand sphere have been designed based on the well-known 4H-imidazole-ruthenium(ii) polypyridyl dyes. The quantum chemical evaluation, performed at the density functional and time-dependent density functional level of theory, revealed extraordinary unidirectional charge transfer bands from the near-infrared to the ultraviolet region of the absorption spectrum upon multi-photoexcitation. Spectro-electrochemical simulations modeling photoexcited intermediates determined the outstanding multi-electron storage capacity of this novel class of black dyes. These remarkable photochemical and photophysical properties are found to be preserved upon site-specific protonation, rendering 4H-imidazole-ruthenium(ii) biimidazole dyes ideal for light-harvesting applications in the field of solar energy conversion. PMID:27121270

  18. Gold price analysis based on ensemble empirical model decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received growing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis. This involves three steps: firstly, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Secondly, the IMFs are further processed, with unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their change trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. At the same time, regression analysis has been conducted to verify our analysis. Results from the empirical studies of the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.

  19. Interactions between Antibiotics and Graphene-Based Materials in Water: A Comparative Experimental and Theoretical Investigation.

    PubMed

    Zhang, Xuntong; Shen, Jiachun; Zhuo, Ning; Tian, Ziqi; Xu, Peiran; Yang, Zhen; Yang, Weiben

    2016-09-14

    Complex interactions between antibiotics and graphene-based materials determine both the adsorption performance of graphene-based materials and the transport behaviors of antibiotics in water. In this work, such interactions were investigated through adsorption experiments, instrumental analyses and theoretical DFT calculations. Three typical antibiotics (norfloxacin (NOR), sulfadiazine (SDZ) and tetracycline (TC)) and different graphene-based materials (divided into two groups: graphene oxide-based ones (GOs) and reduced GOs (RGOs)) were employed. Optimal adsorption pHs for NOR, SDZ, and TC are 6.2, 4.0, and 4.0, respectively. At the corresponding optimal pHs, NOR favored RGOs (adsorption capability: ∼50 mg/g) while SDZ preferred GOs (∼17 mg/g); all adsorbents exhibited similar uptake of TC (∼70 mg/g). Similar amounts of edge carboxyls on both GOs and RGOs exerted electrostatic attraction on NOR and TC, but not on SDZ. According to the DFT-calculated most stable conformations of the antibiotic-adsorbent complexes, the intrinsic distinction between GOs and RGOs was the different amounts of sp(2) and sp(3) hybridization regions: the π-π electron donor-acceptor effect of antibiotic-sp(2)/sp(3) and H-bonds of antibiotic-sp(3) coexisted. The binding energy (BE) of the former was larger for NOR; the latter interaction was stronger for SDZ; the two species of TC at the optimal pH, i.e., TC(+) and TC(0), possessed larger BE with the sp(3) and sp(2) regions, respectively. PMID:27548426

  20. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    SciTech Connect

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-09-15

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction.Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms.Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  1. Adolescent preventive health and team-games-tournaments: five decades of evidence for an empirically based paradigm.

    PubMed

    Wodarski, John S; Feit, Marvin D

    2011-01-01

    The problematic behaviors of teenagers and the subsequent negative consequences are extensive and well documented: unwanted pregnancy, substance abuse, violent behavior, depression, and social and psychological consequences of unemployment. In this article, the authors review an approach that uses a cooperative learning, empirically based intervention that employs peers as teachers. This intervention of choice is Teams-Games-Tournaments (TGT), a paradigm backed by five decades of empirical support. The application of TGT in preventive health programs incorporates elements in common with other prevention programs that are based on a public health orientation and constitute the essential components of health education, that is, skills training and practice in applying skills. The TGT intervention supports the idea that children and adolescents from various socioeconomic classes, between the ages of 8 and 18 and in classrooms or groups ranging in size from 4 to 17 members, can work together for one another. TGT has been applied successfully in such diverse areas as adolescent development, sexuality education, psychoactive substance abuse education, anger control, coping with depression and suicide, nutrition, comprehensive employment preparation, and family intervention. This article reviews the extensive research on TGT using examples of successful projects in substance abuse, violence, and nutrition. Issues are raised that relate to the implementation of preventive health strategies for adolescents, including cognitive aspects, social and family networks, and intervention components.

  2. Development of the Knowledge-based & Empirical Combined Scoring Algorithm (KECSA) to Score Protein-Ligand Interactions

    PubMed Central

    Zheng, Zheng

    2013-01-01

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the PDB. Forty-nine types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training-set and enthalpy/entropy aspects of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets to compare the reproducibility of KECSA with respect to two empirical scoring functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the RMSE. LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed which would make KECSA more sensitive to subtle changes in ligand structure. PMID:23560465
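
The Lennard-Jones form that KECSA maps its statistical potential onto can be evaluated as below (a generic 12-6 potential with placeholder parameters, not KECSA's fitted pairwise values):

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential:
    V(r) = 4 * epsilon * ((sigma / r)**12 - (sigma / r)**6).

    epsilon is the well depth and sigma the zero-crossing distance;
    the defaults are reduced units for illustration only."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)
```

The potential crosses zero at r = sigma and reaches its minimum of -epsilon at r = 2**(1/6) * sigma; fitting one (epsilon, sigma) pair per atomic pairwise interaction type is what converts the statistical potential into a physically interpretable form.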

  3. A Compound fault diagnosis for rolling bearings method based on blind source separation and ensemble empirical mode decomposition.

    PubMed

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confusion noise, which makes it difficult to separate weak fault signals from them through conventional means, such as FFT-based envelope detection, wavelet transform or empirical mode decomposition individually. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
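
The cross correlation criterion for choosing which IMFs feed ICA could plausibly be realized as follows (a sketch only; the paper's exact rule and threshold are not given in the abstract):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def select_imfs(imfs, signal, threshold=0.3):
    """Indices of IMFs whose correlation with the measured signal
    exceeds a threshold -- one plausible form of the cross correlation
    criterion (threshold value is a hypothetical choice)."""
    return [i for i, imf in enumerate(imfs)
            if abs(pearson(imf, signal)) >= threshold]
```

IMFs that correlate weakly with the measured mixture are treated as noise or spurious modes and excluded from the ICA input matrix.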

  4. Quantifying Age-based Forest Carbon Dynamics to Estimate Effects of Wildfire on Carbon Balance: a Multi-scale Empirical Approach

    NASA Astrophysics Data System (ADS)

    Raymond, C. L.; McKenzie, D.

    2009-12-01

    Disturbances affect biomass accumulation and net primary productivity (NPP) across large landscapes by altering the age-class distribution of the landscape. For fire disturbances specifically, a theoretical age-class distribution can be analytically derived from the mean fire return interval using a negative exponential model. However, to determine the consequences of these ecosystem-specific fire return intervals for biomass accumulation and NPP, it is necessary to quantify age-based carbon dynamics at a similar scale. We used chronosequences of Forest Inventory and Analysis (FIA) data to fit empirical models of live biomass carbon accumulation and NPP as a function of stand age. Models were fit at both coarse (ecosections) and fine (potential vegetation types) scales for the forested region of Washington, USA. At the ecosection scale, the Western Cascades and the Coastal ecosection had the highest levels of live biomass C (26.8 and 22.0 kg C/m2, respectively). The fitted maximum live biomass C was lower in the Eastern Cascades (12.2 kg C/m2) and lowest in the Okanogan Highlands (7.56 kg C/m2). However, the order of the ecosections differed for the rate at which these maximums were reached: the Coastal and Okanogan Highlands ecosections reached maximum live biomass more rapidly than the Eastern and Western Cascades. For the fitted NPP models, maximum NPP was highest in the Coastal ecosection (0.699 kg C/m2/yr), lowest in the Eastern Cascades (0.196 kg C/m2/yr) and Okanogan Highlands (0.195 kg C/m2/yr), and intermediate in the Western Cascades (0.397 kg C/m2/yr). Surprisingly, all ecosections reached maximum NPP at a similar stand age (approximately 80 years). We then developed similar models at the scale of potential vegetation types within ecosections. These age-based patterns of carbon dynamics, in combination with landscape age-class distributions, provide an empirical approach for estimating the impact of wildfire on biomass accumulation and NPP at the ecosystem scale.
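
The negative exponential age-class distribution mentioned above follows directly from the mean fire return interval (MFRI); a small sketch of the implied landscape fractions (the MFRI value in the usage note is a hypothetical example, not one of the paper's results):

```python
import math

def age_class_fraction(a0, a1, mfri):
    """Fraction of a landscape in stand-age class [a0, a1) under the
    negative exponential model p(a) = (1/MFRI) * exp(-a/MFRI).

    Integrating p(a) from a0 to a1 gives
    exp(-a0/MFRI) - exp(-a1/MFRI)."""
    return math.exp(-a0 / mfri) - math.exp(-a1 / mfri)
```

For example, with a 100-year MFRI about 63% of the landscape would be younger than 100 years; combining such fractions with the fitted age-based biomass and NPP curves yields landscape-scale carbon estimates.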

  5. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Duan, Yabo; Song, Chengtian

    2016-10-01

    Empirical mode decomposition (EMD) is a recently proposed method for denoising nonlinear and nonstationary laser signals. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which are noise-free. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and a robust thresholding process based on the Spearman correlation coefficient is then used to select the relevant modes. The proposed method thus couples thresholding-based denoising with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
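    The relevant-modes selection step can be sketched as follows. Computing the IMFs themselves requires an EMD implementation (e.g. the PyEMD package), so this sketch illustrates only the Spearman-based selection rule, not the authors' full EMD-IT pipeline; the two stand-in arrays used as "IMFs" and the 0.5 threshold are assumptions:

```python
import numpy as np
from scipy.stats import spearmanr

def select_relevant_imfs(imfs, noisy_signal, threshold=0.5):
    # Keep IMFs whose Spearman correlation (in magnitude) with the noisy
    # signal exceeds `threshold`; partial reconstruction sums the survivors.
    kept = []
    for imf in imfs:
        rho, _ = spearmanr(imf, noisy_signal)
        if abs(rho) >= threshold:
            kept.append(imf)
    return np.sum(kept, axis=0), len(kept)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 5 * t)
noise = 0.1 * rng.standard_normal(t.size)
noisy = clean + noise
# Stand-in "IMFs"; in practice these come from an EMD routine.
imfs = [noise, clean]
denoised, n_kept = select_relevant_imfs(imfs, noisy)
```

    The noise-dominated mode correlates only weakly with the noisy signal and is discarded, while the signal-dominated mode is retained for reconstruction.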

  6. Theoretical Evaluation of Electroactive Polymer Based Micropump Diaphragm for Air Flow Control

    NASA Technical Reports Server (NTRS)

    Xu, Tian-Bing; Su, Ji; Zhang, Qiming

    2004-01-01

    An electroactive polymer (EAP), high energy electron irradiated poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] copolymer, based actuation micropump diaphragm (PAMPD) has been developed for air flow control. The displacement strokes and profiles as a function of the amplitude and frequency of the electric field have been characterized. The volume stroke rates (volume rate) as a function of electric field and driving frequency have also been theoretically evaluated. The PAMPD exhibits a high volume rate, which is easily tuned by varying either the amplitude or the frequency of the applied electric field. In addition, the performance of the diaphragms was modeled, and the agreement between the modeling results and experimental data confirms that the response of the diaphragms follows the design parameters. The results demonstrate that the diaphragm can fit some future aerospace applications to replace traditional complex mechanical systems, increase control capability and reduce the weight of future air dynamic control systems. Keywords: electroactive polymer (EAP), micropump, diaphragm, actuation, displacement, volume rate, pumping speed, clamping ratio.

  7. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether the change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, respiratory rate measurements from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, each with a different person seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test compared the performance of the proposed system with the gold standard under ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383
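    The rate-extraction step can be illustrated with a simple peak count on the chest-distance time series. This is a hedged sketch, not the paper's algorithm: the 30 fps frame rate, the 1 s minimum peak spacing, and the synthetic 0.25 Hz breathing trace are all assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate_bpm(distance, fs):
    # Count inhalation peaks in the chest-to-camera distance signal and
    # convert the count to breaths per minute.
    x = distance - np.mean(distance)
    peaks, _ = find_peaks(x, distance=fs)  # peaks at least 1 s apart
    duration_min = len(x) / fs / 60.0
    return len(peaks) / duration_min

fs = 30.0                                    # frames per second (typical RGB-D rate)
t = np.arange(0.0, 60.0, 1.0 / fs)           # one minute of data
chest = 0.01 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz breathing -> 15 breaths/min
rate = respiratory_rate_bpm(chest, fs)
```

    The minimum peak spacing rejects spurious extrema from small body movements, which is the same discrimination problem the paper's distance-comparison step addresses.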

  8. A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems.

    PubMed

    Merrick, Kathryn E; Shafi, Kamran

    2013-01-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797

  10. [Nursing practice based on theoretical models: a qualitative study of nurses' perception].

    PubMed

    Amaducci, Giovanna; Iemmi, Marina; Prandi, Marzia; Saffioti, Angelina; Carpanoni, Marika; Mecugni, Daniela

    2013-01-01

    Many faculty argue that theory and theorizing are closely related to clinical practice, that disciplinary knowledge grows most relevantly from the specific care context in which it takes place, and that knowledge does not proceed only by applying the general principles of grand theories to specific cases. Every nurse, in fact, has a mental model, of which he or she may or may not be aware, that motivates and substantiates every professional action and choice. The study describes what the nursing theoretical model is, along with the mental model and the tacit knowledge underlying it. It identifies the explicit theoretical model of the professional group represented by the participating nurses, and its aspects of continuity with the theoretical model proposed by the degree course in Nursing. Methods: four focus groups were conducted, attended by a total of 22 nurses representing almost every unit of the Reggio Emilia Hospital. We argue that the theoretical nursing model of each professional group is the result of tacit knowledge, which helps to define the personal mental model, and of the explicit theoretical model, whose underlying theoretical content is learned, applied consciously, and fed back to and from nursing practice. Reasoning on the use of theory in practice allowed us to give visibility to an explicit theoretical nursing model authentically oriented to the needs of the person, in all its complexity, in specific contexts.

  11. Adolescents with attention-deficit/hyperactivity disorder: an overview of empirically based treatments.

    PubMed

    Barkley, Russell A

    2004-01-01

    The author first presents an overview of attention-deficit/hyperactivity disorder (ADHD) as it presents in adolescents. He reviews what is known about the predominantly inattentive subtype in adolescents, the persistence of symptoms into this developmental phase, and comorbid disorders in adolescent patients with ADHD. The author then reviews treatments for adolescents with ADHD for which there is some empirical support in the scientific literature. He first discusses common assumptions concerning the treatment of ADHD and evidence for or against these assumptions. Information on therapies that have been shown to be ineffective or whose benefit is unproven is then described. These include cognitive-behavioral therapy and social skills training. The author then presents an overview of what is known about the medication treatment of ADHD and discusses how this information is applicable to adolescents with the disorder. Four main classes of drugs are discussed: stimulants, noradrenergic reuptake inhibitors, tricyclic antidepressants, and antihypertensive agents. The author then reviews the use of several psychosocial interventions, including contingency management strategies, parent training in behavior management methods, and teacher training in classroom management, and discusses how these strategies can best be used for adolescents with ADHD. The author then discusses the use of combined treatment with psychosocial interventions and medication. Finally, information on the use of physical exercise as therapy for adolescents with ADHD is discussed.

  12. Cardiopulmonary Resuscitation Pattern Evaluation Based on Ensemble Empirical Mode Decomposition Filter via Nonlinear Approaches

    PubMed Central

    Ma, Matthew Huei-Ming

    2016-01-01

    Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMFs) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and detrended fluctuation analysis (DFA) are collated and statistical analysis is performed using ANOVA. The primary outcome measure assessed is the patient survival rate after two hours. The CPR patterns of 951 asystole patients were analyzed for the quality of CPR delivered. There was no significant difference observed in the CPR-related IMF peak-to-peak interval analysis between patients younger and older than 60 years of age, and similarly for the amplitude-difference evaluation using SE and DFA. However, a difference was noted for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, together with higher complexity of the CPR-IMF amplitude differences. PMID:27529068
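    Sample entropy, one of the nonlinear measures used above, can be computed directly from its definition. This is a generic textbook-style sketch (m = 2 and r = 0.2 times the standard deviation are conventional defaults), not the exact code used in the study:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    # SampEn(m, r) = -ln(A / B): B counts pairs of length-m templates within
    # tolerance r (Chebyshev distance, self-matches excluded), and A counts
    # the same for templates of length m + 1.
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def count_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B = count_pairs(m)
    A = count_pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
irregular = rng.standard_normal(500)
```

    A regular oscillation yields a much lower SampEn than white noise, which is the property that lets such measures separate organized compression patterns from disorganized activity.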

  13. Development of Items for a Pedagogical Content Knowledge Test Based on Empirical Analysis of Pupils' Errors

    NASA Astrophysics Data System (ADS)

    Jüttner, Melanie; Neuhaus, Birgit J.

    2012-05-01

    In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and ways of dealing with them. The study investigated 9th and 10th grade German pupils' (n = 461) drawings in an achievement test about the knee-jerk reflex in biology, which were analysed using inductive qualitative content analysis. The empirical data were used for the development of the items in the PCK test. The validity of the items was assessed with think-aloud interviews of German secondary school teachers (n = 5). Once the items were finalized, reliability was tested using the results of German secondary school biology teachers (n = 65) who took the PCK test. The results indicated that these items are satisfactorily reliable (Cronbach's alpha values ranged from 0.60 to 0.65). We suggest that a larger sample and American biology teachers be included in further studies. The findings of this study about teachers' professional knowledge from the PCK test could provide new information about the influence of teachers' knowledge on their pupils' understanding of biology and their possible errors in learning biology.
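    The reported reliability statistic follows from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with hypothetical dichotomous item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents, n_items) score matrix.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 6 teachers x 4 dichotomous items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
])
alpha = cronbach_alpha(scores)
```

    Values around 0.6-0.65, as reported in the study, sit at the low end of what is usually considered acceptable for research instruments.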

  14. Spectral analysis of Hall-effect thruster plasma oscillations based on the empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Kurzyna, J.; Mazouffre, S.; Lazurenko, A.; Albarède, L.; Bonhomme, G.; Makowski, K.; Dudeck, M.; Peradzyński, Z.

    2005-12-01

    Hall-effect thruster plasma oscillations recorded by means of probes located at the channel exit are analyzed using the empirical mode decomposition (EMD) method. This self-adaptive technique makes it possible to decompose a nonstationary signal into a set of intrinsic modes, and acts as a very efficient filter that separates the contributions of different underlying physical mechanisms. Applying the Hilbert transform to the whole set of modes allows peculiar events to be identified and assigned a range of instantaneous frequency and power. In addition to 25 kHz breathing-type oscillations, which are unambiguously identified, the EMD approach confirms the existence of oscillations with instantaneous frequencies in the range of 100-500 kHz, typical for ion transit-time oscillations. Modeling of high-frequency modes (ν ≈ 10 MHz) resulting from EMD of the measured waveforms supports the idea that high-frequency plasma oscillations originate from electron-density perturbations propagating azimuthally with the electron drift velocity.
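    The Hilbert step can be sketched for a single mono-component mode: form the analytic signal, unwrap its phase, and differentiate to obtain instantaneous frequency. The 25 kHz test tone and the 1 MHz sampling rate below are illustrative values, not the experiment's acquisition settings:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    # Instantaneous frequency of a (mono-component) mode via the Hilbert
    # transform: f(t) = (1 / 2*pi) * d(phase)/dt of the analytic signal.
    analytic = hilbert(x)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1e6                               # 1 MHz sampling (assumed)
t = np.arange(0.0, 0.01, 1.0 / fs)     # 10 ms record
mode = np.sin(2 * np.pi * 25e3 * t)    # 25 kHz "breathing-type" oscillation
f_inst = instantaneous_frequency(mode, fs)
f_mid = float(np.median(f_inst))       # median is robust to Hilbert end effects
```

    Applied to each IMF in turn, this assigns every mode the time-resolved frequency band referred to in the abstract.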

  15. Cardiopulmonary Resuscitation Pattern Evaluation Based on Ensemble Empirical Mode Decomposition Filter via Nonlinear Approaches.

    PubMed

    Sadrawi, Muammar; Sun, Wei-Zen; Ma, Matthew Huei-Ming; Dai, Chun-Yi; Abbod, Maysam F; Shieh, Jiann-Shing

    2016-01-01

    Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMFs) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and detrended fluctuation analysis (DFA) are collated and statistical analysis is performed using ANOVA. The primary outcome measure assessed is the patient survival rate after two hours. The CPR patterns of 951 asystole patients were analyzed for the quality of CPR delivered. There was no significant difference observed in the CPR-related IMF peak-to-peak interval analysis between patients younger and older than 60 years of age, and similarly for the amplitude-difference evaluation using SE and DFA. However, a difference was noted for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, together with higher complexity of the CPR-IMF amplitude differences. PMID:27529068

  16. Spectral analysis of Hall-effect thruster plasma oscillations based on the empirical mode decomposition

    SciTech Connect

    Kurzyna, J.; Mazouffre, S.; Lazurenko, A.; Albarede, L.; Bonhomme, G.; Makowski, K.; Dudeck, M.; Peradzynski, Z.

    2005-12-15

    Hall-effect thruster plasma oscillations recorded by means of probes located at the channel exit are analyzed using the empirical mode decomposition (EMD) method. This self-adaptive technique makes it possible to decompose a nonstationary signal into a set of intrinsic modes, and acts as a very efficient filter that separates the contributions of different underlying physical mechanisms. Applying the Hilbert transform to the whole set of modes allows peculiar events to be identified and assigned a range of instantaneous frequency and power. In addition to 25 kHz breathing-type oscillations, which are unambiguously identified, the EMD approach confirms the existence of oscillations with instantaneous frequencies in the range of 100-500 kHz, typical for ion transit-time oscillations. Modeling of high-frequency modes (ν ≈ 10 MHz) resulting from EMD of the measured waveforms supports the idea that high-frequency plasma oscillations originate from electron-density perturbations propagating azimuthally with the electron drift velocity.

  17. Inferring causal molecular networks: empirical assessment through a community-based effort

    PubMed Central

    Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-01-01

    Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648

  18. Empirical application of empathy enhancing program based on movement concept for married couples in conflict

    PubMed Central

    Kim, Soo-Yeon; Kang, Hye-Won; Chung, Yong-Chul; Park, Seungha

    2013-01-01

    In the field of marital therapy, it is known that couple movement programs help married couples faced with conflict situations to rebuild their relationship and maintain family homeostasis. The purpose of this study was to configure and apply a kinesthetic empathy program and to assess its effectiveness for married couples in conflict. To achieve the research aims, a qualitative method was employed with three couples (six people) participating in the expressive movement program developed for this study. Focus group interviews were used for data collection, mixing semi-structured and unstructured questionnaires. The results were as follows. First, through the kinesthetic empathy enhancing program, participants could develop self-awareness and emotional attunement. Second, the results showed a relationship between intention and empathy: "knowing the spouse's hidden intention" was a significant factor in understanding others. Third, the kinesthetic empathy program could complement general marriage counseling programs. The results of this study provide empirical evidence that the movement program functions as an empathy enhancer through the process of perceiving, feeling, thinking, and interacting with others. PMID:24278896

  19. Network-Based Enriched Gene Subnetwork Identification: A Game-Theoretic Approach.

    PubMed

    Razi, Abolfazl; Afghah, Fatemeh; Singh, Salendra; Varadan, Vinay

    2016-01-01

    Identifying subsets of genes that jointly mediate cancer etiology, progression, or therapy response remains a challenging problem due to the complexity and heterogeneity in cancer biology, a problem further exacerbated by the relatively small number of cancer samples profiled as compared with the sheer number of potential molecular factors involved. Pure data-driven methods that merely rely on multiomics data have been successful in discovering potentially functional genes but suffer from high false-positive rates and tend to report subsets of genes whose biological interrelationships are unclear. Recently, integrative data-driven models have been developed to integrate multiomics data with signaling pathway networks in order to identify pathways associated with clinical or biological phenotypes. However, these approaches suffer from an important drawback of being restricted to previously discovered pathway structures and miss novel genomic interactions as well as potential crosstalk among the pathways. In this article, we propose a novel coalition-based game-theoretic approach to overcome the challenge of identifying biologically relevant gene subnetworks associated with disease phenotypes. The algorithm starts from a set of seed genes and traverses a protein-protein interaction network to identify modulated subnetworks. The optimal set of modulated subnetworks is identified using Shapley value that accounts for both individual and collective utility of the subnetwork of genes. The algorithm is applied to two illustrative applications, including the identification of subnetworks associated with (i) disease progression risk in response to platinum-based therapy in ovarian cancer and (ii) immune infiltration in triple-negative breast cancer. The results demonstrate an improved predictive power of the proposed method when compared with state-of-the-art feature selection methods, with the added advantage of identifying novel potentially functional gene subnetworks
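    The Shapley value used to score subnetworks can be computed exactly for small player sets by averaging each player's marginal contribution over all coalitions. The toy characteristic function below (two genes valuable only jointly, one independently) is purely illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    # Exact Shapley value: phi_i = sum over coalitions S not containing i of
    # |S|! (n - |S| - 1)! / n! * (v(S u {i}) - v(S)).
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(frozenset(S) | {i}) - value(frozenset(S)))
        phi[i] = total
    return phi

# Toy "subnetwork utility": genes A and B are only useful jointly; C adds 1 alone.
def v(S):
    score = 1.0 if {"A", "B"} <= S else 0.0
    return score + (1.0 if "C" in S else 0.0)

phi = shapley_values(["A", "B", "C"], v)
```

    The synergistic pair A and B split their joint utility evenly, while C keeps its standalone contribution; the efficiency property guarantees the shares sum to the grand coalition's value.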

  20. Budget impact of rare diseases: proposal for a theoretical framework based on evidence from Bulgaria.

    PubMed

    Iskrov, G; Jessop, E; Miteva-Katrandzhieva, T; Stefanov, R

    2015-05-01

    This study aimed to estimate the impact of rare disease (RD) drugs on Bulgaria's National Health Insurance Fund's (NHIF) total drug budget for 2011-2014. While standard budget impact analysis is usually used in a prospective way, assessing the impact of new health technologies on the health system's sustainability, we adopted a retrospective approach instead. Budget impact was quantified from the NHIF perspective. Descriptive statistics were used to analyse cost details, while dynamics were studied using chain-linked growth rates (each period preceding the accounting period serves as the base). NHIF costs for RD therapies were expected to increase up to 74.5 million BGN in 2014 (7.8% of NHIF's total pharmaceutical expenditure). The greatest increase in cost per patient and number of patients treated was observed in conditions for which therapies had been newly approved for funding. While the simple cost drivers are well known - number of patients treated and mean cost per patient - in real-world settings these two factors are likely to depend on the availability and accessibility of effective innovative therapies. As RD were historically underdiagnosed, undertreated and underfunded in Bulgaria, improved access to RD drugs will inevitably lead to an increasing budget burden for payers. Based on the evidence from this study, we propose a theoretical framework for a budget impact study of RD. First, a retrospective analysis could provide essential health policy insights in terms of impact on accessibility and population health, which are significant benchmarks in shaping funding decisions in healthcare. We suggest combining the classical prospective BIA with retrospective analysis in order to optimise health policy decision-making. Second, we recommend that budget impact studies focus on RD rather than orphan drugs (OD). In the policy context, RD are the public health priority; OD are just one of the tools to address the complex issues of RD. Moreover, OD is a dynamic
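    Chain-linking, as used above, simply expresses each period's change relative to the immediately preceding period. A minimal sketch with hypothetical annual cost figures (only the 74.5 million BGN endpoint is taken from the study):

```python
def chain_linked_growth(series):
    # Growth rate of each period over the immediately preceding period:
    # g_t = (x_t - x_{t-1}) / x_{t-1}.
    return [(curr - prev) / prev for prev, curr in zip(series, series[1:])]

# Hypothetical NHIF costs for RD therapies, million BGN, 2011-2014.
costs = [41.0, 52.5, 63.0, 74.5]
growth = chain_linked_growth(costs)  # one rate per year-on-year step
```

    Unlike fixed-base growth rates, each chain-linked rate isolates the budget pressure added in that specific year.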

  1. Microarray missing data imputation based on a set theoretic framework and biological knowledge.

    PubMed

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2006-01-01

    Gene expressions measured using microarrays usually suffer from the missing value problem. However, many data analysis methods require a complete data matrix. Although existing missing value imputation algorithms have shown good performance, they also have their limitations. For example, some algorithms perform well only when strong local correlation exists in the data, while others provide the best estimates when the data are dominated by global structure. In addition, these algorithms do not take any biological constraint into account in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge as a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets that take into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon, and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
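    The POCS iteration itself is simple: cyclically apply the projection operator of each convex set. The two toy sets below (a box and a hyperplane) merely stand in for the paper's correlation- and synchronization-based sets, each of which would need its own projection operator:

```python
import numpy as np

def pocs(x0, projections, n_iter=100):
    # Projection onto convex sets: cyclically project onto each set. With
    # convex sets and a nonempty intersection, the iterates converge into it.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        for proj in projections:
            x = proj(x)
    return x

# Toy constraints: C1 = box [0, 1]^n, C2 = hyperplane {x : sum(x) = 2}.
proj_box = lambda x: np.clip(x, 0.0, 1.0)
proj_plane = lambda x: x + (2.0 - x.sum()) / x.size

x = pocs(np.array([3.0, -1.0, 0.5, 0.2]), [proj_box, proj_plane])
```

    The convergence guarantee is exactly why POCS is attractive here: each piece of prior knowledge, encoded as a convex set, contributes one projection, and the alternating scheme reconciles them all.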

  2. Network-Based Enriched Gene Subnetwork Identification: A Game-Theoretic Approach.

    PubMed

    Razi, Abolfazl; Afghah, Fatemeh; Singh, Salendra; Varadan, Vinay

    2016-01-01

    Identifying subsets of genes that jointly mediate cancer etiology, progression, or therapy response remains a challenging problem due to the complexity and heterogeneity in cancer biology, a problem further exacerbated by the relatively small number of cancer samples profiled as compared with the sheer number of potential molecular factors involved. Pure data-driven methods that merely rely on multiomics data have been successful in discovering potentially functional genes but suffer from high false-positive rates and tend to report subsets of genes whose biological interrelationships are unclear. Recently, integrative data-driven models have been developed to integrate multiomics data with signaling pathway networks in order to identify pathways associated with clinical or biological phenotypes. However, these approaches suffer from an important drawback of being restricted to previously discovered pathway structures and miss novel genomic interactions as well as potential crosstalk among the pathways. In this article, we propose a novel coalition-based game-theoretic approach to overcome the challenge of identifying biologically relevant gene subnetworks associated with disease phenotypes. The algorithm starts from a set of seed genes and traverses a protein-protein interaction network to identify modulated subnetworks. The optimal set of modulated subnetworks is identified using Shapley value that accounts for both individual and collective utility of the subnetwork of genes. The algorithm is applied to two illustrative applications, including the identification of subnetworks associated with (i) disease progression risk in response to platinum-based therapy in ovarian cancer and (ii) immune infiltration in triple-negative breast cancer. The results demonstrate an improved predictive power of the proposed method when compared with state-of-the-art feature selection methods, with the added advantage of identifying novel potentially functional gene subnetworks

  3. Empirical evidence for identical band gaps in substituted C60 and C70 based fullerenes

    SciTech Connect

    Mattias Andersson, L.; Tanaka, Hideyuki

    2014-01-27

    Optical absorptance data, and a strong correlation between solar cell open circuit voltages and the ionization potentials of a wide range of differently substituted fullerene acceptors, are presented as empirical evidence for identical, or at least very similar, band gaps in all substituted C60 and C70 based fullerenes. Both the number and kind of substituents in this study are sufficiently varied to imply generality. While the band gaps of the fullerenes remain the same for all the different substitutions, their ionization potentials vary greatly, spanning more than 0.4 eV. The merits and drawbacks of using these results together with photoelectron-based techniques to determine relative fullerene energy levels for, e.g., organic solar cell applications, compared to more direct electrochemical methods, are also discussed.

  4. [Removal Algorithm of Power Line Interference in Electrocardiogram Based on Morphological Component Analysis and Ensemble Empirical Mode Decomposition].

    PubMed

    Zhao, Wei; Xiao, Shixiao; Zhang, Baocan; Huang, Xiaojing; You, Rongyi

    2015-12-01

    Electrocardiogram (ECG) signals are susceptible to disturbance by 50 Hz power line interference (PLI) in the process of acquisition and conversion. This paper therefore proposes a novel PLI removal algorithm based on morphological component analysis (MCA) and ensemble empirical mode decomposition (EEMD). Firstly, according to the morphological differences in ECG waveform characteristics, the noisy ECG signal was decomposed into a mutated component, a smooth component and a residual component by MCA. Secondly, the intrinsic mode functions (IMFs) corresponding to the PLI were filtered out. The noise suppression rate (NSR) and the signal distortion ratio (SDR) were used to evaluate the de-noising algorithm. Finally, the ECG signals were reconstructed. Experimental comparison showed that the proposed algorithm outperforms the improved Levkov algorithm: it not only effectively filters the PLI but also yields a smaller SDR value. PMID:27079083
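    For context, the conventional baseline for PLI removal is a narrow IIR notch at 50 Hz; approaches like the MCA/EEMD algorithm above aim to beat such filters on signal distortion. A hedged sketch of that baseline only (synthetic signal and an assumed 500 Hz sampling rate, not the paper's data or method):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500.0                                # Hz, assumed ECG sampling rate
t = np.arange(0.0, 4.0, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.25 * t)   # stand-in for the ECG content
pli = 0.5 * np.sin(2 * np.pi * 50.0 * t)  # 50 Hz power line interference
noisy = ecg_like + pli

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)   # narrow notch centred on 50 Hz
cleaned = filtfilt(b, a, noisy, padlen=500)  # zero-phase, generous padding

spectrum = np.abs(np.fft.rfft(cleaned))
pli_bin, ecg_bin = int(50.0 * 4), int(1.25 * 4)  # 0.25 Hz bins over a 4 s record
```

    The notch removes the 50 Hz line almost completely while leaving the low-frequency content intact; its weakness, which decomposition-based methods target, is distortion of ECG components that overlap the notch band.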

  5. Theoretical Issues

    SciTech Connect

    Marc Vanderhaeghen

    2007-04-01

    The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.

  6. An Empirical Study of Neural Network-Based Audience Response Technology in a Human Anatomy Course for Pharmacy Students.

    PubMed

    Fernández-Alemán, José Luis; López-González, Laura; González-Sequeros, Ofelia; Jayne, Chrisina; López-Jiménez, Juan José; Carrillo-de-Gea, Juan Manuel; Toval, Ambrosio

    2016-04-01

This paper presents an empirical study of a formative neural network-based assessment approach that uses mobile technology to provide pharmacy students with intelligent diagnostic feedback. An unsupervised learning algorithm was integrated with an audience response system called SIDRA in order to generate states that collect commonality in responses to questions and add diagnostic feedback for guided learning. A total of 89 pharmacy students enrolled on a Human Anatomy course were taught using two different teaching methods. Forty-four students employed intelligent SIDRA (i-SIDRA), whereas 45 students received the same training but without using i-SIDRA. A statistically significant difference was found between the experimental group (i-SIDRA) and the control group (traditional learning methodology), with t(87) = 6.598, p < 0.001. In four MCQ tests, the difference between the number of correct answers in the first attempt and in the last attempt was also studied. A global effect size of 0.644 was achieved in the meta-analysis carried out. The students expressed satisfaction with the content provided by i-SIDRA and the methodology used during the process of learning anatomy (M = 4.59). The new empirical contribution presented in this paper allows instructors to perform post hoc analyses of each particular student's progress to ensure appropriate training. PMID:26815339

  8. Combining Empirical Relationships with Data Based Mechanistic Modeling to Inform Solute Tracer Investigations across Stream Orders

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.; Covino, T. P.; Mortensen, J.

    2015-12-01

Solute transport studies in streams and rivers often begin with the introduction of conservative and reactive tracers into the water column. Information on the transport of these substances is then captured within tracer breakthrough curves (BTCs) and used to estimate, for instance, travel times and dissolved nutrient and carbon dynamics. Traditionally, these investigations have been limited to systems with small discharges (< 200 L/s) and short reach lengths (< 500 m), partly because a priori information on the reach's hydraulic characteristics (e.g., channel geometry, resistance and dispersion coefficients) is needed to predict arrival times, times to peak concentration and mean travel times. Current techniques to acquire these channel characteristics through preliminary tracer injections become cost prohibitive at higher stream orders, and semi-continuous water quality sensors collecting real-time information may be affected by erroneous readings caused by high turbidity (e.g., nitrate signals with SUNA instruments or fluorescence measures) and/or high total dissolved solids (e.g., making the use of salt tracers such as NaCl prohibitively expensive) in larger systems. Additionally, a successful time-of-travel study is valid for only a single discharge and river stage. We have developed a method to predict tracer BTCs and thus inform sampling frequencies at small and large stream orders, using empirical relationships developed from multiple tracer injections spanning several orders of magnitude in discharge and reach length. This method was successfully tested in 1st to 8th order systems along the Middle Rio Grande River Basin in New Mexico, USA.
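The empirical-relationship idea can be illustrated with a small synthetic fit. Here travel time is generated from an assumed power law tau = a * Q^b * L^c (the coefficients and the injection dataset are invented, not the paper's calibrated values) and then recovered by linear least squares in log space:

```python
import numpy as np

# Synthetic injection dataset generated from an assumed power law
# tau = a * Q**b * L**c, with made-up values a=2.0, b=-0.3, c=1.0
Q = np.array([10., 50., 200., 1000., 5000., 100.])     # discharge, L/s
L = np.array([100., 300., 500., 2000., 8000., 5000.])  # reach length, m
tau = 2.0 * Q**-0.3 * L                                # mean travel time, s

# Fit log(tau) = log(a) + b*log(Q) + c*log(L) by least squares
X = np.column_stack([np.ones_like(Q), np.log(Q), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

def predict_tau(q, length):
    """Predicted mean travel time for a new discharge and reach length."""
    return a * q**b * length**c
```

With real injection data the residual scatter, not shown here, would indicate how transferable the relationship is across stream orders.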

  9. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    PubMed

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and the application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and signal data length on the EEMD decomposition is also investigated. For a given sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of grain size within the 30-210 μm range investigated in this study. The methodology is successfully employed for detection of defects in a 50 mm thick coarse grain austenitic stainless steel specimen. A signal to noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal thus proves effective for adaptive signal reconstruction with improved signal to noise ratio. The methodology was further employed for successful imaging of defects in a B-scan.
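The sum-of-selected-IMFs baseline that the proposed method improves upon can be sketched as follows, using stand-in arrays in place of a real EEMD output (the rows of `imfs`, the toy defect echo and the SNR definition are all illustrative assumptions):

```python
import numpy as np

def reconstruct_from_imfs(imfs, keep):
    """Sum a selected subset of IMFs (rows of `imfs`) into one signal."""
    return imfs[list(keep)].sum(axis=0)

def snr_db(signal, noisy):
    """SNR in dB of `noisy` relative to the known reference `signal`."""
    noise = noisy - signal
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

# toy "decomposition": row 0 = grain noise, row 1 = defect echo,
# row 2 = low-frequency trend (stand-ins for actual EEMD output)
t = np.linspace(0, 1, 1000)
echo = np.exp(-((t - 0.5) ** 2) / 0.001) * np.sin(2 * np.pi * 100 * t)
rng = np.random.default_rng(0)
grain = 0.5 * rng.standard_normal(t.size)
trend = 0.2 * t
imfs = np.vstack([grain, echo, trend])
raw = imfs.sum(axis=0)

rec = reconstruct_from_imfs(imfs, keep=[1, 2])  # drop the grain-noise IMF
gain = snr_db(echo, rec) - snr_db(echo, raw)    # SNR improvement, dB
```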

  10. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed from the two-mode affiliation network of two sets of actors, using data on worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between the two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze topological features at the node, edge and whole-network levels and interpret the different values of these features in light of the empirical data. This study is helpful for extending the use of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, it is useful for discovering the inner relationships between nations and regions from a new perspective.
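The one-mode projection step described here can be sketched with a toy affiliation matrix (the 3x4 holding pattern below is invented for illustration; the paper's actual network is far larger):

```python
import numpy as np

# Toy two-mode affiliation matrix B: rows = shareholders, columns = listed
# energy companies; B[i, j] = 1 if shareholder i holds shares of company j.
B = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
])

# One-mode projection onto companies: W[j, k] = number of common shareholders
W = B.T @ B
np.fill_diagonal(W, 0)  # drop self-links

# Weighted degree (strength) of each company node in the derivative network
strength = W.sum(axis=0)
```

The same matrix product projected the other way (`B @ B.T`) would give the shareholder-side derivative network.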

  11. Theoretical spectroscopic study of seven zinc(II) complexes with macrocyclic Schiff-base ligands.

    PubMed

    Sayin, Koray; Kariper, Sultan Erkan; Sayin, Tuba Alagöz; Karakaş, Duran

    2014-12-10

Seven zinc complexes, namely [ZnL(1)](2+), [ZnL(2)](2+), [ZnL(3)](2+), [ZnL(4)](2+), [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+), are studied theoretically. Structural parameters, vibration frequencies, electronic absorption spectra and (1)H and (13)C NMR spectra are obtained for the Zn(II) complexes of macrocyclic penta- and heptaaza Schiff-base ligands. Vibration spectra of the Zn(II) complexes are studied using Density Functional Theory (DFT) calculations at the B3LYP/LANL2DZ level. The UV-VIS and NMR spectra of the zinc complexes are obtained using the Time Dependent-Density Functional Theory (TD-DFT) and GIAO methods, respectively. Good agreement is found between the experimental data for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions and the calculated results. The geometries of the complexes are found to be distorted pentagonal planar for the [ZnL(1)](2+), [ZnL(2)](2+) and [ZnL(3)](2+) complex ions, distorted tetrahedral for the [ZnL(4)](2+) complex ion, and distorted pentagonal bipyramidal for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions. A ranking of biological activity is determined using quantum chemical parameters and is found to be: [ZnL(7)](2+)>[ZnL(6)](2+)>[ZnL(5)](2+)>[ZnL(3)](2+)>[ZnL(2)](2+)>[ZnL(1)](2+). PMID:24967540
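Rankings of this kind are often derived from conceptual-DFT global reactivity descriptors. The sketch below uses the standard Koopmans-type formulas (electronegativity, hardness, softness from frontier orbital energies); these may differ from the exact parameters used in the paper, and the orbital energies are made-up values, not those of the Zn(II) complexes:

```python
# Conceptual-DFT global reactivity descriptors from frontier orbital
# energies (Koopmans-type approximations). Input energies are in eV and
# are illustrative only.
def reactivity(e_homo_eV, e_lumo_eV):
    I = -e_homo_eV          # ionization potential
    A = -e_lumo_eV          # electron affinity
    chi = (I + A) / 2       # electronegativity
    eta = (I - A) / 2       # chemical hardness
    sigma = 1 / eta         # softness (often linked to higher activity)
    return {"I": I, "A": A, "chi": chi, "eta": eta, "sigma": sigma}

d = reactivity(-6.2, -2.0)  # hypothetical HOMO/LUMO energies
```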

  12. Cell death following BNCT: a theoretical approach based on Monte Carlo simulations.

    PubMed

    Ballarini, F; Bakeine, J; Bortolussi, S; Bruschi, P; Cansolino, L; Clerici, A M; Ferrari, C; Protti, N; Stella, S; Zonta, A; Zonta, C; Altieri, S

    2011-12-01

In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. These studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called "lethal aberrations" (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the (10)B(n,α)(7)Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the (14)N(n,p)(14)C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, validating the model for cell death induced by monochromatic radiation fields. The model predictions also agreed well with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this validated the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death. PMID:21481595
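The "lethal aberrations imply cell death" assumption can be connected to a survival curve via a standard simplification not stated in the abstract: if lethal aberrations per cell are Poisson distributed, survival equals the probability of zero lethal aberrations. The sketch below checks the analytic form against a small Monte Carlo; it is in the spirit of such models, not a reproduction of the Pavia code:

```python
import math
import random

def survival_fraction(mean_lethal):
    """Analytic survival: P(no lethal aberration) for a Poisson count."""
    return math.exp(-mean_lethal)

def mc_survival(mean_lethal, n_cells=100_000, seed=1):
    """Monte Carlo check: sample a lethal-aberration count per cell and
    score the fraction of cells with zero lethal aberrations."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_cells):
        # sample a Poisson variate by CDF inversion (fine for small means)
        count, p = 0, math.exp(-mean_lethal)
        cdf, u = p, rng.random()
        while u > cdf:
            count += 1
            p *= mean_lethal / count
            cdf += p
        if count == 0:
            survivors += 1
    return survivors / n_cells
```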

  13. Multiscale Detrended Cross-Correlation Analysis of Traffic Time Series Based on Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Yin, Yi; Shang, Pengjian

    2015-04-01

In this paper, we propose multiscale detrended cross-correlation analysis (MSDCCA) to detect long-range power-law cross-correlation of signals in the presence of nonstationarity. To improve performance and robustness, we further introduce empirical mode decomposition (EMD) to eliminate noise effects, combining MSDCCA with EMD into the MS-EDXA method, and then systematically investigate the multiscale cross-correlation structure of real traffic signals. We apply the MSDCCA and MS-EDXA methods to study cross-correlations in three situations: velocity and volume on one lane, velocities at the present and the next moment, and velocities on adjacent lanes, and compare their spectra. Where the difference between the MSDCCA and MS-EDXA spectra becomes negligible there is a crossover, which marks the turning point of the difference. The crossover results from the competition between noise effects in the original signals and the intrinsic fluctuation of traffic signals, and divides the plot of the spectra into two regions. In all three cases, the MS-EDXA method increases the average of the local scaling exponents and decreases their standard deviation, providing relatively stable, persistent cross-correlated scaling behavior that makes the analysis more precise and more robust once noise is removed. Applying the MS-EDXA method avoids the inaccurate characterisation of the multiscale cross-correlation structure at short scales, including the spectrum minimum, the range of spectrum fluctuation and the general trend, which are caused by noise in the original signals.
We conclude that traffic velocity and volume are long-range cross-correlated, in accordance with their actual evolution, while velocities at the present and the next moment and velocities on adjacent lanes reflect strong cross-correlations both in temporal and

  14. An Empirical Study of Instructor Adoption of Web-Based Learning Systems

    ERIC Educational Resources Information Center

    Wang, Wei-Tsong; Wang, Chun-Chieh

    2009-01-01

    For years, web-based learning systems have been widely employed in both educational and non-educational institutions. Although web-based learning systems are emerging as a useful tool for facilitating teaching and learning activities, the number of users is not increasing as fast as expected. This study develops an integrated model of instructor…

  15. Meta-Analysis of Group Learning Activities: Empirically Based Teaching Recommendations

    ERIC Educational Resources Information Center

    Tomcho, Thomas J.; Foels, Rob

    2012-01-01

Teaching researchers commonly employ group-based collaborative learning approaches in Teaching of Psychology activities. However, the authors know relatively little about the effectiveness of group-based activities in relation to known psychological processes associated with group dynamics. Therefore, the authors conducted a meta-analytic…

  16. Constructions, Semantic Compatibility, and Coercion: An Empirical Usage-Based Approach

    ERIC Educational Resources Information Center

    Yoon, Soyeon

    2012-01-01

This study investigates the nature of semantic compatibility between constructions and the lexical items that occur in them in relation to language use, and the related concept of coercion, based on a usage-based approach to language in which linguistic knowledge (grammar) is grounded in language use. This study shows that semantic compatibility…

  17. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    ERIC Educational Resources Information Center

    Henggeler, Scott W.; Sheidow, Ashli J.

    2012-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief…

  18. Formula-Based Public School Funding System in Victoria: An Empirical Analysis of Equity

    ERIC Educational Resources Information Center

    Bandaranayake, Bandara

    2013-01-01

This article explores the formula-based school funding system in the state of Victoria, Australia, where state funds are directly allocated to schools based on a range of equity measures. The impact of Victoria's funding system for education in terms of alleviating inequality and disadvantage is contentious, to say the least. It is difficult…

  19. Theoretical and experimental investigations in characterizing and developing multiplexed diamond-based neutron spectrometers

    NASA Astrophysics Data System (ADS)

    Lukosi, Eric

In this work a novel technique of multiplexing diamond is presented in which electronic-grade diamond plates are connected electrically in series and in parallel to increase the overall detection efficiency of diamond-based neutron detection systems. Theoretical results utilizing MCNPX indicate that further development of this simulation software is required to accurately predict the response of diamond to various interrogating neutron energies. However, the results were accurate enough to indicate that an equivalent diamond plate 1 cm thick only lowers the energy resolution of the 12C(n,αo)9Be peak from a 14.1 MeV interrogating neutron reference field by a factor of two compared to a single diamond plate 0.5 mm thick, while increasing the detection efficiency from 1.34 percent for a single diamond plate to 25.4 percent for the 1 cm thick plate. Further, the number of secondary neutron interactions is minimal, approximately 5.3 percent, for a detection medium of this size. It is also shown that photons can interfere with lower energy neutron signals when multiplexing is used, especially at lower impinging photon energies, although the full energy peak still does not present itself dominantly in the pulse height spectrum for multiplexed arrays approaching 1 cm with respect to the interrogating neutron reference field vector. Experimental results indicate that series multiplexing cannot be used to increase the active detection volume of a diamond-based neutron spectrometer, because diamond detection media in series interact with each other and with the input capacitor of a charge-sensitive preamplifier, where severe signal degradation is seen due to the equal impedances of the single crystal diamond plates. However, parallel multiplexing shows great promise, although the technique is limited by the large capacitance at the preamplifier input for a large parallel multiplexed array. Still, the latter

  20. Cultivating mindfulness in health care professionals: a review of empirical studies of mindfulness-based stress reduction (MBSR).

    PubMed

    Irving, Julie Anne; Dobkin, Patricia L; Park, Jeeseon

    2009-05-01

Demands faced by health care professionals include heavy caseloads, limited control over the work environment, long hours, and organizational structures and systems in transition. Such conditions have been directly linked to increased stress and symptoms of burnout, which in turn have adverse consequences for clinicians and the quality of care provided to patients. Consequently, there is an impetus to develop curricula that foster wellness and the necessary self-care skills for clinicians. This review examines the potential benefits of mindfulness-based stress reduction (MBSR) programs aimed at enhancing well-being and coping with stress in this population. Empirical evidence indicates that participation in MBSR yields benefits for clinicians in the domains of physical and mental health. Conceptual and methodological limitations of the existing studies and suggestions for future research are discussed.

  1. Re-reading nursing and re-writing practice: towards an empirically based reformulation of the nursing mandate.

    PubMed

    Allen, Davina

    2004-12-01

    This article examines field studies of nursing work published in the English language between 1993 and 2003 as the first step towards an empirically based reformulation of the nursing mandate. A decade of ethnographic research reveals that, contrary to contemporary theories which promote an image of nursing work centred on individualised unmediated caring relationships, in real-life practice the core nursing contribution is that of the healthcare mediator. Eight bundles of activity that comprise this intermediary role are described utilising evidence from the literature. The mismatch between nursing's culture and ideals and the structure and constraints of the work setting is a chronic source of practitioner dissatisfaction. It is argued that the profession has little to gain by pursuing an agenda of holistic patient care centred on emotional intimacy and that an alternative occupational mandate focused on the healthcare mediator function might make for more humane health services and a more viable professional future.

  2. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    PubMed

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. Clicks are generated through the user's blinking, with a detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at reducing artifacts from head movements as well as detecting blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
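The jitter-removal role of the Kalman filter can be sketched with a minimal constant-velocity filter in one dimension. The tuning values (`q`, `r`) and the synthetic cursor track are assumptions for illustration, not the project's actual parameters:

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=4.0):
    """Constant-velocity Kalman filter smoothing a jittery 1-D cursor track.

    zs: noisy position measurements; q: process noise; r: measurement
    noise variance. A minimal sketch of the state-estimation idea.
    """
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(0)
true = np.linspace(0, 50, 200)                # steady cursor sweep
noisy = true + rng.normal(0, 2.0, true.size)  # gyroscope jitter
smoothed = kalman_1d(noisy)
```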

  3. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

In order to analyze the effect of engine vibration on cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. Firstly, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs occupying the same frequency bands were selected. Secondly, we calculated the spectral correlation coefficients between the selected IMFs, obtaining the main frequency bands in which engine vibration has a significant impact on cab noise. Thirdly, the dominant frequencies were picked out and analyzed by spectral analysis. The results show that the main frequency bands and dominant frequencies in which engine vibration seriously affects cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
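One way to realise a "spectral correlation coefficient" between band-matched IMFs is a Pearson correlation of magnitude spectra. This particular definition, and the toy tones standing in for IMFs below, are assumptions for illustration, not necessarily the paper's formula:

```python
import numpy as np

def spectral_corr(x, y):
    """Pearson correlation between the magnitude spectra of two signals."""
    X = np.abs(np.fft.rfft(x))
    Y = np.abs(np.fft.rfft(y))
    X = X - X.mean()
    Y = Y - Y.mean()
    return float(np.sum(X * Y) / np.sqrt(np.sum(X**2) * np.sum(Y**2)))

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
vib = np.sin(2 * np.pi * 120 * t)               # engine-vibration IMF (toy)
noise_same = np.sin(2 * np.pi * 120 * t + 0.7)  # cab-noise IMF, same band
noise_other = np.sin(2 * np.pi * 37 * t)        # cab-noise IMF, unrelated band
```

A phase shift leaves the magnitude spectrum unchanged, so the same-band pair correlates strongly while the unrelated band does not.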

  5. Empirically Supported Treatments in Psychotherapy: Towards an Evidence-Based or Evidence-Biased Psychology in Clinical Settings?

    PubMed Central

    Castelnuovo, Gianluca

    2010-01-01

The field of research and practice in psychotherapy has been deeply influenced by two different approaches: the empirically supported treatments (ESTs) movement, linked with the evidence-based medicine (EBM) perspective, and the “Common Factors” approach, typically connected with the “Dodo Bird Verdict”. Regarding the first perspective, a list of ESTs in the mental health field has been established since 1998, and criteria for “well-established” and “probably efficacious” treatments have arisen. The development of these paradigms was motivated by the emergence of a “managerial” approach and related remuneration systems for mental health providers and insurance companies. In this article ESTs are presented, also underlining some possible criticisms. Finally, complementary approaches that could add different evidence to psychotherapy research, in comparison with the traditional EBM approach, are presented. PMID:21833197

  6. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    PubMed

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.

  7. An Empirical Study of Univariate and Genetic Algorithm-Based Feature Selection in Binary Classification with Microarray Data

    PubMed Central

    Lecocke, Michael; Hess, Kenneth

    2007-01-01

Background We consider both univariate- and multivariate-based feature selection for the problem of binary classification with microarray data. The idea is to determine whether the more sophisticated multivariate approach leads to better misclassification error rates because of the potential to consider jointly significant subsets of genes (but without overfitting the data). Methods We present an empirical study in which 10-fold cross-validation is applied externally to both a univariate-based and two multivariate- (genetic algorithm (GA)-) based feature selection processes. These procedures are applied with respect to three supervised learning algorithms and six published two-class microarray datasets. Results Considering all datasets and learning algorithms, the average 10-fold external cross-validation error rates for the univariate-, single-stage GA-, and two-stage GA-based processes are 14.2%, 14.6%, and 14.2%, respectively. We also find that the optimism bias estimates from the GA analyses were half those of the univariate approach, but the selection bias estimates from the GA analyses were 2.5 times those of the univariate results. Conclusions We find that the 10-fold external cross-validation misclassification error rates were very comparable. Further, the two-stage GA approach did not demonstrate a significant advantage over the single-stage approach. We also find that the univariate approach had higher optimism bias and lower selection bias than both GA approaches. PMID:19458774
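The "external" cross-validation design (re-running feature selection inside every training fold rather than once on the full dataset) starts from disjoint fold indices. A minimal generator, with invented names, might look like:

```python
import numpy as np

def external_kfold(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    'External' here means the caller should repeat feature selection from
    scratch on each train_idx, so the test fold never informs selection.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(external_kfold(100, k=10))
```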

  8. Process-based models not always better than empirical models for simulating budburst of Norway spruce and birch in Europe.

    PubMed

    Olsson, Cecilia; Jönsson, Anna Maria

    2014-11-01

Budburst models have mainly been developed to capture the processes of individual trees, and vary in their complexity and plant physiological realism. We evaluated how well eleven models capture the variation in budburst of birch and Norway spruce in Germany, Austria, the United Kingdom and Finland. The comparison was based on the models' performance in relation to their underlying physiological assumptions under four different calibration schemes. The models were not able to accurately simulate the timing of budburst. In general the models overestimated the temperature effect, so that budburst was simulated too early in the United Kingdom and too late in Finland. Among the better performing models were three based on the growing degree day concept, with or without day length or chilling, and an empirical model based on spring temperatures. These models were also the least influenced by the calibration data. For birch the best calibration scheme was based on multiple sites in either Germany or Europe, and for Norway spruce the best scheme included multiple sites in Germany or cold years of all sites. Most model and calibration combinations showed greater bias with higher spring temperatures, mostly simulating earlier than observed budburst.
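A growing-degree-day budburst model of the kind compared here can be sketched in a few lines; the base temperature and threshold below are placeholder values, not calibrated parameters from the study:

```python
def gdd_budburst_day(daily_mean_temp, base=5.0, threshold=120.0):
    """Day (1-indexed) when accumulated growing degree days above `base`
    first reach `threshold`; None if the threshold is never reached."""
    gdd = 0.0
    for day, temp in enumerate(daily_mean_temp, start=1):
        gdd += max(0.0, temp - base)
        if gdd >= threshold:
            return day
    return None

# toy spring: daily mean temperature ramps from 0 to 15 degC over 90 days
temps = [15 * d / 89 for d in range(90)]
day = gdd_budburst_day(temps)
```

Warming the series shifts predicted budburst earlier, which is the behaviour whose overestimation the study reports.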

  9. Performance-based management and quality of work: an empirical assessment.

    PubMed

    Falzon, Pierre; Nascimento, Adelaide; Gaudart, Corinne; Piney, Cécile; Dujarier, Marie-Anne; Germe, Jean-François

    2012-01-01

    In France, in the private sector as in the public sector, performance-based management tends to become a norm. Performance-based management is supposed to improve service quality, productivity and efficiency, transparency of allotted means and achieved results, and to better focus the activity of employees and of the whole organization. This text reports a study conducted for the French Ministry of Budget by a team of researchers in ergonomics, sociology and management science, in order to assess the impact of performance-based management on employees, on teams and on work organization. About 100 interviews were conducted with employees of all categories and 6 working groups were set up in order to discuss and validate or amend our first analyses. Results concern several aspects: workload and work intensification, indicators and performance management and the transformation of jobs induced by performance management. PMID:22317310

  10. Accurate Young's modulus measurement based on Rayleigh wave velocity and empirical Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Li, Mingxia; Feng, Zhihua

    2016-07-01

    This paper presents a method for Young's modulus measurement based on Rayleigh wave speed. An error in Poisson's ratio has only a weak influence on Young's modulus estimated from Rayleigh wave speed, and Poisson's ratio varies little within a given material; thus, Young's modulus can be estimated accurately from the surface wave speed and a rough Poisson's ratio. We numerically analysed three methods, using Rayleigh, longitudinal, and transversal wave speeds respectively, and the error in Poisson's ratio had the least influence on the result in the method based on Rayleigh wave speed. An experiment was performed and proved the feasibility of this method. The speed-measurement device can be small, and no sample pretreatment is needed; hence, a portable instrument based on this method is possible. The method strikes a good compromise between usability and precision.
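    The core of the method can be sketched as follows. This uses the standard Bergmann-Viktorov approximation for the Rayleigh wave speed; the numerical values (a steel-like density, speed and Poisson's ratio) are illustrative assumptions, not data from the paper.

```python
# Estimate Young's modulus E from a measured Rayleigh wave speed c_R plus a
# rough (empirical) Poisson's ratio nu, via the standard approximation
#   c_R ~= c_s * (0.87 + 1.12 nu) / (1 + nu),  c_s = sqrt(G / rho),  E = 2 G (1 + nu).

def youngs_modulus_from_rayleigh(c_r, density, nu):
    """E in Pa from Rayleigh speed c_r (m/s), density (kg/m^3), Poisson's ratio nu."""
    c_s = c_r * (1.0 + nu) / (0.87 + 1.12 * nu)  # invert the c_R approximation
    shear_modulus = density * c_s ** 2           # G = rho * c_s^2
    return 2.0 * shear_modulus * (1.0 + nu)      # E = 2 G (1 + nu)

# Illustration of the paper's point: a deliberately wrong nu shifts E only weakly.
e_true = youngs_modulus_from_rayleigh(2950.0, 7850.0, 0.29)   # assumed "true" nu
e_off = youngs_modulus_from_rayleigh(2950.0, 7850.0, 0.25)    # nu off by 0.04
relative_error = abs(e_off - e_true) / e_true
```

    With longitudinal wave speed instead, the prefactor depends much more strongly on nu, which is why the Rayleigh-based route tolerates a rough Poisson's ratio.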

  11. A theoretical individual-based model of Brown Ring Disease in Manila clams, Venerupis philippinarum

    NASA Astrophysics Data System (ADS)

    Paillard, Christine; Jean, Fred; Ford, Susan E.; Powell, Eric N.; Klinck, John M.; Hofmann, Eileen E.; Flye-Sainte-Marie, Jonathan

    2014-08-01

    An individual-based mathematical model was developed to investigate the biological and environmental interactions that influence the prevalence and intensity of Brown Ring Disease (BRD), a disease caused by the bacterial pathogen Vibrio tapetis in the Manila clam (Venerupis (= Tapes, = Ruditapes) philippinarum). V. tapetis acts as an external microparasite, adhering to the surface of the mantle edge and its secretion, the periostracal lamina, causing the symptomatic brown deposit. Brown Ring Disease is atypical in that it leaves a shell scar, which provides a unique diagnostic tool for either live or dead clams. The model was formulated using laboratory and field measurements of BRD development in Manila clams, physiological responses of the clam to the pathogen, and the physiology of V. tapetis, as well as theoretical understanding of bacterial disease progression in marine shellfish. The simulation results obtained for an individual Manila clam were expanded to cohorts and populations using a probability distribution that prescribed a range of variability for parameters in a three-dimensional framework: assimilation rate, clam hemocyte activity rate (the number of bacteria ingested per hemocyte per day), and clam calcification rate (a measure of the ability to recover by covering over the symptomatic brown ring deposit), which sensitivity studies indicated to be important processes in determining BRD prevalence and intensity. This approach allows concurrent simulation of individuals with a variety of physiological capabilities (phenotypes), and hence, by implication, differing genotypic composition. Different combinations of the three variables provide robust estimates of the fate of individuals with particular characteristics in a population consisting of mixtures of all possible combinations. The BRD model was implemented using environmental observations from sites in Brittany, France, where Manila clams routinely exhibit BRD signs. 
The simulated

  12. Theoretical and experimental analysis of optical gyroscopes based on fiber ring resonators

    NASA Astrophysics Data System (ADS)

    Liu, Yao-ying; Xue, Chen-yang; Cui, Xiao-wen; Cui, Dan-feng; Wei, Li-ping; Wang, Yong-hua; Li, Yan-na

    2014-12-01

    Research on gyroscopes has a long history, but a thorough analysis of fiber ring resonator gyroscopes is still lacking. In this paper, a detailed theoretical analysis of the fiber ring gyroscope and its gyroscope effect is presented; the performance characteristics of the optical resonator gyroscope, ranging from the transmission function Tfrr, finesse and Q-factor to gyro sensitivity, signal-to-noise ratio, random walk and dynamic range, are all derived in detail. In addition, a large number of experiments were carried out to verify the derived theoretical results. By simulating the dependence of dQ on the turn number of the fiber ring and analyzing the frequency difference of the two counter-propagating waves (CW and CCW) of the rotated system, we conclude that as the turn number of the ring increases, the resonance depth increases while the dQ value decreases, and we obtain a high sensitivity of 0.21°/h, a random walk of 0.0035°/√h, and a Q factor of 8×10⁶. Moreover, in the digital frequency-locked dual rotation gyro experiments, a clear step effect was observed, and the experimental frequency-difference curve agrees well with the theoretical one. This research provides a good theoretical and experimental basis for the study of gyroscopes.

  13. Observations on the theoretical bases for seclusion of the psychiatric inpatient.

    PubMed

    Gutheil, T G

    1978-03-01

    Contemporary controversy concerning the seclusion of psychiatric inpatients is focused primarily on the issues of civil rights and behavior control. The author believes that we are in danger of losing sight of the therapeutic aspects of this treatment modality. He offers a brief review of the theoretical and clinical rationale for seclusion.

  14. Theoretical Bases for Service-Learning: Implications for Program Design and Effectiveness

    ERIC Educational Resources Information Center

    Permaul, Jane Szutu

    2009-01-01

    Background: Service-learning as pedagogy and a curricular consideration has been introduced to tertiary education in the Asia-Pacific Region. Theoretical frameworks and research in this area are still in their infancy. However, much can be learned from related theories and concepts, on which service-learning research can be conducted. Aims: This paper…

  15. Empirical Investigation into Motives for Choosing Web-Based Distance Learning Programs

    ERIC Educational Resources Information Center

    Alkhattabi, Mona

    2016-01-01

    Today, in association with rapid social and economic changes, there is an increasing level of demand for distance and online learning programs. This study will focus on identifying the main motivational factors for choosing a web-based distance-learning program. Moreover, it will investigate how these factors relate to age, gender, marital status…

  16. Inequality of Higher Education in China: An Empirical Test Based on the Perspective of Relative Deprivation

    ERIC Educational Resources Information Center

    Hou, Liming

    2014-01-01

    The primary goal of this paper is to examine what makes Chinese college students dissatisfied with entrance opportunities for higher education. Based on the author's survey data, we test two parameters that could be potential causes of this dissatisfaction: 1) distributive inequality, which emphasizes the individual's dissatisfaction caused by…

  17. Young Readers' Narratives Based on a Picture Book: Model Readers and Empirical Readers

    ERIC Educational Resources Information Center

    Hoel, Trude

    2015-01-01

    The article presents parts of a research project whose aim is to investigate six- to seven-year-old children's language use in storytelling. The children's oral texts are based on the wordless picture book "Frog, Where Are You?", which has been, and still remains, a frequent tool for collecting narratives from children. The Frog story…

  18. Teaching Standards-Based Group Work Competencies to Social Work Students: An Empirical Examination

    ERIC Educational Resources Information Center

    Macgowan, Mark J.; Vakharia, Sheila P.

    2012-01-01

    Objectives: Accreditation standards and challenges in group work education require competency-based approaches in teaching social work with groups. The Association for the Advancement of Social Work with Groups developed Standards for Social Work Practice with Groups, which serve as foundation competencies for professional practice. However, there…

  19. Perceptions of the Effectiveness of System Dynamics-Based Interactive Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Qudrat-Ullah, Hassan

    2010-01-01

    The use of simulations in general, and of system dynamics simulation-based interactive learning environments (SDILEs) in particular, is well recognized as an effective way of improving users' decision making and learning in complex, dynamic tasks. However, the effectiveness of SDILEs in classrooms has rarely been evaluated. This article describes…

  20. An Adaptive E-Learning System Based on Students' Learning Styles: An Empirical Study

    ERIC Educational Resources Information Center

    Drissi, Samia; Amirat, Abdelkrim

    2016-01-01

    Personalized e-learning implementation is recognized as one of the most interesting research areas in distance web-based education. Since each learner's learning style is different, e-learning must be fitted to the differing needs of learners. This paper presents an approach to integrating learning styles into adaptive e-learning hypermedia.…

  1. Introducing Evidence-Based Principles to Guide Collaborative Approaches to Evaluation: Results of an Empirical Process

    ERIC Educational Resources Information Center

    Shulha, Lyn M.; Whitmore, Elizabeth; Cousins, J. Bradley; Gilbert, Nathalie; al Hudib, Hind

    2016-01-01

    This article introduces a set of evidence-based principles to guide evaluation practice in contexts where evaluation knowledge is collaboratively produced by evaluators and stakeholders. The data from this study evolved in four phases: two pilot phases exploring the desirability of developing a set of principles; an online questionnaire survey…

  2. Journeys into Inquiry-Based Elementary Science: Literacy Practices, Questioning, and Empirical Study

    ERIC Educational Resources Information Center

    Howes, Elaine V.; Lim, Miyoun; Campos, Jaclyn

    2009-01-01

    Teaching literacy in inquiry-based science-teaching settings has recently become a focus of research in science education. Because professional scientists' uses of reading, writing, and speaking are foundational to their work, as well as to nonscientists' comprehension of it, it follows that literacy practices should also be central to science…

  3. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data, together with recent site characterizations based on shear-wave velocity measurements, provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  4. Wind-blown Sand Electrification Inspired Triboelectric Energy Harvesting Based on Homogeneous Inorganic Materials Contact: A Theoretical Study and Prediction

    PubMed Central

    Hu, Wenwen; Wu, Weiwei; Zhou, Hao-miao

    2016-01-01

    Triboelectric nanogenerators (TENGs) based on contact electrification between heterogeneous materials have been widely studied. Inspired by wind-blown sand electrification, we design a novel kind of TENG based on size-dependent electrification using homogeneous inorganic materials. Based on the asymmetric contact theory between homogeneous material surfaces, a calculation of surface charge density has been carried out. Furthermore, the theoretical output of a homogeneous-material-based TENG has been simulated. This work may thus pave the way for fabricating TENGs without the limitation of the static sequence. PMID:26817411

  5. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.

  6. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    PubMed Central

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho-kwang

    2014-01-01

    Phase transitions of solid-state materials are a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). The cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient-radius cations have a higher transition pressure. PMID:25417655

  7. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors.

    PubMed

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho-kwang

    2014-01-01

    Phase transitions of solid-state materials are a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). The cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient-radius cations have a higher transition pressure. PMID:25417655

  8. Evidence-based architectural and space design supports Magnet® empirical outcomes.

    PubMed

    Ecoff, Laurie; Brown, Caroline E

    2010-12-01

    This department expands nursing leaders' knowledge and competencies in health facility design. The editor of this department, Dr Jaynelle Stichler, asked guest authors, Drs Ecoff and Brown, to describe the process of using the conceptual models of a nursing evidence-based practice model and the Magnet Recognition Program® as a structured process to lead decision making in the planning and design processes and to achieve desired outcomes in hospital design.

  9. Patients’ Acceptance towards a Web-Based Personal Health Record System: An Empirical Study in Taiwan

    PubMed Central

    Liu, Chung-Feng; Tsai, Yung-Chieh; Jang, Fong-Lin

    2013-01-01

    The health care sector has become increasingly interested in developing personal health record (PHR) systems as an Internet-based telehealthcare implementation to improve the quality and decrease the cost of care. However, the factors that influence patients’ intention to use PHR systems remain unclear. Based on physicians’ therapeutic expertise, we implemented a web-based infertile PHR system and proposed an extended Technology Acceptance Model (TAM) that integrates the physician-patient relationship (PPR) construct into TAM’s original perceived ease of use (PEOU) and perceived usefulness (PU) constructs to explore which factors will influence the behavioral intentions (BI) of infertile patients to use the PHR. From ninety participants from a medical center, 50 valid responses to a self-rating questionnaire were collected, yielding a response rate of 55.56%. The partial least squares (PLS) technique was used to assess the causal relationships that were hypothesized in the extended model. The results indicate that infertile patients expressed a moderately high intention to use the PHR system. The PPR and PU of patients had significant effects on their BI to use PHR, whereas the PEOU indirectly affected the patients’ BI through the PU. This investigation confirms that PPR can have a critical role in shaping patients’ perceptions of the use of healthcare information technologies. Hence, we suggest that hospitals should promote the potential usefulness of PHR and improve the quality of the physician-patient relationship to increase patients’ intention of using PHR. PMID:24142185

  10. Anchor person shot detection for news video indexing based on graph-theoretical clustering and fuzzy if-then rules

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Li, Qi; Li, Jie

    2003-09-01

    Anchorperson shot detection is significant for video shot semantic parsing and for extracting indexing clues in content-based news video indexing and retrieval systems. This paper presents a model-free anchorperson shot detection scheme based on graph-theoretical clustering and fuzzy inference. First, a news video is segmented into video shots with an effective video syntactic parsing algorithm. For each shot, one frame is extracted from the frame sequence as a representative key frame. Then the graph-theoretical clustering algorithm is performed on the key frames to identify the anchorperson frames. The anchorperson frames are further refined based on face detection and fuzzy inference with if-then rules. The proposed scheme achieves a precision of 98.40% and a recall of 97.69% in the anchorperson shot detection experiment.
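    The clustering step can be illustrated with a generic threshold-graph sketch. The paper's actual key-frame features and clustering criterion are not specified here, so the toy 2-D "features" and the union-find component labelling below are assumptions for illustration only: anchorperson shots recur with near-identical studio framing, so their key frames tend to form one tight cluster.

```python
# Graph-theoretical clustering sketch: key frames whose pairwise distance falls
# below a threshold are linked by an edge, and connected components of the
# resulting graph are the clusters.

def connected_components(n, edges):
    """Union-find (with path halving) over n nodes; returns a component label per node."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return [find(i) for i in range(n)]

def cluster_key_frames(features, threshold):
    """Build a threshold graph on Euclidean distance; return per-frame cluster labels."""
    n = len(features)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dist = sum((a - b) ** 2 for a, b in zip(features[i], features[j])) ** 0.5
            if dist < threshold:
                edges.append((i, j))
    return connected_components(n, edges)

# Toy example: frames 0, 2 and 4 share a near-identical "studio" appearance.
frames = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.0), (9.0, 1.0), (0.0, 0.1)]
labels = cluster_key_frames(frames, threshold=1.0)
```

    In the full scheme, the large recurring cluster would then be screened by face detection and fuzzy if-then rules before being accepted as anchorperson frames.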

  11. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers may be able to implement an optimal gaze tracking system more easily. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  12. Empirical force field for cisplatin based on quantum dynamics data: case study of new parameterization scheme for coordination compounds.

    PubMed

    Yesylevskyy, S; Cardey, Bruno; Kraszewski, S; Foley, Sarah; Enescu, Mironel; da Silva, Antônio M; Dos Santos, Hélio F; Ramseyer, Christophe

    2015-10-01

    Parameterization of molecular complexes containing a metallic compound, such as cisplatin, is challenging due to the unconventional coordination nature of the bonds involving platinum atoms. In this work, we develop a new parameterization methodology for such compounds based on quantum dynamics (QD) calculations. We show that the coordination bonds and angles are more flexible than in normal covalent compounds. The influence of explicit solvent is also shown to be crucial in determining the flexibility of cisplatin in quantum dynamics simulations. Two empirical topologies of cisplatin were produced by fitting its atomic fluctuations against QD in vacuum and QD with an explicit first solvation shell of water molecules, respectively. A third topology, built in the standard way from the static optimized structure, was used for comparison. The latter leads to an excessively rigid molecule and exhibits much smaller fluctuations of the bonds and angles than QD reveals. It is shown that accounting for the high flexibility of the cisplatin molecule is needed for an adequate description of its first hydration shell. MD simulations with the flexible QD-based topology also reveal a significant decrease in the barrier to passive diffusion of cisplatin across the model lipid bilayer. These results confirm that the flexibility of organometallic compounds is an important feature to be considered in classical molecular dynamics topologies. The proposed methodology based on QD simulations provides a systematic way of building such topologies.

  13. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    PubMed Central

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluating hypoxia within tumors of the brain and other organs. The fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened the possibility of using this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques that allow MR measurement of the salient features of those models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. The review aims mostly at methods for measuring the oxygen extraction fraction (OEF) based on measuring the blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood (magnetic susceptibility and MR relaxation) and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then we describe a "through-space" effect: the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques that take advantage of these theoretical models. Some of these techniques, MR susceptometry and T2-based quantification of OEF, utilize the intravascular MR signal. Another technique, qBOLD, evaluates OEF by making use of through-space effects. In this review we target both scientists just entering the MR field and more experienced MR researchers.
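    For reference, the qBOLD approach mentioned above rests on the standard static-dephasing relations between the reversible relaxation rate R2' and OEF. This is a textbook summary in conventional notation, not a formula quoted from this abstract:

```latex
% Static-dephasing (qBOLD) relations in conventional notation.
R_2' = \lambda\,\delta\omega,
\qquad
\delta\omega = \frac{4}{3}\pi\,\gamma\,B_0\,\Delta\chi_0\,\mathrm{Hct}\cdot\mathrm{OEF},
\qquad\text{so}\qquad
\mathrm{OEF} = \frac{R_2'}{\lambda\cdot\frac{4}{3}\pi\,\gamma\,B_0\,\Delta\chi_0\,\mathrm{Hct}},
```

    where λ is the deoxygenated-blood volume fraction, γ the proton gyromagnetic ratio, B0 the main field, Δχ0 the susceptibility difference between fully deoxygenated and fully oxygenated blood, and Hct the hematocrit.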

  14. An empirical analysis of exposure-based regulation to abate toxic air pollution

    SciTech Connect

    Marakovits, D.M.; Considine, T.J.

    1996-11-01

    Title III of the 1990 Clean Air Act Amendments requires the Environmental Protection Agency to regulate 189 air toxics, including emissions from by-product coke ovens. Economists criticize the inefficiency of uniform standards, but Title III makes no provision for flexible regulatory instruments. Environmental health scientists suggest that population exposure, not necessarily ambient air quality, should motivate environmental air pollution policies. Using an engineering-economic model of the United States steel industry, we estimate that an exposure-based policy can achieve the same level of public health as coke oven emissions standards and can reduce compliance costs by up to 60.0%. 18 refs., 3 figs., 1 tab.

  15. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    SciTech Connect

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho -kwang

    2014-11-24

    Phase transitions of solid-state materials are a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient-radius cations have a higher transition pressure.

  16. Competence-based demands made of senior physicians: an empirical study to evaluate leadership competencies.

    PubMed

    Lehr, Bosco; Ostermann, Herwig; Schubert, Harald

    2011-01-01

    As a result of increased economising in German hospitals, the organisation of senior medical staff deployment is changing, and new demands are being made of senior hospital management. Leadership competencies in the training and development of physicians are of prime importance to the successful discharge of managerial responsibilities. The present study investigates the actual and target leadership demands made of senior medical staff, as these demands are perceived. To this end, leadership demands were surveyed using a competence-based questionnaire and investigated with a view to potential for professional development, taking the senior management of psychiatric hospitals in Germany as an example. Overall, the results show high ratings in personal performance, with the greatest significance attributed to value-oriented competence in the actual assessment of leadership demands. Besides gender-specific differences in the actual assessments of single fields of competence, the greatest differences between target and actual demands are found in the competencies of self-management and communication. Competence-based core areas of leadership can be identified for the professional development of physicians, and an adaptive mode of procedure deduced.

  17. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    DOE PAGESBeta

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; et al

    2014-11-24

    Phase transitions of solid-state materials are a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient-radius cations have a higher transition pressure.

  18. Structural characterisation of some vanillic Mannich bases: Experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Petrović, Vladimir P.; Simijonović, Dušica; Novaković, Sladjana B.; Bogdanović, Goran A.; Marković, Svetlana; Petrović, Zorica D.

    2015-10-01

    In this paper, the synthesis and structural determination of 2-[1-(N-4-fluorophenylamino)-1-(4-hydroxy-3-methoxyphenyl)]methylcyclohexanone (MB-F) are presented. To determine the structure of this new compound, IR and NMR spectral characterisation was performed both experimentally and theoretically. Simulation of the spectral data was carried out using three functionals: B3LYP, B3LYP-D2, and M06-2X. The results obtained for MB-F were compared with those obtained for the similar, known compound 2-[1-(N-phenylamino)-1-(4-hydroxy-3-methoxyphenyl)]methylcyclohexanone (MB-H), whose crystal structure is presented here. Taking into account all experimental and theoretical findings, the structure of MB-F was proposed.

  19. Psychological first aid: a consensus-derived, empirically supported, competency-based training model.

    PubMed

    McCabe, O Lee; Everly, George S; Brown, Lisa M; Wendelboe, Aaron M; Abd Hamid, Nor Hashidah; Tallchief, Vicki L; Links, Jonathan M

    2014-04-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities. PMID:23865656

  20. An empirical study of the mechanisms of mindfulness in a mindfulness-based stress reduction program.

    PubMed

    Carmody, James; Baer, Ruth A; L B Lykins, Emily; Olendzki, Nicholas

    2009-06-01

    S. L. Shapiro and colleagues (2006) have described a testable theory of the mechanisms of mindfulness and how it affects positive change. They describe a model in which mindfulness training leads to a fundamental change in relationship to experience (reperceiving), which leads to changes in self-regulation, values clarification, cognitive and behavioral flexibility, and exposure. These four variables, in turn, result in salutogenic outcomes. Analyses of responses from participants in a mindfulness-based stress-reduction program did not support the mediating effect of changes in reperceiving on the relationship of mindfulness with those four variables. However, when mindfulness and reperceiving scores were combined, partial support was found for the mediating effect of the four variables on measures of psychological distress. Issues arising in attempts to test the proposed theory are discussed, including the description of the model variables and the challenges to their assessment.

  1. Psychological First Aid: A Consensus-Derived, Empirically Supported, Competency-Based Training Model

    PubMed Central

    Everly, George S.; Brown, Lisa M.; Wendelboe, Aaron M.; Abd Hamid, Nor Hashidah; Tallchief, Vicki L.; Links, Jonathan M.

    2014-01-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities. PMID:23865656

  2. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study †

    PubMed Central

    Tîrnăucă, Cristina; Montaña, José L.; Ontañón, Santiago; González, Avelino J.; Pardo, Luis M.

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In the latter case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956
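
    The recognition task described above can be illustrated with a minimal likelihood comparison: score an observed action trace under each strategy's PFA and pick the best. The abstract does not give the authors' formulation, so the function names and the toy one-state models below are illustrative assumptions.

```python
import numpy as np

def trace_loglik(trace, start, trans):
    """Log-likelihood of an observed action trace under a PFA.

    `start` is the initial state distribution; trans[a][i, j] is the
    probability of emitting symbol a while moving from state i to j.
    """
    alpha = np.asarray(start, dtype=float)
    loglik = 0.0
    for a in trace:
        alpha = alpha @ trans[a]      # propagate state mass while emitting a
        total = alpha.sum()
        loglik += np.log(total)
        alpha = alpha / total         # renormalise for numerical stability
    return loglik

def recognize(trace, models):
    """Behavioral Recognition: return the strategy whose PFA assigns the
    observed trace the highest likelihood. models[name] = (start, trans)."""
    return max(models, key=lambda name: trace_loglik(trace, *models[name]))
```

    With two single-state PFAs that mostly emit symbol 0 ("cautious") or symbol 1 ("aggressive"), `recognize` attributes a 0-heavy trace to the first model.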

  3. An Empirical Pixel-Based CTE Correction for ACS/WFC

    NASA Astrophysics Data System (ADS)

    Anderson, Jay

    2010-07-01

    This presentation summarizes a paper that has been recently published in PASP, Anderson & Bedin (2010). The paper describes our pixel-based approach to correcting ACS data for imperfect CTE (charge-transfer efficiency). We developed the approach by characterizing the size and profiles of trails behind warm pixels in dark exposures. We found an algorithm that simulates the way imperfect CTE impacts the readout process. To correct images for imperfect CTE, we use a forward-modeling procedure to determine the likely original distribution of charge, given the distribution that was read out. We applied this CTE-reconstruction algorithm to science images and found that the fluxes, positions and shapes of stars were restored to high fidelity. The ACS team is currently working to make this correction available to the public; they are also running tests to determine whether and how best to implement it in the pipeline.
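
    The forward-modeling idea can be sketched as a fixed-point iteration: simulate the readout of the current estimate and push the estimate toward the original distribution whose simulated readout matches the observation. The toy one-dimensional trailing model below is an assumption for illustration, not the Anderson & Bedin algorithm.

```python
import numpy as np

def readout_with_cte(image, loss=0.1):
    """Toy readout model: each pixel leaves a fraction `loss` of its
    charge in a trail one pixel behind (stand-in for the real CTE model)."""
    image = np.asarray(image, dtype=float)
    trailed = image * (1.0 - loss)
    trailed[1:] += image[:-1] * loss
    return trailed

def correct_cte(observed, n_iter=30, loss=0.1):
    """Forward-modeling correction: iterate estimate += observed - F(estimate)
    until the simulated readout of the estimate reproduces the observation."""
    observed = np.asarray(observed, dtype=float)
    estimate = observed.copy()
    for _ in range(n_iter):
        estimate += observed - readout_with_cte(estimate, loss)
    return estimate
```

    For this linear toy model the iteration converges because the update operator is a contraction; the recovered charge distribution matches the original to numerical precision.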

  4. Psychological first aid: a consensus-derived, empirically supported, competency-based training model.

    PubMed

    McCabe, O Lee; Everly, George S; Brown, Lisa M; Wendelboe, Aaron M; Abd Hamid, Nor Hashidah; Tallchief, Vicki L; Links, Jonathan M

    2014-04-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities.

  5. Empirical estimation of consistency parameter in intertemporal choice based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2007-07-01

    Impulsivity and inconsistency in intertemporal choice have been attracting attention in econophysics and neuroeconomics. Although loss of self-control by substance abusers is strongly related to their inconsistency in intertemporal choice, researchers in neuroeconomics and psychopharmacology have usually studied impulsivity in intertemporal choice using a discount rate (e.g. hyperbolic k), with little effort being expended on parameterizing a subject's inconsistency in intertemporal choice. Recent studies using Tsallis' statistics-based econophysics have found a discount function (i.e. the q-exponential discount function) which may continuously parameterize a subject's consistency in intertemporal choice. In order to examine the usefulness of the consistency parameter (0⩽q⩽1) in the q-exponential discounting function in behavioral studies, we experimentally estimated the consistency parameter q in Tsallis' statistics-based discounting function by assessing the points of subjective equality (indifference points) at seven delays (1 week to 25 years) in humans (N=24). We observed that most (N=19) subjects' intertemporal choice was completely inconsistent (q=0, i.e. hyperbolic discounting), the mean consistency (0⩽q⩽1) was smaller than 0.5, and only one subject had a completely consistent intertemporal choice (q=1, i.e. exponential discounting). There was no significant correlation between impulsivity and inconsistency parameters. Our results indicate that individual differences in consistency in intertemporal choice can be parameterized by introducing a q-exponential discount function and that most people discount delayed rewards hyperbolically, rather than exponentially (i.e. mean q is smaller than 0.5). Further, impulsivity and inconsistency in intertemporal choice can be considered as separate behavioral tendencies. The usefulness of the consistency parameter q in psychopharmacological studies of addictive behavior was demonstrated in the present study.
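
    The q-exponential discount function referred to above has the standard closed form V(D) = A / [1 + (1 - q) k D]^(1/(1-q)), which reduces to hyperbolic discounting at q = 0 and to exponential discounting in the limit q → 1. A minimal implementation (function name assumed):

```python
import numpy as np

def q_exponential_discount(delay, k, q, amount=1.0):
    """Tsallis q-exponential discount function.

    V(D) = A / (1 + (1 - q) * k * D) ** (1 / (1 - q)).
    q = 0 gives hyperbolic discounting A / (1 + k * D);
    q -> 1 recovers exponential discounting A * exp(-k * D).
    """
    if np.isclose(q, 1.0):
        return amount * np.exp(-k * delay)   # exponential limit
    return amount / (1.0 + (1.0 - q) * k * delay) ** (1.0 / (1.0 - q))
```

    Fitting q to a subject's indifference points then yields the consistency parameter discussed in the abstract.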

  6. A theoretical framework for whole-plant carbon assimilation efficiency based on metabolic scaling theory: a test case using Picea seedlings.

    PubMed

    Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J

    2015-06-01

    Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on the metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO(2) and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate or dark respiratory rate, or overall growth rate and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct
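
    Predictions (i) and (ii) amount to isometric scaling, i.e. a slope near 1 when flux is regressed on size on log-log axes. A minimal sketch of that check (function name assumed):

```python
import numpy as np

def scaling_exponent(mass, flux):
    """Fit flux = b * mass**a on log-log axes via ordinary least squares.

    A fitted exponent a close to 1 indicates the isometric scaling the
    metabolic-scaling framework predicts."""
    slope, intercept = np.polyfit(np.log(mass), np.log(flux), 1)
    return slope, np.exp(intercept)
```
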

  7. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noised ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs). We then selected IMFs and reconstructed them to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original ECG signal. PMID:25219236
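
    The paper's exact improved threshold function is not given in the abstract; the sketch below uses one common hard/soft compromise (continuous at the threshold like soft thresholding, asymptotically hard for large coefficients) as an assumed stand-in applied to wavelet coefficients.

```python
import numpy as np

def improved_threshold(coeffs, lam, alpha=1.0):
    """Compromise threshold: zero below lam, and above lam shrink by
    lam * exp(-alpha * (|w| - lam)), so the shrinkage decays toward
    hard thresholding as |w| grows."""
    c = np.asarray(coeffs, dtype=float)
    mag = np.abs(c)
    shrunk = np.sign(c) * (mag - lam * np.exp(-alpha * (mag - lam)))
    return np.where(mag >= lam, shrunk, 0.0)
```

    Continuity at the threshold avoids the discontinuity artifacts of hard thresholding, while the exponential decay avoids the constant bias of soft thresholding.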

  8. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    SciTech Connect

    Bethermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Hezaveh, Yashar; Le Borgne, Damien; Buat, Veronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-10-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  9. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations.

    PubMed

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observations were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts are needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383
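
    A crude sketch of EOF-based merging under stated assumptions: weight each input product by the loading of the leading EOF of the anomaly matrix. The paper's modified-EOF method additionally constrains the integration with the EC ground observations; all names here are illustrative.

```python
import numpy as np

def eof_merge(sources):
    """Merge several gridded estimates of the same field.

    Each source (row) is weighted by its loading on the leading EOF of
    the row-centred anomaly matrix, then the weighted average is returned."""
    X = np.asarray(sources, dtype=float)          # k sources x N grid points
    anom = X - X.mean(axis=1, keepdims=True)      # remove each source's mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    w = np.abs(U[:, 0])                           # leading-EOF loadings
    w = w / w.sum()
    return w @ X                                  # weighted merge
```
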

  10. Chemical and physical influences on aerosol activation in liquid clouds: an empirical study based on observations from the Jungfraujoch, Switzerland

    NASA Astrophysics Data System (ADS)

    Hoyle, C. R.; Webster, C. S.; Rieder, H. E.; Hammer, E.; Gysel, M.; Bukowiecki, N.; Weingartner, E.; Steinbacher, M.; Baltensperger, U.

    2015-06-01

    A simple empirical model to predict the number of aerosols which activate to form cloud droplets in a warm, free tropospheric cloud has been established, based on data from four summertime Cloud and Aerosol Characterisation Experiments (CLACE) campaigns at the Jungfraujoch (JFJ). It is shown that 76% of the observed variance in droplet numbers can be represented by a model accounting only for the number of potential CCN (defined as number of particles larger than 90 nm in diameter), while the mean errors in the model representation may be reduced by the addition of further explanatory variables, such as the mixing ratios of O3, CO and the height of the measurements above cloud base. The model has similar ability to represent the observed droplet numbers in each of the individual years, as well as for the two predominant local wind directions at the JFJ (north west and south east). Given the central European location of the JFJ, with air masses in summer being representative of the free troposphere with regular boundary layer in-mixing via convection, we expect that this model is applicable to warm, free tropospheric clouds over the European continent.
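
    A model of this kind is essentially a multiple linear regression of droplet number on the explanatory variables (potential CCN, O3, CO, height above cloud base), with the explained variance reported as R². A minimal sketch with assumed names:

```python
import numpy as np

def fit_droplet_model(predictors, droplets):
    """Ordinary least squares N_d ~ predictors (plus intercept).

    Returns the coefficient vector and the fraction of the observed
    variance in droplet number explained by the model (R^2)."""
    A = np.c_[predictors, np.ones(len(droplets))]      # design matrix
    coef, *_ = np.linalg.lstsq(A, droplets, rcond=None)
    pred = A @ coef
    ss_res = np.sum((droplets - pred) ** 2)
    ss_tot = np.sum((droplets - droplets.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```
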

  11. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations.

    PubMed

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types.

  12. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed in our school. We decomposed noised ECG signals with the proposed method using the EEMD and calculated a series of intrinsic mode functions (IMFs). Then we selected IMFs and reconstructed them to realize the de-noising for ECG. The processed ECG signals were filtered again with wavelet transform using improved threshold function. In the experiments, MIT-BIH ECG database was used for evaluating the performance of the proposed method, contrasting with de-noising method based on EEMD and wavelet transform with improved threshold function alone in parameters of signal to noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can realize the ECG denoising and meanwhile keep the characteristics of original ECG signal.

  13. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383

  14. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement, and low identification rates. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
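
    The PNN classification stage can be sketched as a Gaussian Parzen-window classifier: average a kernel over each class's training exemplars and pick the class with the highest density. The IEEMD feature-extraction stage is omitted, and the names and smoothing parameter are assumptions.

```python
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.5):
    """Probabilistic Neural Network: per-class Gaussian Parzen-window
    density estimate; classify to the class of highest average density."""
    classes = np.unique(train_y)
    preds = []
    for x in np.atleast_2d(test_X):
        scores = [np.mean(np.exp(-np.sum((train_X[train_y == c] - x) ** 2,
                                         axis=1) / (2.0 * sigma ** 2)))
                  for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

    In the paper's setting, `train_X` would hold the IMF energy and standard-deviation features and `train_y` the known cutting patterns.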

  15. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using a proteotypic peptide-based quantitative proteomics strategy, because of differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS was explicitly illustrated; different experimental variables, such as different MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast to mouse and human, and in quantitative methods from label-free to O18/O16-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range.
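
    A rough sketch of the selection idea, under the assumption that "linearly correlated response peptides" can be screened by their Pearson correlation with a reference intensity profile across runs; the published ERLPS rule may differ in detail, and all names are illustrative.

```python
import numpy as np

def select_linear_peptides(intensities, r_min=0.95):
    """Keep the indices of peptides whose response across runs correlates
    linearly (Pearson r >= r_min) with the median peptide profile."""
    X = np.asarray(intensities, dtype=float)   # peptides x runs
    ref = np.median(X, axis=0)                 # consensus response profile
    keep = []
    for i, row in enumerate(X):
        if np.corrcoef(row, ref)[0, 1] >= r_min:
            keep.append(i)
    return keep
```
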

  16. An empirical comparison of different LDA methods in fMRI-based brain states decoding.

    PubMed

    Xia, Maogeng; Song, Sutao; Yao, Li; Long, Zhiying

    2015-01-01

    Decoding brain states from response patterns with multivariate pattern recognition techniques is a popular method for detecting multivoxel patterns of brain activation. These patterns are informative with respect to a subject's perceptual or cognitive states. Linear discriminant analysis (LDA) cannot be directly applied to fMRI data analysis because of the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Although several improved LDA methods have been used in fMRI-based decoding, little is known regarding the relative performance of different LDA classifiers on fMRI data. In this study, we compared five LDA classifiers using both simulated data with varied noise levels and real fMRI data. The compared LDA classifiers include LDA combined with PCA (LDA-PCA), LDA with three types of regularizations (identity matrix, diagonal matrix and scaled identity matrix) and LDA with optimal-shrinkage covariance estimator using the Ledoit and Wolf lemma (LDA-LW). The results indicated that LDA-LW was the most robust to noise. Moreover, LDA-LW and LDA with scaled identity matrix showed better stability and classification accuracy than the other methods. LDA-LW demonstrated the best overall performance. PMID:26405876
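
    A minimal two-class LDA with the pooled covariance shrunk toward a scaled identity, the family of regularizations compared above. Note the assumption of a fixed shrinkage intensity for brevity, whereas LDA-LW chooses it analytically from the data via the Ledoit-Wolf lemma.

```python
import numpy as np

def shrinkage_lda_fit(X, y, gamma=0.1):
    """Two-class LDA with shrunk pooled covariance:
    sigma = (1 - gamma) * S + gamma * (tr(S)/p) * I."""
    classes = np.unique(y)
    m0 = X[y == classes[0]].mean(axis=0)
    m1 = X[y == classes[1]].mean(axis=0)
    S = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    S = S / (len(X) - 2)                               # pooled within-class cov
    p = X.shape[1]
    sigma = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
    w = np.linalg.solve(sigma, m1 - m0)                # discriminant direction
    b = -0.5 * w @ (m0 + m1)                           # midpoint threshold
    return w, b, classes

def shrinkage_lda_predict(X, w, b, classes):
    return np.where(X @ w + b > 0, classes[1], classes[0])
```

    The shrinkage term keeps `sigma` invertible even when the number of features approaches or exceeds the number of samples, which is the "few samples and large features" regime the abstract describes.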

  17. An empirical comparison of different LDA methods in fMRI-based brain states decoding.

    PubMed

    Xia, Maogeng; Song, Sutao; Yao, Li; Long, Zhiying

    2015-01-01

    Decoding brain states from response patterns with multivariate pattern recognition techniques is a popular method for detecting multivoxel patterns of brain activation. These patterns are informative with respect to a subject's perceptual or cognitive states. Linear discriminant analysis (LDA) cannot be directly applied to fMRI data analysis because of the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Although several improved LDA methods have been used in fMRI-based decoding, little is known regarding the relative performance of different LDA classifiers on fMRI data. In this study, we compared five LDA classifiers using both simulated data with varied noise levels and real fMRI data. The compared LDA classifiers include LDA combined with PCA (LDA-PCA), LDA with three types of regularizations (identity matrix, diagonal matrix and scaled identity matrix) and LDA with optimal-shrinkage covariance estimator using the Ledoit and Wolf lemma (LDA-LW). The results indicated that LDA-LW was the most robust to noise. Moreover, LDA-LW and LDA with scaled identity matrix showed better stability and classification accuracy than the other methods. LDA-LW demonstrated the best overall performance.

  18. Temporal asymmetries in Interbank Market: an empirically grounded Agent-Based Model

    NASA Astrophysics Data System (ADS)

    Zlatic, Vinko; Popovic, Marko; Abraham, Hrvoje; Caldarelli, Guido; Iori, Giulia

    2014-03-01

    We analyse the changes in the topology of the structure of the e-MID interbank market in the period from September 1st 1999 to September 1st 2009. We uncover a type of temporal irreversibility in the growth of the largest component of the interbank trading network, which is not common to any of the usual network growth models. Such asymmetry, which is also detected in the growth of the clustering and reciprocity coefficients, reveals that the trading mechanism is driven by different dynamics at the beginning and at the end of the day. We are able to recover the complexity of the system by means of a simple Agent-Based Model in which the probability of matching between counterparties depends on a time-varying vertex fitness (or attractiveness) describing banks' liquidity needs. We show that temporal irreversibility is associated with heterogeneity in the banking system and emerges when the distribution of liquidity shocks across banks is broad. We acknowledge support from FET project FOC-II.
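
    The matching mechanism can be sketched as fitness-proportional random link formation. The functional form below (link probability proportional to the product of the two banks' fitnesses) and all names are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def simulate_day(fitness, n_rounds, rng):
    """One trading day: in each round a lender-borrower pair forms a link
    with probability proportional to the product of their fitnesses."""
    fitness = np.asarray(fitness, dtype=float)
    n = len(fitness)
    adj = np.zeros((n, n), dtype=int)
    p = np.outer(fitness, fitness)
    np.fill_diagonal(p, 0.0)              # no self-trading
    flat = p.ravel() / p.sum()
    for _ in range(n_rounds):
        k = rng.choice(n * n, p=flat)     # draw an ordered (lender, borrower) pair
        adj[k // n, k % n] = 1
    return adj
```

    Making `fitness` vary between the start and end of the day is what would generate the temporal asymmetry the abstract describes; a bank with large liquidity needs ends up at the centre of the trading network.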

  19. Empirical modeling of the fine particle fraction for carrier-based pulmonary delivery formulations.

    PubMed

    Pacławski, Adam; Szlęk, Jakub; Lau, Raymond; Jachowicz, Renata; Mendyk, Aleksander

    2015-01-01

    In vitro study of the deposition of drug particles is commonly used during development of formulations for pulmonary delivery. The assay is demanding and complex, and depends on the properties of the drug and carrier particles (including size, surface characteristics, and shape), the interactions between the drug and carrier particles, and the assay conditions (including flow rate, type of inhaler, and impactor). The aerodynamic properties of an aerosol are measured in vitro using impactors and in most cases are presented as the fine particle fraction, which is the mass percentage of drug particles with an aerodynamic diameter below 5 μm. In the present study, a model in the form of a mathematical equation was developed for prediction of the fine particle fraction. The feature selection was performed using the R-environment package "fscaret". The input vector was reduced from a total of 135 independent variables to 28. During the modeling stage, techniques like artificial neural networks, genetic programming, rule-based systems, and fuzzy logic systems were used. The 10-fold cross-validation technique was used to assess the generalization ability of the models created. The model obtained had good predictive ability, which was confirmed by a root-mean-square error and normalized root-mean-square error of 4.9 and 11%, respectively. Moreover, validation of the model using external experimental data was performed, and resulted in a root-mean-square error and normalized root-mean-square error of 3.8 and 8.6%, respectively.
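
    The 10-fold cross-validation with RMSE/NRMSE reporting described above can be sketched as follows, with ordinary least squares standing in (as an assumption) for the neural-network, genetic-programming, rule-based, and fuzzy learners actually compared.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error and its range-normalised percentage."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse, 100.0 * rmse / (y_true.max() - y_true.min())

def kfold_cv_rmse(X, y, k=10, seed=0):
    """k-fold cross-validation: score out-of-fold predictions with RMSE/NRMSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    preds = np.empty(len(y), dtype=float)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.c_[X[train], np.ones(len(train))]    # design matrix + intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        preds[fold] = np.c_[X[fold], np.ones(len(fold))] @ coef
    return nrmse(y, preds)
```
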

  20. Empirically based recommendations to support parents facing the dilemma of paediatric cadaver organ donation.

    PubMed

    Bellali, T; Papazoglou, I; Papadatou, D

    2007-08-01

    The aim of the study was to describe the challenges donor and non-donor parents encounter before, during, and after the organ donation decision, and to identify parents' needs and expectations from health care professionals. A further aim was to propose evidence-based recommendations for effectively introducing the option of donation, and supporting families through the grieving process. This study was undertaken as part of a larger research project investigating the experiences of Greek parents who consented or declined organ and tissue donation, using a qualitative methodology for data collection and analysis. The experiences of 22 Greek bereaved parents of 14 underage brain dead children were studied through semi-structured interviews. Parents' decision-making process was described as challenging and fraught with difficulties both before and after the donation period. Identified challenges were clustered into: (a) personal challenges, (b) conditions of organ request, and (c) interpersonal challenges. Parents' main concern following donation was the lack of information about transplantation outcomes. Findings led to a list of recommendations for nurses and other health professionals for approaching and supporting parents in making choices about paediatric organ donation that are appropriate to them, and for facilitating their adjustment to the sudden death of their underage child. PMID:17475498

  1. Photothermal Deflection Experiments: Comparison of Existing Theoretical Models and Their Applications to Characterization of -Based Thin Films

    NASA Astrophysics Data System (ADS)

    Korte, Dorota; Franko, Mladen

    2014-12-01

    A method for determination of the thermooptical, transport, and structural parameters of -based thin films is presented. The measurements were conducted using beam deflection spectroscopy (BDS), with supporting theoretical analysis performed in the framework of complex geometrical optics, providing a novel method of BDS data modeling. It was observed that the material's thermal parameters strongly depend on the sample properties that determine its photocatalytic activity, such as the energy bandgap, carrier lifetime, surface structure, or porosity. A procedure for fitting the theoretical dependence to the experimental data was therefore developed to determine the sample's thermal parameters, from which information about its structure was then derived. The obtained results were compared to those based on the geometrical and wave optics approaches that are currently widely used for this purpose. It was demonstrated that the choice of the proper model for data modeling is a crucial point in this type of analysis.

  2. An empirical approach to predicting long term behavior of metal particle based recording media

    NASA Technical Reports Server (NTRS)

    Hadad, Allan S.

    1991-01-01

    Alpha iron particles used for magnetic recording are prepared through a series of dehydration and reduction steps of alpha-Fe2O3-H2O, resulting in acicular, polycrystalline, body centered cubic (bcc) alpha-Fe particles that are single magnetic domains. Since fine iron particles are pyrophoric by nature, stabilization processes had to be developed for iron particles to be considered a viable recording medium for long-term archival (i.e., 25+ years) information storage. The primary means of establishing stability is passivation, or controlled oxidation, of the iron particle's surface. Since iron particles used for magnetic recording are small, additional oxidation has a direct impact on performance, especially where archival storage of recorded information for long periods of time is important. Further stabilization chemistry and processes had to be developed to guarantee that iron particles could be considered a viable long-term recording medium. In an effort to retard the diffusion of iron ions through the oxide layer, other elements such as silicon, aluminum, and chromium have been added to the base iron to promote denser scale formation, to alleviate some of the non-stoichiometric behavior of the oxide, or both. The presence of water vapor has been shown to disrupt the passive layer, subsequently increasing the oxidation rate of the iron. A study was undertaken to examine the degradation in magnetic properties as a function of both temperature and humidity on silicon-containing iron particles between 50-120 deg C and 3-89 percent relative humidity. The methodology by which experimental data were collected and analyzed, leading to predictive capability, is discussed.

  3. GIS-based analysis and modelling with empirical and remotely-sensed data on coastline advance and retreat

    NASA Astrophysics Data System (ADS)

    Ahmad, Sajid Rashid

    With the understanding that far more research remains to be done on the development and use of innovative and functional geospatial techniques and procedures to investigate coastline changes, this thesis focused on the integration of remote sensing, geographical information systems (GIS) and modelling techniques to provide meaningful insights on the spatial and temporal dynamics of coastline changes. One of the unique strengths of this research was the parameterization of the GIS with long-term empirical and remote sensing data. Annual empirical data from 1941--2007 were analyzed by the GIS, and then modelled with statistical techniques. Data were also extracted from Landsat TM and ETM+ images. The band ratio method was used to extract the coastlines. Topographic maps were also used to extract digital map data. All data incorporated into ArcGIS 9.2 were analyzed with various modules, including Spatial Analyst, 3D Analyst, and Triangulated Irregular Networks. The Digital Shoreline Analysis System was used to analyze and predict rates of coastline change. GIS results showed the spatial locations along the coast that will either advance or retreat over time. The linear regression results highlighted temporal changes which are likely to occur along the coastline. Box-Jenkins modelling procedures were utilized to determine statistical models which best described the time series (1941--2007) of coastline change data. After several iterations and goodness-of-fit tests, second-order spatial cyclic autoregressive models, first-order autoregressive models and autoregressive moving average models were identified as being appropriate for describing the deterministic and random processes operating in Guyana's coastal system. The models highlighted not only cyclical patterns in advance and retreat of the coastline, but also the existence of short and long-term memory processes.
Long-term memory processes could be associated with mudshoal propagation and stabilization while short
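As a minimal illustration of the autoregressive component of the Box-Jenkins models described above, an AR(1) model can be fitted by ordinary least squares on the lagged series. The data here are synthetic with a known coefficient, not the thesis's coastline measurements:

```python
import numpy as np

def fit_ar1(x):
    """Estimate c and phi of an AR(1) model x_t = c + phi * x_{t-1} + e_t
    by ordinary least squares on the lagged series."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    c, phi = coef
    return c, phi

# Synthetic annual "coastline position" series with known AR(1) structure.
rng = np.random.default_rng(0)
true_phi = 0.6
x = np.zeros(500)
for t in range(1, 500):
    x[t] = true_phi * x[t - 1] + rng.normal()

c_hat, phi_hat = fit_ar1(x)
print(round(phi_hat, 2))  # estimated phi; should be near 0.6 for this series
```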

  4. Developing Theoretically Based and Culturally Appropriate Interventions to Promote Hepatitis B Testing in 4 Asian American Populations, 2006–2011

    PubMed Central

    Bastani, Roshan; Glenn, Beth A.; Taylor, Victoria M.; Nguyen, Tung T.; Stewart, Susan L.; Burke, Nancy J.; Chen, Moon S.

    2014-01-01

    Introduction: Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Methods: Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Results: Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. Conclusions: The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors. PMID:24784908

  5. Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data.

    PubMed

    O'Toole, Alice J; Jiang, Fang; Abdi, Hervé; Pénard, Nils; Dunlop, Joseph P; Parent, Marc A

    2007-11-01

    The goal of pattern-based classification of functional neuroimaging data is to link individual brain activation patterns to the experimental conditions experienced during the scans. These "brain-reading" analyses advance functional neuroimaging on three fronts. From a technical standpoint, pattern-based classifiers overcome fatal flaws in the status quo inferential and exploratory multivariate approaches by combining pattern-based analyses with a direct link to experimental variables. In theoretical terms, the results that emerge from pattern-based classifiers can offer insight into the nature of neural representations. This shifts the emphasis in functional neuroimaging studies away from localizing brain activity toward understanding how patterns of brain activity encode information. From a practical point of view, pattern-based classifiers are already well established and understood in many areas of cognitive science. These tools are familiar to many researchers and provide a quantitatively sound and qualitatively satisfying answer to most questions addressed in functional neuroimaging studies. Here, we examine the theoretical, statistical, and practical underpinnings of pattern-based classification approaches to functional neuroimaging analyses. Pattern-based classification analyses are well positioned to become the standard approach to analyzing functional neuroimaging data.
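As a minimal, generic illustration of pattern-based classification (not any specific method from the work reviewed above), a nearest-centroid classifier with leave-one-out cross-validation on synthetic "activation patterns" might look like:

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, test_X):
    """Assign each test pattern to the class with the closest mean training pattern."""
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic patterns: 40 "voxels", two conditions whose mean activations differ.
rng = np.random.default_rng(1)
n_per, n_vox = 30, 40
shift = np.zeros(n_vox)
shift[:10] = 1.5  # condition B activates 10 voxels more strongly
X = np.vstack([rng.normal(0, 1, (n_per, n_vox)),
               rng.normal(0, 1, (n_per, n_vox)) + shift])
y = np.array([0] * n_per + [1] * n_per)

# Leave-one-out cross-validation accuracy.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    pred = nearest_centroid_predict(X[mask], y[mask], X[i:i + 1])
    correct += int(pred[0] == y[i])
print(correct / len(y))
```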

  6. Theoretical study on a tunable directional coupler filter based optical bistable device and its possible applications

    NASA Astrophysics Data System (ADS)

    Podoleanu, Adrian G.; Sala, Anca-Liliana; Ionescu, Liviu G.

    1994-04-01

    We theoretically analyze the behavior of a hybrid optical bistable device that uses a tunable directional coupler filter as a modulator. The device is shown to have a great potential for applications in optical computing and optical communications. The output intensity dependencies on different input parameters are plotted and their basic features are exploited in imaging applications such as optical logical gates and other optical circuits. The spectral dependence of the pulse response of the bistable device is emphasized, suggesting the design of a very sensitive wavelength sensor.

  7. Empirical assessment of theory for bankfull characteristics of alluvial channels

    NASA Astrophysics Data System (ADS)

    Trampush, S. M.; Huzurbazar, S.; McElroy, B.

    2014-12-01

    We compiled a data set of 541 bankfull measurements of alluvial rivers (see supporting information) and used Bayesian linear regression to examine empirical and theoretical support for the hypothesis that alluvial channels adjust to a predictable condition of basal shear stress as a function of sediment transport mode. An empirical closure based on channel slope, bankfull channel depth, and median grain size is proposed and results in the scaling of bankfull Shields stress with the inverse square root of particle Reynolds number. The empirical relationship is sufficient for purposes of quantifying paleohydraulic conditions in ancient alluvial channels. However, it is not currently appropriate for application to alluvial channels on extraterrestrial bodies because it depends on constant-valued, Earth-based coefficients.
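The quantities entering the proposed closure can be illustrated with the standard definitions of bankfull Shields stress (via the depth-slope product) and particle Reynolds number. The paper's fitted coefficients are not reproduced in the abstract, so the sketch below uses only the textbook forms, with illustrative channel values:

```python
import numpy as np

g = 9.81    # gravitational acceleration, m s^-2
R = 1.65    # submerged specific gravity of quartz sediment
nu = 1.0e-6 # kinematic viscosity of water, m^2 s^-1

def bankfull_shields_stress(depth_m, slope, d50_m):
    """Dimensionless bankfull Shields stress, tau* = H S / (R D50),
    assuming the depth-slope product for basal shear stress."""
    return depth_m * slope / (R * d50_m)

def particle_reynolds_number(d50_m):
    """Particle Reynolds number, Rep = sqrt(R g D50) * D50 / nu."""
    return np.sqrt(R * g * d50_m) * d50_m / nu

# Illustrative sand-bed channel: 2 m bankfull depth, slope 1e-4, D50 = 0.3 mm.
H, S, D50 = 2.0, 1e-4, 3.0e-4
print(bankfull_shields_stress(H, S, D50))
print(particle_reynolds_number(D50))
```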

  8. Psychoanalysis and empirical research: the example of alexithymia.

    PubMed

    Taylor, Graeme J; Bagby, R Michael

    2013-02-01

    An extensive body of research on the alexithymia construct is reviewed to show how various empirical methodologies can be used to evaluate the validity and increase our understanding of theoretical and clinically derived psychoanalytic concepts. The historical background of alexithymia and the theoretical framework in which the construct was formulated are presented, after which measurement- and experiment-based approaches to construct validation are described. This is followed by a review of empirical investigations that have yielded evidence that alexithymia is a dimensional personality trait associated with several illnesses of interest to psychoanalysts. Empirical research also supports clinical observations and impressions that individuals with high degrees of alexithymia principally employ primitive defenses, have a limited capacity for empathy, exhibit deficits in mentalization, and do not respond well to traditional interpretive psychotherapies. Also reviewed is empirical research that implicates genetic and environmental/developmental factors in the etiology of alexithymia, in particular childhood trauma and insecure attachments, factors generally associated with deficits in affect development and affect regulation. The clinical relevance of the empirical research findings is discussed in the final section. PMID:23343505

  9. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  10. Theoretical performance of solar cell based on mini-bands quantum dots

    SciTech Connect

    Aly, Abou El-Maaty M.; Nasr, A.

    2014-03-21

    The tremendous amount of research in solar energy is directed toward intermediate band solar cells, for their advantages compared with conventional solar cells. The latter have lower efficiency because photons with energy below the bandgap cannot excite mobile carriers from the valence band to the conduction band. If, on the other hand, a mini intermediate band is introduced between the valence and conduction bands, then smaller-energy photons can be used to promote charge carriers to the conduction band, increasing the total current while maintaining a large open circuit voltage. In this article, the influence of the new band on the power conversion efficiency of a quantum dot intermediate band solar cell structure is theoretically investigated. The time-independent Schrödinger equation is used to determine the optimum width and location of the intermediate band. Accordingly, the maximum efficiency achievable by changing the quantum dot width and barrier spacing is studied, and the power conversion efficiency is determined theoretically for two different ranges of QD width. From the obtained results, the maximum power conversion efficiency is about 70.42%, obtained for a simple cubic quantum dot crystal under fully concentrated light; it is strongly dependent on the quantum dot width and barrier spacing.
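As a generic sketch of the kind of time-independent Schrödinger calculation mentioned above (a dimensionless 1D finite square well, not the paper's 3D quantum-dot model), the even-parity bound-state energies can be found by bisection on the standard transcendental equation z tan z = sqrt(z0^2 - z^2), where the well-strength parameter z0 grows with well width and depth:

```python
import numpy as np

def even_state_energies(z0):
    """Dimensionless even-parity bound states of a 1D finite square well:
    solve z*tan(z) = sqrt(z0**2 - z**2) on each branch by bisection.
    Returns E/V0 for each state (E measured from the well bottom)."""
    def f(z):
        return z * np.tan(z) - np.sqrt(z0**2 - z**2)
    roots = []
    n = 0
    while True:
        lo = n * np.pi + 1e-9                       # tan > 0 on each branch
        hi = min(n * np.pi + np.pi / 2 - 1e-9, z0 - 1e-12)
        if lo >= hi:
            break
        if f(lo) * f(hi) < 0:
            for _ in range(100):                    # plain bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        n += 1
    return [(z / z0) ** 2 for z in roots]           # E/V0 = (z/z0)^2

# Larger z0 (deeper/wider wells) holds more bound states at lower relative
# energies, which is the qualitative dependence on dot width discussed above.
print(even_state_energies(4.0))
```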

  11. How does change occur following a theoretically based self-management intervention for type 2 diabetes.

    PubMed

    Steed, Liz; Barnard, Maria; Hurel, Steven; Jenkins, Catherine; Newman, Stanton

    2014-01-01

    The purpose of this study was to test the extent to which constructs from two theoretical models (self-regulatory theory and social cognitive theory) mediated change in outcomes following a self-management intervention. One hundred and twenty-four individuals with type 2 diabetes who had participated in a randomised controlled trial of a diabetes self-management programme were analysed for the extent to which illness beliefs and self-efficacy mediated change in self-management behaviours and illness-specific quality of life. Exercise-specific self-efficacy significantly mediated change in exercise at three months (B = .03; .01, p < .05), while monitoring-specific self-efficacy mediated change in monitoring behaviour at both three- (B = .04; .01, p < .01) and nine-month follow-up (B = 5.97; 1.01, p < .01). Belief in control over diabetes mediated change in illness-specific quality of life at three-month (B = -.07; .28, p < .05) and nine-month (B = .79; .28, p < .01) follow-ups, as well as change in exercise behaviour immediately post-intervention (B = -.12; .17, p < .05). Behaviour-specific self-efficacy may have a stronger role in mediating self-management behaviours than illness beliefs; however, belief in control over diabetes may be important to manipulate for change in quality of life. This suggests different theoretical constructs may mediate change depending on the outcome.
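Mediation of the kind tested above is often quantified with the product-of-coefficients approach (one standard method, not necessarily the exact analysis used in this study). A sketch on synthetic data, with hypothetical variable names, is:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic data: intervention -> self-efficacy (mediator) -> exercise (outcome).
rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n).astype(float)       # intervention vs control
self_eff = 0.8 * group + rng.normal(0, 1, n)      # mediator
exercise = 0.5 * self_eff + rng.normal(0, 1, n)   # outcome

a = ols(group.reshape(-1, 1), self_eff)[1]                # path X -> M
b = ols(np.column_stack([self_eff, group]), exercise)[1]  # path M -> Y given X
print(a * b)  # product-of-coefficients estimate of the mediated (indirect) effect
```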

  12. Similarity and rules: distinct? Exhaustive? Empirically distinguishable?

    PubMed

    Hahn, U; Chater, N

    1998-01-01

    The distinction between rule-based and similarity-based processes in cognition is of fundamental importance for cognitive science, and has been the focus of a large body of empirical research. However, intuitive uses of the distinction are subject to theoretical difficulties and their relation to empirical evidence is not clear. We propose a 'core' distinction between rule- and similarity-based processes, in terms of the way representations of stored information are 'matched' with the representation of a novel item. This explication captures the intuitively clear-cut cases of processes of each type, and resolves apparent problems with the rule/similarity distinction. Moreover, it provides a clear target for assessing the psychological and AI literatures. We show that many lines of psychological evidence are less conclusive than sometimes assumed, but suggest that converging lines of evidence may be persuasive. We then argue that the AI literature suggests that approaches which combine rules and similarity are an important new focus for empirical work.

  13. Empirical metallicity-dependent calibrations of effective temperature against colours for dwarfs and giants based on interferometric data

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Liu, X.-W.; Yuan, H.-B.; Xiang, M.-S.; Chen, B.-Q.; Zhang, H.-W.

    2015-12-01

    We present empirical metallicity-dependent calibrations of effective temperature against colours for dwarfs of luminosity classes IV and V and for giants of luminosity classes II and III, based on a collection from the literature of about two hundred nearby stars with direct effective temperature measurements of better than 2.5 per cent. The calibrations are valid for an effective temperature range 3100-10 000 K for dwarfs of spectral types M5 to A0 and 3100-5700 K for giants of spectral types K5 to G5. A total of 21 colours for dwarfs and 18 colours for giants of bands of four photometric systems, i.e. the Johnson (UBV R_J I_J JHK), the Cousins (R_C I_C), the Sloan Digital Sky Survey (gr) and the Two Micron All Sky Survey (JHK_s), have been calibrated. Restricted by the metallicity range of the current sample, the calibrations are mainly applicable for disc stars ([Fe/H] ≳ - 1.0). The normalized percentage residuals of the calibrations are typically 2.0 and 1.5 per cent for dwarfs and giants, respectively. Some systematic discrepancies at various levels are found between the current scales and those available in the literature (e.g. those based on the infrared flux method or spectroscopy). Based on the current calibrations, we have re-determined the colours of the Sun. We have also investigated the systematic errors in effective temperatures yielded by the current on-going large-scale low- to intermediate-resolution stellar spectroscopic surveys. We show that the calibration of colour (g - Ks) presented in this work provides an invaluable tool for the estimation of stellar effective temperature for those on-going or upcoming surveys.
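Calibrations of this kind are typically published as polynomial coefficients in colour and [Fe/H], often for theta_eff = 5040 K / Teff. The sketch below fits one common functional form to synthetic data; the coefficients and data are invented for illustration and are not the paper's:

```python
import numpy as np

def design(color, feh):
    """Terms of a common calibration form
    theta_eff = a0 + a1*X + a2*X^2 + a3*X*[Fe/H] + a4*[Fe/H] + a5*[Fe/H]^2,
    with theta_eff = 5040 K / Teff and X the colour index."""
    X, m = np.asarray(color), np.asarray(feh)
    return np.column_stack([np.ones_like(X), X, X**2, X * m, m, m**2])

# Synthetic (colour, [Fe/H], Teff) points generated from invented coefficients.
rng = np.random.default_rng(3)
color = rng.uniform(0.5, 3.0, 200)     # e.g. a (g - Ks)-like index
feh = rng.uniform(-1.0, 0.4, 200)      # disc-star metallicity range
true_a = np.array([0.5, 0.2, 0.0, 0.0, -0.05, 0.0])
theta = design(color, feh) @ true_a + rng.normal(0, 0.005, 200)
teff = 5040.0 / theta

# Least-squares fit of the calibration coefficients.
a_hat, *_ = np.linalg.lstsq(design(color, feh), 5040.0 / teff, rcond=None)
print(np.round(a_hat, 3))
```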

  14. The dappled nature of causes of psychiatric illness: replacing the organic–functional/hardware–software dichotomy with empirically based pluralism

    PubMed Central

    Kendler, KS

    2012-01-01

    Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind–brain system (brain = hardware, mind = software). These mutually re-enforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how ‘difference-makers’ (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness from molecular genetics through culture and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic–functional/hardware–software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but how the difference-makers for psychiatric illness are in fact distributed. PMID:22230881

  15. The dappled nature of causes of psychiatric illness: replacing the organic-functional/hardware-software dichotomy with empirically based pluralism.

    PubMed

    Kendler, K S

    2012-04-01

    Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually re-enforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness from molecular genetics through culture and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but how the difference-makers for psychiatric illness are in fact distributed.

  16. In silico structure-based screening of versatile P-glycoprotein inhibitors using polynomial empirical scoring functions.

    PubMed

    Shityakov, Sergey; Förster, Carola

    2014-01-01

    P-glycoprotein (P-gp) is an ATP (adenosine triphosphate)-binding cassette transporter that causes multidrug resistance of various chemotherapeutic substances by active efflux from mammalian cells. P-gp plays a pivotal role in limiting drug absorption and distribution in different organs, including the intestines and brain. Thus, the prediction of P-gp-drug interactions is of vital importance in assessing drug pharmacokinetic and pharmacodynamic properties. To find the strongest P-gp blockers, we performed an in silico structure-based screening of a P-gp inhibitor library (1,300 molecules) by the gradient optimization method, using polynomial empirical scoring (POLSCORE) functions. We report a strong correlation (r^2 = 0.80, F = 16.27, n = 6, P < 0.0157) of inhibition constants (K_i^exp or pK_i^exp; the experimental K_i or its negative decimal logarithm) converted from experimental IC50 (half-maximal inhibitory concentration) values with the POLSCORE-predicted constants (K_i^POLSCORE or pK_i^POLSCORE), using a linear regression fitting technique. The hydrophobic interactions between P-gp and the selected drug substances were detected as the main forces responsible for the inhibition effect. The results showed that this scoring technique might be useful in the virtual screening and filtering of databases of drug-like compounds at the early stage of drug development processes. PMID:24711707

  17. Combined magnetic and kinetic control of advanced tokamak steady state scenarios based on semi-empirical modelling

    NASA Astrophysics Data System (ADS)

    Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.

    2015-06-01

    This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.

  18. In silico structure-based screening of versatile P-glycoprotein inhibitors using polynomial empirical scoring functions.

    PubMed

    Shityakov, Sergey; Förster, Carola

    2014-01-01

    P-glycoprotein (P-gp) is an ATP (adenosine triphosphate)-binding cassette transporter that causes multidrug resistance of various chemotherapeutic substances by active efflux from mammalian cells. P-gp plays a pivotal role in limiting drug absorption and distribution in different organs, including the intestines and brain. Thus, the prediction of P-gp-drug interactions is of vital importance in assessing drug pharmacokinetic and pharmacodynamic properties. To find the strongest P-gp blockers, we performed an in silico structure-based screening of a P-gp inhibitor library (1,300 molecules) by the gradient optimization method, using polynomial empirical scoring (POLSCORE) functions. We report a strong correlation (r^2 = 0.80, F = 16.27, n = 6, P < 0.0157) of inhibition constants (K_i^exp or pK_i^exp; the experimental K_i or its negative decimal logarithm) converted from experimental IC50 (half-maximal inhibitory concentration) values with the POLSCORE-predicted constants (K_i^POLSCORE or pK_i^POLSCORE), using a linear regression fitting technique. The hydrophobic interactions between P-gp and the selected drug substances were detected as the main forces responsible for the inhibition effect. The results showed that this scoring technique might be useful in the virtual screening and filtering of databases of drug-like compounds at the early stage of drug development processes.
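The reported r^2 between experimental and POLSCORE-predicted pK_i values can be illustrated with a simple linear-regression fit. The six value pairs below are hypothetical, not the study's data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

# Hypothetical pK_i pairs (experimental vs POLSCORE-predicted) for six compounds.
pki_exp  = np.array([4.2, 4.9, 5.3, 5.8, 6.4, 7.1])
pki_pred = np.array([4.5, 4.7, 5.6, 5.7, 6.6, 6.9])
print(round(r_squared(pki_exp, pki_pred), 2))
```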

  19. Multi-fault diagnosis for rolling element bearings based on ensemble empirical mode decomposition and optimized support vector machines

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyuan; Zhou, Jianzhong

    2013-12-01

    This study presents a novel procedure based on ensemble empirical mode decomposition (EEMD) and optimized support vector machine (SVM) for multi-fault diagnosis of rolling element bearings. The vibration signal is adaptively decomposed into a number of intrinsic mode functions (IMFs) by EEMD. Two types of features, the EEMD energy entropy and the singular values of the matrix whose rows are the IMFs, are extracted. EEMD energy entropy is used to determine whether the bearing has faults or not. If the bearing has faults, the singular values are input to a multi-class SVM optimized by inter-cluster distance in the feature space (ICDSVM) to identify the fault type. The proposed method was tested on a system with an electric motor having two rolling bearings, covering 8 normal working conditions and 48 fault working conditions. Five groups of experiments were done to evaluate the effectiveness of the proposed method. The results show that the proposed method outperforms other methods, both those discussed in this paper and those published in the literature.
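The EEMD energy entropy used above as a fault indicator can be sketched directly from its definition: E_i = sum(imf_i^2), p_i = E_i / sum(E), H = -sum(p_i log p_i). The IMFs below are illustrative sinusoids; real IMFs would come from an EEMD of the vibration signal (e.g. via the PyEMD package):

```python
import numpy as np

def eemd_energy_entropy(imfs):
    """Shannon entropy of the energy distribution across IMFs: low entropy
    means energy concentrated in few IMFs, as with a localized bearing fault."""
    energies = np.array([np.sum(np.asarray(imf)**2) for imf in imfs])
    p = energies / energies.sum()
    return float(-np.sum(p * np.log(p)))

# Illustrative "IMFs" only; real ones would come from EEMD of the raw signal.
t = np.linspace(0, 1, 2048)
spread = [np.sin(2*np.pi*50*t), 0.9*np.sin(2*np.pi*12*t)]        # energy spread out
concentrated = [np.sin(2*np.pi*50*t), 0.1*np.sin(2*np.pi*12*t)]  # energy concentrated
print(eemd_energy_entropy(spread), eemd_energy_entropy(concentrated))
```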

  20. Empirically-Based Crop Insurance for China: A Pilot Study in the Down-middle Yangtze River Area of China

    NASA Astrophysics Data System (ADS)

    Wang, Erda; Yu, Yang; Little, Bertis B.; Chen, Zhongxin; Ren, Jianqiang

    Factors that caused slow growth in crop insurance participation and its ultimate failure in China were multi-faceted, including high agricultural production risk, low participation rates, inadequate public awareness, a high loss ratio, and insufficient and interrupted government financial support. Thus, a clear and present need for data-driven analyses and empirically based risk management exists in China. In the present investigation, agricultural production data for two crops (corn, rice) in five counties of Jiangxi and Hunan provinces were used to design a pilot crop insurance program for China. The program (1) provides 75% coverage, (2) reduces the farmer's premium rate by 55% compared with the catastrophic coverage most recently offered, and (3) uses the currently approved governmental premium subsidy level. Such a safety net for Chinese farmers, which would help maintain agricultural production at a level of self-sufficiency and cost less than half as much as current plans, requires one change to the program: ≥80% of producers in an area must participate.