Science.gov

Sample records for empirically based theoretical

  1. Distributed optical fiber-based theoretical and empirical methods monitoring hydraulic engineering subjected to seepage velocity

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Tian, Shiguang; Cui, Shusheng; Yang, Meng; Wen, Zhiping; Xie, Wei

    2016-09-01

    In order to systematically investigate general principles and methods for monitoring seepage velocity in hydraulic engineering, theoretical analysis and physical experiments were carried out based on distributed fiber-optic temperature sensing (DTS) technology. To analyze the coupling between the seepage field and the temperature field in embankment dam and dike engineering, a simplified model describing the coupling relationship between the two fields was constructed. Different optical-fiber arrangement schemes and temperature-measurement approaches were applied to the model, and inversion analysis was then employed. On this basis, a theoretical method for monitoring seepage velocity in hydraulic engineering was proposed. A new concept, the effective thermal conductivity, was introduced by analogy with the thermal conductivity coefficient of the transient hot-wire method. This quantity reflects the combined influence of heat conduction and seepage, and proved to be a promising basis for an empirical method of monitoring seepage velocity in hydraulic engineering.
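
    The transient hot-wire relation invoked above can be made concrete. For a line heat source dissipating power q per unit length, the temperature rise grows as ΔT ≈ (q/4πλ)·ln t, so an effective thermal conductivity follows from the slope of ΔT against ln t; stronger seepage carries heat away faster and inflates the apparent λ. A minimal sketch of that regression, not the authors' code, with purely illustrative numbers:

```python
import numpy as np

def effective_conductivity(t, dT, q):
    """Effective thermal conductivity (W/m/K) from a heated-fiber record,
    via the transient hot-wire relation:
    dT = (q / (4*pi*lam)) * ln(t) + const  =>  lam = q / (4*pi*slope)."""
    slope, _ = np.polyfit(np.log(t), dT, 1)
    return q / (4.0 * np.pi * slope)

# Illustrative record: 20 W/m of heating, sampled 10..600 s after switch-on.
t = np.linspace(10, 600, 60)
lam_true = 2.5                                   # hypothetical lambda_eff
dT = 20 / (4 * np.pi * lam_true) * np.log(t) \
     + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(f"lam_eff ~ {effective_conductivity(t, dT, 20):.2f} W/m/K")
```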

  2. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence.

    PubMed

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-10-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education.

  3. Retirement Adjustment: A Review of Theoretical and Empirical Advancements

    ERIC Educational Resources Information Center

    Wang, Mo; Henkens, Kene; van Solinge, Hanna

    2011-01-01

    In this article, we review both theoretical and empirical advancements in retirement adjustment research. After reviewing and integrating current theories about retirement adjustment, we propose a resource-based dynamic perspective to apply to the understanding of retirement adjustment. We then review empirical findings that are associated with…

  4. Theoretical and empirical bases for dialect-neutral language assessment: contributions from theoretical and applied linguistics to communication disorders.

    PubMed

    Pearson, Barbara Zurer

    2004-02-01

    Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) syntax, semantics, and phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups.

  5. Outcome (Competency) Based Education: An Exploration of Its Origins, Theoretical Basis, and Empirical Evidence

    ERIC Educational Resources Information Center

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…

  6. Theoretical and Empirical Base for Implementation Components of Health-Promoting Schools

    ERIC Educational Resources Information Center

    Samdal, Oddrun; Rowling, Louise

    2011-01-01

    Purpose: Efforts to create a scientific base for the health-promoting school approach have so far not articulated a clear "Science of Delivery". There is thus a need for systematic identification of clearly operationalised implementation components. To address a next step in the refinement of the health-promoting schools' work, this paper sets out…

  7. Pathways from parental AIDS to child psychological, educational and sexual risk: developing an empirically-based interactive theoretical model.

    PubMed

    Cluver, Lucie; Orkin, Mark; Boyes, Mark E; Sherr, Lorraine; Makasi, Daphne; Nikelo, Joy

    2013-06-01

    Increasing evidence demonstrates negative psychological, health, and developmental outcomes for children associated with parental HIV/AIDS illness and death. However, little is known about how parental AIDS leads to negative child outcomes. This study used a structural equation modelling approach to develop an empirically-based theoretical model of interactive relationships between parental or primary caregiver AIDS-illness, AIDS-orphanhood and predicted intervening factors associated with children's psychological distress, educational access and sexual health. Cross-sectional data were collected in 2009-2011 from 6002 children aged 10-17 years in three provinces of South Africa using stratified random sampling. Comparison groups included children orphaned by AIDS, orphaned by other causes and non-orphans, and children whose parents or primary caregivers were unwell with AIDS, unwell with other causes or healthy. Participants reported on psychological symptoms, educational access, and sexual health risks, as well as hypothesized sociodemographic and intervening factors. In order to build an interactive theoretical model of multiple child outcomes, multivariate regression and structural equation models were developed for each individual outcome, and then combined into an overall model. Neither AIDS-orphanhood nor parental AIDS-illness was directly associated with psychological distress, educational access, or sexual health. Instead, significant indirect effects of AIDS-orphanhood and parental AIDS-illness were obtained on all measured outcomes. Child psychological, educational and sexual health risks share a common set of intervening variables including parental disability, poverty, community violence, stigma, and child abuse that together comprise chain effects. In all models, parental AIDS-illness had stronger effects and more risk pathways than AIDS-orphanhood, especially via poverty and parental disability. AIDS-orphanhood and parental AIDS-illness thus impact children's psychological, educational and sexual health indirectly, through these intervening pathways.
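
    The "chain effects" reported here are, in the simplest observed-variable case, products of path coefficients: the indirect effect of a predictor through a mediator is the path into the mediator times the path from the mediator to the outcome. A toy sketch of one such mediated path (caregiver AIDS illness -> poverty -> psychological distress) on simulated data; the variable names and coefficients are hypothetical, and plain least squares stands in for the authors' latent-variable SEM:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6002
aids_ill = rng.integers(0, 2, n).astype(float)   # caregiver AIDS-unwell (0/1)
poverty = 0.5 * aids_ill + rng.normal(size=n)    # hypothesized mediator
distress = 0.4 * poverty + rng.normal(size=n)    # no direct effect built in

def coefs(y, *xs):
    """OLS slopes (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = coefs(poverty, aids_ill)[0]                  # illness -> poverty
b, direct = coefs(distress, poverty, aids_ill)   # poverty -> distress | illness
print(f"indirect (a*b) = {a * b:.3f}, direct = {direct:.3f}")  # direct ~ 0
```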

  8. Measuring Poverty: Theoretical and Empirical Considerations

    ERIC Educational Resources Information Center

    Iceland, John

    2005-01-01

    This article discusses the theoretical underpinnings of different types of income poverty measures--absolute, relative, and a National Academy of Sciences (NAS) "quasi-relative" one--and empirically assesses them by tracking their performance over time and across demographic groups. Part of the assessment involves comparing these measures to…

  9. Evidence-based stillbirth prevention strategies: combining empirical and theoretical paradigms to inform health planning and decision-making.

    PubMed

    King, Mary Lou; Aden, Amna; Tapa, Stephany; Jumah, Reem; Khan, Salma

    2014-08-01

    A global health project undertaken in Qatar on the Arabian Peninsula immersed undergraduate nursing students in hands-on learning to address the question: What strategies are effective in preventing stillbirth? Worldwide stillbirth estimates of 2.6 million per year and the high rate in the Eastern Mediterranean Region of 27 per 1,000 total births provided the stimulus for this inquiry. We used a dual empirical and theoretical approach that combined the principles of evidence-based practice and population health planning. Students were assisted in translating pre-appraised literature based on the 6S hierarchical pyramid of evidence. The PRECEDE-PROCEED (P-P) model served as an organizing template to assemble data extracted from the appraisal of 21 systematic literature reviews (with or without meta-analyses), 2 synopses of synthesized reports, and 9 individual studies summarizing stillbirth prevention strategies in low, middle, and high income countries. Consistent with elements of the P-P model, stillbirth prevention strategies were classified as social, epidemiological, educational, ecological, administrative, or policy. Ten recommendations with clear evidence of effectiveness in preventing stillbirth in low, middle, or high income countries were identified. Several other promising interventions were identified with weak, uncertain, or inconclusive evidence; these require further rigorous testing. Two complementary paradigms--evidence-based practice and an ecological population health program planning model--helped baccalaureate nursing students transfer research evidence into useable knowledge for practice. They learned the importance of comprehensive assessments and evidence-informed interventions. The multidimensional elements of the P-P model sensitized students to the complex interrelated factors influencing stillbirth and its prevention. © 2014 Sigma Theta Tau International.

  10. A theoretical and empirical investigation of nutritional label use.

    PubMed

    Drichoutis, Andreas C; Lazaridis, Panagiotis; Nayga, Rodolfo M; Kapsokefalou, Maria; Chryssochoidis, George

    2008-08-01

    Due in part to increasing diet-related health problems caused by, among other factors, obesity, nutritional labelling has been considered important, mainly because it can provide consumers with information they can use to make informed and healthier food choices. Several studies have focused on the empirical perspective of nutritional label use. None of these studies, however, has focused on developing a theoretical economic model that adequately describes nutritional label use within a utility-theoretic framework. We attempt to fill this void by developing a simple theoretical model of nutritional label use that incorporates the time a consumer spends reading labels as part of the food choice process. The demand equations of the model are then empirically tested. Results suggest the significant role of several variables that flow directly from the model and which, to our knowledge, have not been used in any previous empirical work.

  11. Analyzing Teaching Commitment: Theoretical and Empirical Dimensions.

    ERIC Educational Resources Information Center

    Tyree, Alexander K., Jr.

    Classical commitment studies are either sociologically oriented or based in the psychological empirical research tradition. A review of the literature reveals agreement on the multidimensionality and the contextual complexity of commitment, two principles which guide the hypotheses of the present study. This study uses the Administrator and…

  12. Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process

    ERIC Educational Resources Information Center

    Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.

    2014-01-01

    In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…

  13. An empirical evaluation of two theoretically-based hypotheses on the directional association between self-worth and hope.

    PubMed

    McDavid, Lindley; McDonough, Meghan H; Smith, Alan L

    2015-06-01

    Fostering self-worth and hope are important goals of positive youth development (PYD) efforts, yet intervention design is complicated by contrasting theoretical hypotheses regarding the directional association between these constructs. Therefore, within a longitudinal design we tested: (1) that self-worth predicts changes in hope (self-theory; Harter, 1999), and (2) that hope predicts changes in self-worth (hope theory; Snyder, 2002) over time. Youth (N = 321; mean age = 10.33 years) in a physical activity-based PYD program completed surveys 37-45 days prior to and on the second day and third-to-last day of the program. A latent variable panel model that included autoregressive and cross-lagged paths indicated that self-worth was a significant predictor of change in hope, but hope did not predict change in self-worth. Therefore, the directional association between self-worth and hope is better explained by self-theory, and PYD programs should aim to enhance perceptions of self-worth to build perceptions of hope.
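
    The competing hypotheses map onto the two cross-lagged paths of a panel model. A bare-bones illustration with two regressions on simulated two-wave data (generated so that self-worth drives change in hope, mirroring the reported result); this observed-variable toy stands in for the authors' latent-variable model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 321
sw1 = rng.normal(size=n)                         # self-worth, time 1
hope1 = 0.5 * sw1 + rng.normal(size=n)           # hope, time 1
hope2 = 0.6 * hope1 + 0.3 * sw1 + rng.normal(size=n)   # cross-lag from sw1
sw2 = 0.7 * sw1 + 0.0 * hope1 + rng.normal(size=n)     # no cross-lag back

def coefs(y, *xs):
    """OLS slopes (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("hope2 ~ hope1 + sw1 :", coefs(hope2, hope1, sw1))  # sw1 path nonzero
print("sw2   ~ sw1 + hope1 :", coefs(sw2, sw1, hope1))    # hope1 path ~ 0
```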

  14. Defining Empirically Based Practice.

    ERIC Educational Resources Information Center

    Siegel, Deborah H.

    1984-01-01

    Provides a definition of empirically based practice, both conceptually and operationally. Describes a study of how research and practice were integrated in the graduate social work program at the School of Social Service Administration, University of Chicago. (JAC)

  15. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit signals on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One active research area is finding the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals either through pure theoretical analysis or through testing with simulated data, which might be biased because real observation conditions can differ from theoretical predictions or simulations. In this paper, we propose an integrated theoretical and empirical method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show how ambiguity resolution (AR) performance changes with baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR performs better when the combined signals proposed in this paper are adopted and the baseline meets the length condition. PMID:27854324

  16. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit signals on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One active research area is finding the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals either through pure theoretical analysis or through testing with simulated data, which might be biased because real observation conditions can differ from theoretical predictions or simulations. In this paper, we propose an integrated theoretical and empirical method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show how ambiguity resolution (AR) performance changes with baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR performs better when the combined signals proposed in this paper are adopted and the baseline meets the length condition.
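
    The combined signals under study are integer linear combinations of the three carrier phases, with combined frequency i·f1 + j·f2 + k·f5 and wavelength λ = c / (i·f1 + j·f2 + k·f5); candidate (i, j, k) sets are then screened for wavelength, ionospheric amplification, and noise. A sketch of the wavelength part using the standard GPS frequencies; the example combinations are common wide-lane choices, not necessarily the optima selected in the paper:

```python
C = 299_792_458.0                                # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6     # GPS L1/L2/L5 carriers, Hz

def combined_wavelength(i, j, k):
    """Wavelength (m) of the carrier-phase combination i*L1 + j*L2 + k*L5."""
    return C / (i * F1 + j * F2 + k * F5)

# extra-wide-lane, wide-lane, and a medium-lane example
for ijk in [(0, 1, -1), (1, -1, 0), (1, 0, -1)]:
    print(ijk, f"{combined_wavelength(*ijk):7.3f} m")
```

    Longer combined wavelengths make the integer ambiguity easier to fix, which is why the roughly 5.9 m extra-wide-lane combination is typically resolved first.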

  17. Semivolatile organic compounds in homes: strategies for efficient and systematic exposure measurement based on empirical and theoretical factors.

    PubMed

    Dodson, Robin E; Camann, David E; Morello-Frosch, Rachel; Brody, Julia G; Rudel, Ruthann A

    2015-01-06

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)--phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R2 = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority.

  18. Semivolatile Organic Compounds in Homes: Strategies for Efficient and Systematic Exposure Measurement Based on Empirical and Theoretical Factors

    PubMed Central

    2014-01-01

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)—phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R2 = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority. PMID:25488487
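
    The partitioning logic that links the air and dust measurements can be written as a one-line equilibrium model: the dust/air partition coefficient scales with the octanol-air partition coefficient Koa and the organic-matter fraction of the dust. A hedged sketch of that relation (a standard first-order model, not the paper's fitted regression; f_om and the example inputs are placeholders):

```python
def dust_from_air(c_air, log_koa, f_om=0.3, rho_dust=1e6):
    """Toy equilibrium estimate of dust concentration (ng/g) from a gas-phase
    concentration (ng/m^3): C_dust ~ f_om * Koa * C_air / rho_dust,
    with rho_dust the dust density in g/m^3 (~1 g/cm^3)."""
    return f_om * 10.0 ** log_koa * c_air / rho_dust

# hypothetical phthalate-like SVOC: log Koa = 10, 5 ng/m^3 in indoor air
print(f"{dust_from_air(5.0, 10.0):.3g} ng/g")    # ~1.5e4 ng/g
```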

  1. Competence and drug use: theoretical frameworks, empirical evidence and measurement.

    PubMed

    Lindenberg, C S; Solorzano, R; Kelley, M; Darrow, V; Gendrop, S C; Strickland, O

    1998-01-01

    Statistics show that use of harmful substances (alcohol, cigarettes, marijuana, cocaine) among women of childbearing age is widespread and serious. Numerous theoretical models and empirical studies have attempted to explain the complex factors that lead individuals to use drugs. The Social Stress Model of Substance Abuse [1] is one model developed to explain parameters that influence drug use. According to the model, the likelihood of an individual engaging in drug use is seen as a function of the stress level and the extent to which it is offset by stress modifiers such as social networks, social competencies, and resources. The variables of the denominator are viewed as interacting with each other to buffer the impact of stress [1]. This article focuses on one of the constructs in this model: that of competence. It presents a summary of theoretical and conceptual formulations for the construct of competence and a review of empirical evidence for the association of competence with drug use, and describes the preliminary development of a multi-scale instrument designed to assess drug protective competence among low-income Hispanic childbearing women. Based upon theoretical and empirical studies, eight domains of drug protective competence were identified and conceptually defined. Using subscales from existing instruments with psychometric evidence for their validity and reliability, a multi-scale instrument was developed to assess drug protective competence. Hypothesis testing was used to assess construct validity. Four drug protective competence domains (social influence, sociability, self-worth, and control/responsibility) were found to be statistically associated with drug use behaviors. Although not statistically significant, expected trends were observed between drug use and the other four domains of drug protective competence (intimacy, nurturance, goal directedness, and spiritual directedness). Study limitations and suggestions for further psychometric testing are discussed.

  2. Gay identity, interpersonal violence, and HIV risk behaviors: an empirical test of theoretical relationships among a probability-based sample of urban men who have sex with men.

    PubMed

    Relf, Michael V; Huang, Bu; Campbell, Jacquelyn; Catania, Joe

    2004-01-01

    The highest absolute number of new HIV infections and AIDS cases still occur among men who have sex with men (MSM). Numerous theoretical approaches have been used to understand HIV risk behaviors among MSM; however, no theoretical model examines sexual risk behaviors in the context of gay identity and interpersonal violence. Using a model testing predictive correlational design, the theoretical relationships between childhood sexual abuse, adverse early life experiences, gay identity, substance use, battering, aversive emotions, HIV alienation, cue-to-action triggers, and HIV risk behaviors were empirically tested using confirmatory factor analysis and structural equation modeling. The relationships between these constructs are complex, yet childhood sexual abuse and gay identity were found to be theoretically associated with HIV risk behaviors. Also of importance, battering victimization was identified as a key mediating variable between childhood sexual abuse, gay identity, and adverse early life experiences and HIV risk behaviors among urban MSM.

  3. Empirical and theoretical analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan

    …structures evolve on a similar timescale to individual-level transmission, we investigated the process of transmission through a model population comprising social groups which follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power-law test algorithm, we have developed a fast testing procedure using parallel computation.
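
    The "widely accepted power-law test algorithm" is presumably the Clauset-Shalizi-Newman procedure; its core step, the continuous maximum-likelihood estimate of the exponent, is a one-liner, while the expensive part that benefits from parallel computation is the bootstrapped goodness-of-fit built around it. A minimal sketch of the MLE step only:

```python
import numpy as np

def powerlaw_alpha(x, xmin):
    """Continuous MLE: alpha = 1 + n / sum(ln(x_i / xmin)), for x_i >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

rng = np.random.default_rng(2)
x = rng.pareto(1.5, 100_000) + 1.0               # true tail exponent 2.5
print(f"alpha ~ {powerlaw_alpha(x, xmin=1.0):.3f}")   # expect ~2.5
```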

  4. Developing Empirically Based Models of Practice.

    ERIC Educational Resources Information Center

    Blythe, Betty J.; Briar, Scott

    1985-01-01

    Over the last decade emphasis has shifted from theoretically based models of practice to empirically based models whose elements are derived from clinical research. These models are defined and a developing model of practice through the use of single-case methodology is examined. Potential impediments to this new role are identified. (Author/BL)

  5. Discrepancies between theoretical and empirical models of the flaring solar chromosphere and their possible resolution

    NASA Technical Reports Server (NTRS)

    Emslie, A. G.; Brown, J. C.; Machado, M. E.

    1981-01-01

    Possible sources of pronounced discrepancy between empirical and theoretical models of the solar chromosphere during flares are discussed. It is noted that a principal source of uncertainty in empirical models is the inhomogeneity of the spectral data on which they are based. With theoretical models, probably the most important source of error is neglect of the radiative coupling of upper and lower chromospheric regions. A new procedure for studying flare energy input is suggested wherein the required input is derived from the empirical model chromosphere. This procedure is applied to the electron-heated case, and it is found that the integral equation defining the flare energy deposition rate can be inverted analytically to yield the injected electron flux energy spectrum from knowledge of the energy balance in the empirical atmosphere. Recent empirical model results are analyzed in this manner, and the calculated injected electron flux spectrum is compared with that needed for hard X-ray bursts in moderately large flares.

  6. Cognitive culture: theoretical and empirical insights into social learning strategies.

    PubMed

    Rendell, Luke; Fogarty, Laurel; Hoppitt, William J E; Morgan, Thomas J H; Webster, Mike M; Laland, Kevin N

    2011-02-01

    Research into social learning (learning from others) has expanded significantly in recent years, not least because of productive interactions between theoretical and empirical approaches. This has been coupled with a new emphasis on learning strategies, which places social learning within a cognitive decision-making framework. Understanding when, how and why individuals learn from others is a significant challenge, but one that is critical to numerous fields in multiple academic disciplines, including the study of social cognition. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Kinetics of solute adsorption at solid/solution interfaces: a theoretical development of the empirical pseudo-first and pseudo-second order kinetic rate equations, based on applying the statistical rate theory of interfacial transport.

    PubMed

    Rudzinski, Wladyslaw; Plazinski, Wojciech

    2006-08-24

    For practical applications of solid/solution adsorption processes, the kinetics of these processes is at least as essential as their features at equilibrium. Meanwhile, the general understanding of this kinetics and its theoretical description are far behind the understanding and level of theoretical interpretation of adsorption equilibria in these systems. The Lagergren empirical equation, proposed at the end of the 19th century to describe the kinetics of solute sorption at solid/solution interfaces, has remained the most widely used kinetic equation. This equation has also been called the pseudo-first-order kinetic equation because it was intuitively associated with a model of one-site-occupancy adsorption kinetics governed by the rate of surface reaction. More recently, its generalization to two-site-occupancy adsorption was proposed and called the pseudo-second-order kinetic equation. However, the general use and wide applicability of these empirical equations for more than a century have not resulted in a corresponding fundamental search for their theoretical origin. Here the first theoretical development of these equations is proposed, based on applying the Statistical Rate Theory, a new fundamental approach to the kinetics of interfacial transport. It is shown that these empirical equations are simplified forms of a more general equation developed here for the case when adsorption kinetics is governed by the rate of surface reactions. The features of that general equation are shown through exhaustive model investigations, and its applicability is tested by a quantitative analysis of experimental data reported in the literature.
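
    Both empirical rate laws have closed forms, which makes the comparison with data concrete: pseudo-first-order (Lagergren) gives q(t) = qe·(1 − exp(−k1·t)), and pseudo-second-order gives q(t) = qe²·k2·t / (1 + qe·k2·t). A short sketch fitting both to a synthetic uptake curve (illustrative data, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order (Lagergren): dq/dt = k1*(qe - q)."""
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    """Pseudo-second-order: dq/dt = k2*(qe - q)**2."""
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

t = np.linspace(0, 120, 25)                      # contact time, min
q = pso(t, 8.0, 0.01) + 0.05 * np.random.default_rng(3).normal(size=t.size)

for name, model in [("PFO", pfo), ("PSO", pso)]:
    p, _ = curve_fit(model, t, q, p0=[8.0, 0.01])
    sse = np.sum((model(t, *p) - q) ** 2)
    print(f"{name}: qe={p[0]:.2f}, k={p[1]:.4f}, SSE={sse:.3f}")
```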

  8. Empirical STORM-E Model: I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 μm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 μm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 μm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 μm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
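
    Linear impulse-response fitting of the kind described reduces, in discrete form, to convolving a geomagnetic index history with a filter and adding a quiet-time baseline. A schematic sketch only; the filter shape, index values, and coefficients below are invented for illustration and are not the STORM-E fit:

```python
import numpy as np

def storm_ratio(ap, g, baseline=1.0):
    """Storm-to-quiet ratio as baseline + causal convolution of ap with g."""
    return baseline + np.convolve(ap, g, mode="full")[: ap.size]

hours = np.arange(48)
ap = np.where((hours > 10) & (hours < 20), 120.0, 5.0)  # toy storm interval
g = 0.002 * np.exp(-np.arange(12) / 4.0)                # decaying response
print(np.round(storm_ratio(ap, g), 2))                  # rises, then relaxes
```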

  9. Segmented crystalline scintillators: empirical and theoretical investigation of a high quantum efficiency EPID based on an initial engineering prototype CsI(Tl) detector.

    PubMed

    Sawant, Amit; Antonuk, Larry E; El-Mohri, Youcef; Zhao, Qihua; Wang, Yi; Li, Yixin; Du, Hong; Perna, Louis

    2006-04-01

    Modern-day radiotherapy relies on highly sophisticated forms of image guidance in order to implement increasingly conformal treatment plans and achieve precise dose delivery. One of the most important goals of such image guidance is to delineate the clinical target volume from surrounding normal tissue during patient setup and dose delivery, thereby avoiding dependence on surrogates such as bony landmarks. In order to achieve this goal, it is necessary to integrate highly efficient imaging technology, capable of resolving soft-tissue contrast at very low doses, within the treatment setup. In this paper we report on the development of one such modality, which comprises a nonoptimized, prototype electronic portal imaging device (EPID) based on a 40 mm thick, segmented crystalline CsI(Tl) detector incorporated into an indirect-detection active matrix flat panel imager (AMFPI). The segmented detector consists of a matrix of 160 x 160 optically isolated, crystalline CsI(Tl) elements spaced at 1016 μm pitch. The detector was coupled to an indirect detection-based active matrix array having a pixel pitch of 508 μm, with each detector element registered to 2 x 2 array pixels. The performance of the prototype imager was evaluated under very low-dose radiotherapy conditions and compared to that of a conventional megavoltage AMFPI based on a Lanex Fast-B phosphor screen. Detailed quantitative measurements were performed in order to determine the x-ray sensitivity, modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE). In addition, images of a contrast-detail phantom and an anthropomorphic head phantom were also acquired. The prototype imager exhibited approximately 22 times higher zero-frequency DQE (approximately 22%) compared to that of the conventional AMFPI (approximately 1%). The measured zero-frequency DQE was found to be lower than theoretical upper limits (approximately 27%) calculated from Monte Carlo simulations, which…
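
    The zero-frequency DQE quoted above is the squared ratio of output to input signal-to-noise ratio, DQE(0) = (SNR_out / SNR_in)²; the frequency-dependent form additionally folds in the MTF and noise power spectrum. A two-line worked check of the quoted ~22x gap (the SNR numbers are illustrative, chosen only to reproduce the reported DQE values):

```python
def dqe_zero(snr_out, snr_in):
    """Zero-frequency detective quantum efficiency."""
    return (snr_out / snr_in) ** 2

print(dqe_zero(4.7, 10.0))   # ~0.22: segmented CsI(Tl) prototype
print(dqe_zero(1.0, 10.0))   # ~0.01: conventional phosphor-screen AMFPI
```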

  10. Collective behavior in animal groups: theoretical models and empirical studies

    PubMed Central

    Giardina, Irene

    2008-01-01

    Collective phenomena in animal groups have attracted much attention in recent years, becoming one of the hottest topics in ethology. There are various reasons for this. On the one hand, animal grouping provides a paradigmatic example of self-organization, where collective behavior emerges in the absence of centralized control. The mechanism of group formation, where local rules for the individuals lead to a coherent global state, is very general and transcends the detailed nature of its components. In this respect, collective animal behavior is a subject of great interdisciplinary interest. On the other hand, there are several important issues related to the biological function of grouping and its evolutionary success. Research in this field boasts a number of theoretical models, but far fewer empirical results to compare them with. For this reason, even if the general mechanisms through which self-organization is achieved are qualitatively well understood, a quantitative test of the models' assumptions is still lacking. New analyses of large groups, which require sophisticated technological procedures, can provide the necessary empirical data. PMID:19404431

  11. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    Over recent decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory of the significance of dreaming. In metapsychology, dreaming has become more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms argued, respectively, against and for the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising a more secure theoretical base for contemporary psychoanalytic practice. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation.

  12. Discrepancies between empirical and theoretical models of the flaring solar chromosphere and their possible resolution

    NASA Technical Reports Server (NTRS)

    Emslie, A. G.; Brown, J. C.; Machado, M. E.

    1980-01-01

    Models of the solar chromosphere during flaring deduced theoretically or empirically are compared. Marked discrepancies are noted and various reasons are offered to explain their existence. A means is presented for testing theoretical heating models (electron heating) by analyzing the net energy loss rates in (observed) empirical atmospheres and inverting the flare energy equation to deduce the parameters of the supposed heating mechanism.

  13. Whole-body cryotherapy: empirical evidence and theoretical perspectives.

    PubMed

    Bleakley, Chris M; Bieuzen, François; Davison, Gareth W; Costello, Joseph T

    2014-01-01

    Whole-body cryotherapy (WBC) involves short exposures to air temperatures below -100°C. WBC is increasingly accessible to athletes, and is purported to enhance recovery after exercise and facilitate rehabilitation postinjury. Our objective was to review the efficacy and effectiveness of WBC using empirical evidence from controlled trials. We found ten relevant reports; the majority were based on small numbers of active athletes aged less than 35 years. Although WBC produces a large temperature gradient for tissue cooling, the relatively poor thermal conductivity of air prevents significant subcutaneous and core body cooling. There is weak evidence from controlled studies that WBC enhances antioxidant capacity and parasympathetic reactivation, and alters inflammatory pathways relevant to sports recovery. A series of small randomized studies found WBC offers improvements in subjective recovery and muscle soreness following metabolic or mechanical overload, but little benefit towards functional recovery. There is evidence from one study only that WBC may assist rehabilitation for adhesive capsulitis of the shoulder. There were no adverse events associated with WBC; however, studies did not seem to undertake active surveillance of predefined adverse events. Until further research is available, athletes should remain cognizant that less expensive modes of cryotherapy, such as local ice-pack application or cold-water immersion, offer comparable physiological and clinical effects to WBC.

  14. Whole-body cryotherapy: empirical evidence and theoretical perspectives

    PubMed Central

    Bleakley, Chris M; Bieuzen, François; Davison, Gareth W; Costello, Joseph T

    2014-01-01

    Whole-body cryotherapy (WBC) involves short exposures to air temperatures below −100°C. WBC is increasingly accessible to athletes, and is purported to enhance recovery after exercise and facilitate rehabilitation postinjury. Our objective was to review the efficacy and effectiveness of WBC using empirical evidence from controlled trials. We found ten relevant reports; the majority were based on small numbers of active athletes aged less than 35 years. Although WBC produces a large temperature gradient for tissue cooling, the relatively poor thermal conductivity of air prevents significant subcutaneous and core body cooling. There is weak evidence from controlled studies that WBC enhances antioxidant capacity and parasympathetic reactivation, and alters inflammatory pathways relevant to sports recovery. A series of small randomized studies found WBC offers improvements in subjective recovery and muscle soreness following metabolic or mechanical overload, but little benefit towards functional recovery. There is evidence from one study only that WBC may assist rehabilitation for adhesive capsulitis of the shoulder. There were no adverse events associated with WBC; however, studies did not seem to undertake active surveillance of predefined adverse events. Until further research is available, athletes should remain cognizant that less expensive modes of cryotherapy, such as local ice-pack application or cold-water immersion, offer comparable physiological and clinical effects to WBC. PMID:24648779

  15. Integrative Behavioral Couple Therapy: Theoretical Background, Empirical Research, and Dissemination.

    PubMed

    Roddy, McKenzie K; Nowlan, Kathryn M; Doss, Brian D; Christensen, Andrew

    2016-09-01

    Integrative Behavioral Couple Therapy (IBCT), developed by Drs. Andrew Christensen and Neil Jacobson, builds off the tradition of behavioral couple therapy by including acceptance strategies as key components of treatment. Results from a large randomized clinical trial of IBCT indicate that it yields large and significant gains in relationship satisfaction. Furthermore, these benefits have been shown to persist for at least 5 years after treatment for the average couple. Not only does IBCT positively impact relationship constructs such as satisfaction and communication, but the benefits of therapy extend to individual, co-parenting, and child functioning. Moreover, IBCT has been shown to operate through the putative mechanisms of improvements in emotional acceptance, behavior change, and communication. IBCT was chosen for nationwide training and dissemination through the Veteran Affairs Medical Centers. Furthermore, the principles of IBCT have been translated into a web-based intervention for distressed couples, OurRelationship.com. IBCT is continuing to evolve and grow as research and technologies allow for continued evaluation and dissemination of this well-supported theoretical model.

  16. [Attachment theory and eating disorders--theoretical and empirical issues].

    PubMed

    Józefik, Barbara

    2008-01-01

    The paper presents attachment theory in relation to eating disorders. In the first part, the classic concepts of anorexia and bulimia nervosa are discussed, taking into account the assumptions of Bowlby's and his followers' model. In the second part, empirical data on anorexia and bulimia nervosa and attachment patterns are presented. The importance of methodological issues regarding the attachment model, particularly in eating disorders, is stressed. In conclusion, significant findings on the correlation between attachment patterns and eating disorders are indicated.

  17. An empirical investigation of theoretical loss and gambling intensity.

    PubMed

    Auer, Michael; Griffiths, Mark D

    2014-12-01

    Many recent studies of internet gambling, particularly those that have analysed behavioural tracking data, have used variables such as 'bet size' and 'number of games played' as proxy measures for 'gambling intensity'. In this paper it is argued that the most stable and reliable measure of 'gambling intensity' is the 'theoretical loss' (the product of total bet size and house advantage). In the long run, the theoretical loss corresponds to the Gross Gaming Revenue generated by commercial gaming operators. For shorter periods of time, theoretical loss is the most stable measure of gambling intensity as it is not distorted by gamblers' occasional wins. Even for single bets, the theoretical loss reflects the amount a player is willing to risk. Using behavioural tracking data from 100,000 players who played online casino, lottery and/or poker games, this paper also demonstrates that bet size does not equate to or explain theoretical loss, as it does not take into account the house advantage. This lack of accuracy is even more pronounced for gamblers who play a variety of games.
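
    The quantity argued for is directly computable: theoretical loss is stake times house advantage, summed over bets. A sketch showing why total bet size alone misranks "gambling intensity" across games; the house edges below are illustrative, since actual values vary by game and operator:

```python
# Illustrative house edges (fraction of stake the operator keeps on average).
HOUSE_EDGE = {"lottery": 0.50, "slots": 0.05, "blackjack": 0.005}

def theoretical_loss(bets):
    """bets: iterable of (game, stake) pairs; returns expected loss."""
    return sum(stake * HOUSE_EDGE[game] for game, stake in bets)

low_stake_lottery = [("lottery", 10.0)] * 10          # total bet: 100
high_stake_blackjack = [("blackjack", 100.0)] * 10    # total bet: 1000
print(theoretical_loss(low_stake_lottery))            # 50.0
print(theoretical_loss(high_stake_blackjack))         # 5.0: 10x the bet size,
                                                      # 1/10th the expected loss
```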

  18. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framework is outlined for estimating social returns from research on and application of remote sensing. An approximate dollar magnitude is given for a particular application of remote sensing, namely production estimates for corn, soybeans, and wheat. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  19. Alternative Information Theoretic Measures of Television Messages: An Empirical Test.

    ERIC Educational Resources Information Center

    Danowski, James A.

    This research examines two information theoretic measures of media exposure within the same sample of respondents and examines their relative strengths in predicting self-reported aggression. The first measure is the form entropy (DYNUFAM) index of Watt and Krull, which assesses the structural and organizational properties of specific television…

  1. Unifying Different Theories of Learning: Theoretical Framework and Empirical Evidence

    ERIC Educational Resources Information Center

    Phan, Huy Phuong

    2008-01-01

    The main aim of this research study was to test out a conceptual model encompassing the theoretical frameworks of achievement goals, study processing strategies, effort, and reflective thinking practice. In particular, it was postulated that the causal influences of achievement goals on academic performance are direct and indirect through study…

  2. Factors of Motivation in Young Children: Theoretical and Empirical.

    ERIC Educational Resources Information Center

    Adkins, Dorothy C.; Ballif, Bonnie L.

    The construction of Gumpgookies, a test for measuring motivation of young children to achieve in school, is discussed. The test is rooted in a theoretical framework, which conceives of five constituents of motivation to achieve: (1) affective; (2) conceptual; (3) purposive; (4) cognitive; and (5) evaluative. Factor analysis and a type of cluster…

  3. Transdiagnostic models of anxiety disorder: Theoretical and empirical underpinnings.

    PubMed

    Norton, Peter J; Paulus, Daniel J

    2017-08-01

    Despite the increasing development, evaluation, and adoption of transdiagnostic cognitive behavioral therapies, relatively little has been written to detail the conceptual and empirical psychopathology framework underlying transdiagnostic models of anxiety and related disorders. In this review, the diagnostic, genetic, neurobiological, developmental, behavioral, cognitive, and interventional data underlying the model are described, with an emphasis on highlighting elements that both support and contradict transdiagnostic conceptualizations. Finally, a transdiagnostic model of anxiety disorder is presented and key areas of future evaluation and refinement are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The growth of business firms: theoretical framework and empirical evidence.

    PubMed

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S V; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H Eugene

    2005-12-27

    We introduce a model of proportional growth to explain the distribution P_g(g) of business-firm growth rates. The model predicts that P_g(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
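
    The flavor of the proportional-growth prediction is easy to reproduce numerically: treat each firm as a bundle of units hit by independent multiplicative shocks, and examine the distribution of log growth rates g = ln(S1/S0); heterogeneity in the number of units produces a tent-shaped (Laplace-like) body with heavier tails. A toy simulation under those assumptions, not the authors' calibrated model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_firms = 20_000
k = rng.geometric(0.05, n_firms)                 # units per firm, heterogeneous

g = np.empty(n_firms)
for i, ki in enumerate(k):
    u0 = rng.lognormal(0.0, 1.0, ki)             # initial unit sizes
    u1 = u0 * rng.lognormal(0.0, 0.3, ki)        # i.i.d. multiplicative shocks
    g[i] = np.log(u1.sum() / u0.sum())

# Few-unit firms dominate the tails; many-unit firms form the narrow body.
print(np.quantile(g, [0.01, 0.25, 0.50, 0.75, 0.99]))
```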

  5. Psychometric Test Theory and Cognitive Processes: A Theoretical Scrutiny and Empirical Research. Research Bulletin No. 57.

    ERIC Educational Resources Information Center

    Leino, Jarkko

    This report is the third in a series of research projects concerning abilities and performance processes, particularly in school mathematics. A theoretical scrutiny of traditional psychometric testing, cognitive processes, their interrelationships, and an empirical application of the theoretical considerations on the level of junior secondary…

  6. Recent work on consciousness: philosophical, theoretical, and empirical.

    PubMed

    Churchland, P M; Churchland, P S

    1997-06-01

    Broad-spectrum philosophical resistance to physicalist accounts of conscious awareness has condensed around a single and clearly identified line of argument. Philosophical analysis and criticism of that line of argument has also begun to crystallize. The nature of that criticism coheres with certain theoretical ideas from cognitive neuroscience that attempt to address both the existence and the contents of consciousness. Also, experimental evidence has recently begun to emerge that will serve both to constrain and to inspire such theorizing. The present article attempts to summarize the situation.

  7. Adaptive evolution: evaluating empirical support for theoretical predictions

    PubMed Central

    Olson-Manning, Carrie F.; Wagner, Maggie R.; Mitchell-Olds, Thomas

    2013-01-01

    Adaptive evolution is shaped by the interaction of population genetics, natural selection and underlying network and biochemical constraints. Variation created by mutation, the raw material for evolutionary change, is translated into phenotypes by flux through metabolic pathways and by the topography and dynamics of molecular networks. Finally, the retention of genetic variation and the efficacy of selection depend on population genetics and demographic history. Emergent high-throughput experimental methods and sequencing technologies allow us to gather more evidence and to move beyond the theory in different systems and populations. Here we review the extent to which recent evidence supports long-established theoretical principles of adaptation. PMID:23154809

  8. The ascent of man: Theoretical and empirical evidence for blatant dehumanization.

    PubMed

    Kteily, Nour; Bruneau, Emile; Waytz, Adam; Cotterill, Sarah

    2015-11-01

    Dehumanization is a central concept in the study of intergroup relations. Yet although theoretical and methodological advances in subtle, "everyday" dehumanization have progressed rapidly, blatant dehumanization remains understudied. The present research attempts to refocus theoretical and empirical attention on blatant dehumanization, examining when and why it provides explanatory power beyond subtle dehumanization. To accomplish this, we introduce and validate a blatant measure of dehumanization based on the popular depiction of evolutionary progress in the "Ascent of Man." We compare blatant dehumanization to established conceptualizations of subtle and implicit dehumanization, including infrahumanization, perceptions of human nature and human uniqueness, and implicit associations between ingroup-outgroup and human-animal concepts. Across 7 studies conducted in 3 countries, we demonstrate that blatant dehumanization is (a) more strongly associated with individual differences in support for hierarchy than subtle or implicit dehumanization, (b) uniquely predictive of numerous consequential attitudes and behaviors toward multiple outgroup targets, (c) predictive above prejudice, and (d) reliable over time. Finally, we show that blatant, but not subtle, dehumanization spikes immediately after incidents of real intergroup violence and strongly predicts support for aggressive actions like torture and retaliatory violence (after the Boston Marathon bombings and Woolwich attacks in England). This research extends theory on the role of dehumanization in intergroup relations and intergroup conflict and provides an intuitive, validated empirical tool to reliably measure blatant dehumanization. (c) 2015 APA, all rights reserved.

  9. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect

    Weimar, M.

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides a summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.

  10. Input Manipulation, Enhancement and Processing: Theoretical Views and Empirical Research

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2016-01-01

    Researchers in the field of instructed second language acquisition have been examining the issue of how learners interact with input by conducting research measuring particular kinds of instructional interventions (input-oriented and meaning-based). These interventions include such things as input flood, textual enhancement and processing…

  11. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
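
    The crossover described here can be illustrated with a toy wall-clock model. The sketch below is a minimal Python illustration, not the validated models from the paper: the parameter values, the shared-link saturation rule, and the fixed per-wave scheduling overhead are all assumptions.

```python
import math

def nfs_wall_clock(n_jobs, data_per_job_gb, compute_s_per_job,
                   cores, shared_bandwidth_gbps):
    # Every job pulls its input over one shared NFS link, so transfer time
    # grows with the *total* data volume and the link can saturate.
    transfer_s = n_jobs * data_per_job_gb * 8 / shared_bandwidth_gbps
    compute_s = n_jobs * compute_s_per_job / cores  # ideal core packing
    return max(transfer_s, compute_s)               # bound by the slower stage

def data_local_wall_clock(n_jobs, compute_s_per_job, cores, overhead_s=30.0):
    # Storage is co-located with computation (Hadoop/HBase style), so data
    # transfer is negligible; a per-wave scheduling overhead is charged instead.
    waves = math.ceil(n_jobs / cores)
    return waves * (compute_s_per_job + overhead_s)

for t in (60, 600, 6000):  # "short" vs "long" per-job compute time, seconds
    nfs = nfs_wall_clock(500, 3.0, t, cores=72, shared_bandwidth_gbps=10)
    local = data_local_wall_clock(500, t, cores=72)
    print(f"compute={t:>4}s/job  NFS={nfs:8.0f}s  data-local={local:8.0f}s")
```

    With these made-up numbers, the shared-storage cluster is transfer-bound for short jobs and compute-bound for long ones, which is the qualitative transition from "large scale" to "big data" that the abstract describes.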

  12. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  13. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  14. Mapping spatial frames of reference onto time: a review of theoretical accounts and empirical findings.

    PubMed

    Bender, Andrea; Beller, Sieghard

    2014-09-01

    When speaking and reasoning about time, people around the world tend to do so with vocabulary and concepts borrowed from the domain of space. This raises the question of whether the cross-linguistic variability found for spatial representations, and the principles on which these are based, may also carry over to the domain of time. Real progress in addressing this question presupposes a taxonomy for the possible conceptualizations in one domain and its consistent and comprehensive mapping onto the other, a challenge that has been taken up only recently and is far from reaching consensus. This article aims at systematizing the theoretical and empirical advances in this field, with a focus on accounts that deal with frames of reference (FoRs). It reviews eight such accounts by identifying their conceptual ingredients and principles for space-time mapping, and it explores the potential for their integration. To evaluate their feasibility, data from some thirty empirical studies, conducted with speakers of sixteen different languages, are then scrutinized. This includes a critical assessment of the methods employed, a summary of the findings for each language group, and a (re-)analysis of the data in view of the theoretical questions. The discussion relates these findings to research on the mental time line, and explores the psychological reality of temporal FoRs, the degree of cross-domain consistency in FoR adoption, the role of deixis, and the sources and extent of space-time mapping more generally. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Emotions and Motivation in Mathematics Education: Theoretical Considerations and Empirical Contributions

    ERIC Educational Resources Information Center

    Schukajlow, Stanislaw; Rakoczy, K.; Pekrun, R.

    2017-01-01

    Emotions and motivation are important prerequisites, mediators, and outcomes of learning and achievement. In this article, we first review major theoretical approaches and empirical findings in research on students' emotions and motivation in mathematics, including a discussion of how classroom instruction can support emotions and motivation.…

  16. A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test

    ERIC Educational Resources Information Center

    Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.

    2012-01-01

    Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…

  17. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  18. Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces

    ERIC Educational Resources Information Center

    Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong

    2010-01-01

    Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…

  19. A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test

    ERIC Educational Resources Information Center

    Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.

    2012-01-01

    Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…

  20. Cultural Practices and the Conception of Individual Differences: Theoretical and Empirical Considerations.

    ERIC Educational Resources Information Center

    Nunes, Terezinha

    1995-01-01

    Considers empirical evidence and theoretical issues that point out the need to reconceptualize individual differences in psychology. Studies use of arithmetic in everyday life and in the classrooms to explore consequences of cultural practices, the nature of individual differences in "ability," and links between practices and identity.…

  1. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  2. Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces

    ERIC Educational Resources Information Center

    Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong

    2010-01-01

    Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…

  3. Social Experiences with Peers and High School Graduation: A Review of Theoretical and Empirical Research

    ERIC Educational Resources Information Center

    Veronneau, Marie-Helene; Vitaro, Frank

    2007-01-01

    This article reviews theoretical and empirical work on the relations between child and adolescent peer experiences and high school graduation. First, the different developmental models that guide research in this domain will be explained. Then, descriptions of peer experiences at the group level (peer acceptance/rejection, victimisation, and crowd…

  4. Theoretical and empirical dimensions of the Aberdeen Glaucoma Questionnaire: a cross sectional survey and principal component analysis

    PubMed Central

    2013-01-01

    .5% of participants (p < 0.001). Conclusions This paper addresses a methodological gap in the application of classical test theory (CTT) techniques, such as PCA, in instrument development. Labels for empirically-derived factors are often selected intuitively whereas they can inform existing bodies of knowledge if selected on the basis of theoretical construct labels, which are more explicitly defined and which relate to each other in ways that are evidence based. PMID:24268026

  5. [Masked orthographic priming in the recognition of written words: empirical data and theoretical prospects].

    PubMed

    Robert, Christelle

    2009-12-01

    The present paper reviews the main studies that have been conducted on the effects of masked orthographic priming in written word recognition. Empirical data accumulated over the last two decades are presented in terms of three factors that play a role in orthographic priming effects: prime lexicality, prime duration, and target and/or prime orthographic neighbourhood. The theoretical implications of these data are discussed in light of the two major frameworks of visual word recognition, serial search and interactive activation. On the whole, the interactive activation hypothesis seems better able to account for the empirical data.

  6. Quantifying heterogeneity attributable to polythetic diagnostic criteria: theoretical framework and empirical application.

    PubMed

    Olbert, Charles M; Gala, Gary J; Tupler, Larry A

    2014-05-01

    Heterogeneity within psychiatric disorders is both theoretically and practically problematic: For many disorders, it is possible for 2 individuals to share very few or even no symptoms in common yet share the same diagnosis. Polythetic diagnostic criteria have long been recognized to contribute to this heterogeneity, yet no unified theoretical understanding of the coherence of symptom criteria sets currently exists. A general framework for analyzing the logical and mathematical structure, coherence, and diversity of Diagnostic and Statistical Manual diagnostic categories (DSM-5 and DSM-IV-TR) is proposed, drawing from combinatorial mathematics, set theory, and information theory. Theoretical application of this framework to 18 diagnostic categories indicates that in most categories, 2 individuals with the same diagnosis may share no symptoms in common, and that any 2 theoretically possible symptom combinations will share on average less than half their symptoms. Application of this framework to 2 large empirical datasets indicates that patients who meet symptom criteria for major depressive disorder and posttraumatic stress disorder tend to share approximately three-fifths of symptoms in common. For both disorders in each of the datasets, pairs of individuals who shared no common symptoms were observed. Any 2 individuals with either diagnosis were unlikely to exhibit identical symptomatology. The theoretical and empirical results stemming from this approach have substantive implications for etiological research into, and measurement of, psychiatric disorders.
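
    The combinatorial point can be made concrete with a short sketch. It assumes a DSM-style "any m of n" rule over interchangeable singleton symptoms (a simplification: the paper's framework also handles compound, disjunctive criteria, which can drive the minimum overlap between two diagnosed individuals to zero).

```python
from itertools import combinations

def qualifying_profiles(n_symptoms, m_required):
    """All symptom subsets satisfying an 'at least m of n' polythetic rule."""
    pool = range(n_symptoms)
    for k in range(m_required, n_symptoms + 1):
        for combo in combinations(pool, k):
            yield frozenset(combo)

# Major depressive disorder-style rule: at least 5 of 9 criteria
profiles = list(qualifying_profiles(9, 5))
overlaps = [len(a & b) for a, b in combinations(profiles, 2)]

print(f"qualifying symptom profiles: {len(profiles)}")   # 256
print(f"minimum shared symptoms: {min(overlaps)}")       # 2*5 - 9 = 1
print(f"mean shared symptoms: {sum(overlaps) / len(overlaps):.2f}")
```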

  7. Color and psychological functioning: a review of theoretical and empirical work.

    PubMed

    Elliot, Andrew J

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application.

  8. Color and psychological functioning: a review of theoretical and empirical work

    PubMed Central

    Elliot, Andrew J.

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application. PMID:25883578

  9. Empirically Based Play Interventions for Children

    ERIC Educational Resources Information Center

    Reddy, Linda A., Ed.; Files-Hall, Tara M., Ed.; Schaefer, Charles E., Ed.

    2005-01-01

    "Empirically Based Play Interventions for Children" is a compilation of innovative, well-designed play interventions, presented for the first time in one text. Play therapy is the oldest and most popular form of child therapy in clinical practice and is widely considered by practitioners to be uniquely responsive to children's developmental needs.…

  10. Empirically Based Play Interventions for Children

    ERIC Educational Resources Information Center

    Reddy, Linda A., Ed.; Files-Hall, Tara M., Ed.; Schaefer, Charles E., Ed.

    2005-01-01

    "Empirically Based Play Interventions for Children" is a compilation of innovative, well-designed play interventions, presented for the first time in one text. Play therapy is the oldest and most popular form of child therapy in clinical practice and is widely considered by practitioners to be uniquely responsive to children's developmental needs.…

  11. Dignity in the care of older people – a review of the theoretical and empirical literature

    PubMed Central

    Gallagher, Ann; Li, Sarah; Wainwright, Paul; Jones, Ian Rees; Lee, Diana

    2008-01-01

    Background Dignity has become a central concern in UK health policy in relation to older and vulnerable people. The empirical and theoretical literature relating to dignity is extensive and as likely to confound and confuse as to clarify the meaning of dignity for nurses in practice. The aim of this paper is critically to examine the literature and to address the following questions: What does dignity mean? What promotes and diminishes dignity? And how might dignity be operationalised in the care of older people? This paper critically reviews the theoretical and empirical literature relating to dignity and clarifies the meaning and implications of dignity in relation to the care of older people. If nurses are to provide dignified care, clarification is an essential first step. Methods This is a review article, critically examining papers reporting theoretical perspectives and empirical studies relating to dignity. The following databases were searched: Assia, BHI, CINAHL, Social Services Abstracts, IBSS, Web of Knowledge Social Sciences Citation Index, and Arts & Humanities Citation Index, together with the location of books and chapters in the philosophy literature. An analytical approach was adopted to the publications reviewed, focusing on the objectives of the review. Results and discussion We review a range of theoretical and empirical accounts of dignity and identify key dignity-promoting factors evident in the literature, including staff attitudes and behaviour; environment; culture of care; and the performance of specific care activities. Although there is scope to learn more about cultural aspects of dignity, we know a good deal about dignity in care in general terms. Conclusion We argue that what is required is to provide sufficient support and education to help nurses understand dignity and adequate resources to operationalise dignity in their everyday practice. Using the themes identified from our review, we offer proposals for the direction of future research. PMID:18620561

  12. Theoretical and Empirical Equations of State for Nitrogen Gas at High Pressure and Temperature

    DTIC Science & Technology

    1981-09-01

    probably in the gas phase. Otherwise, there would not be evidence of an exponential dependence of pressure on the burning rate. In view of the... the energy of the products formed. The products formed depend on the pressure, the temperature, and the composition of the propellant gas. Thus, the... (Technical Report ARLCD-TR-81029.)

  13. A review of the nurtured heart approach to parenting: evaluation of its theoretical and empirical foundations.

    PubMed

    Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E

    2013-09-01

    The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., those with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of all children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically.

  14. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    PubMed

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

    The drying kinetics of thyme were analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. For the empirical models, the dependence of each model's parameters on the drying temperature was also determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance; the neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
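
    The Arrhenius relationship quoted above is straightforward to evaluate. In this sketch, Ea = 49.42 kJ/mol is taken from the abstract; pairing the reported lower diffusivity bound with the 40°C condition, and hence the back-solved pre-exponential factor D0, is an assumption for illustration.

```python
import math

R = 8.314      # gas constant, J/(mol K)
EA = 49.42e3   # activation energy from the abstract, J/mol
D40 = 3.68e-5  # 1/s; assumed to be the effective diffusivity at 40 degrees C

# Back-solve the pre-exponential factor from the assumed 40 degrees C value
D0 = D40 / math.exp(-EA / (R * (40 + 273.15)))

def d_eff(temp_c):
    """Effective diffusivity per unit area (1/s) at temp_c degrees Celsius."""
    return D0 * math.exp(-EA / (R * (temp_c + 273.15)))

for t in (40, 50, 60, 70):
    print(f"{t} C: D_eff = {d_eff(t):.2e} 1/s")
```

    Under that pairing, the predicted 70°C value comes out near 1.9 × 10⁻⁴ s⁻¹, close to the reported upper bound, so the quoted range is at least internally consistent with the quoted activation energy.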

  15. Conceptual and empirical problems with game theoretic approaches to language evolution

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.

    2014-01-01

    The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions. PMID:24678305

  16. Uncovering curvilinear relationships between conscientiousness and job performance: how theoretically appropriate measurement makes an empirical difference.

    PubMed

    Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M

    2014-07-01

    The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the 2. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from 2 different samples of job incumbents, we show that the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models show mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.
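
    The dominance-versus-ideal-point contrast is easy to visualize with two toy item response functions. The sketch below uses a 2PL logistic for the dominance model and a squared-distance kernel for the ideal point model; studies in this literature typically fit the GGUM, and all parameter values here are arbitrary.

```python
import math

def p_dominance(theta, a=1.5, b=0.0):
    # 2PL logistic: endorsement probability rises monotonically with the trait
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_ideal_point(theta, delta=1.0, scale=1.0):
    # Proximity model: endorsement peaks where person and item "match",
    # so very high-trait respondents can reject a moderately worded item
    return math.exp(-((theta - delta) ** 2) / (2 * scale ** 2))

for theta in (-2, -1, 0, 1, 2, 3):
    print(f"theta={theta:+d}  dominance={p_dominance(theta):.2f}  "
          f"ideal_point={p_ideal_point(theta):.2f}")
```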

  17. Why it is hard to find genes associated with social science traits: theoretical and empirical considerations.

    PubMed

    Chabris, Christopher F; Lee, James J; Benjamin, Daniel J; Beauchamp, Jonathan P; Glaeser, Edward L; Borst, Gregoire; Pinker, Steven; Laibson, David I

    2013-10-01

    We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher's geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies.
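
    The power argument at the core of the paper can be sketched with the usual noncentral chi-square approximation for a 1-degree-of-freedom association test. This is a generic illustration rather than the authors' own calculation, and the 0.1%-of-variance effect size is an assumption.

```python
from scipy.stats import chi2, ncx2

def gwas_power(n, r2, alpha=5e-8):
    """Power to detect a variant explaining fraction r2 of trait variance."""
    ncp = n * r2 / (1 - r2)        # noncentrality of the 1-df test
    crit = chi2.isf(alpha, df=1)   # genome-wide significance threshold
    return ncx2.sf(crit, df=1, nc=ncp)

# A locus explaining 0.1% of variance is invisible at candidate-gene sample
# sizes and only becomes detectable with biobank-scale samples.
for n in (100, 1_000, 10_000, 100_000):
    print(f"N={n:>7,}: power = {gwas_power(n, r2=0.001):.4f}")
```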

  18. Why It Is Hard to Find Genes Associated With Social Science Traits: Theoretical and Empirical Considerations

    PubMed Central

    Lee, James J.; Benjamin, Daniel J.; Beauchamp, Jonathan P.; Glaeser, Edward L.; Borst, Gregoire; Pinker, Steven; Laibson, David I.

    2013-01-01

    Objectives. We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. Methods. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher’s geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Results. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. Conclusions. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies. PMID:23927501

  19. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological models of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.

  20. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants.

  1. Theoretical and empirical efficiency of sampling strategies for estimating upper arm elevation.

    PubMed

    Liv, Per; Mathiassen, Svend Erik; Svendsen, Susanne Wulff

    2011-05-01

    To investigate the statistical efficiency of strategies for sampling upper arm elevation data, which differed with respect to sample sizes and sample allocations within and across measurement days. The study was also designed to compare standard theoretical predictions of sampling efficiency, which rely on several assumptions about the data structure, with 'true' efficiency as determined by bootstrap simulations. Sixty-five sampling strategies were investigated using a data set containing minute-by-minute values of average right upper arm elevation, percentage of time with an arm elevated <15°, and percentage of time with an arm elevated >90° in a population of 23 house painters, 23 car mechanics, and 26 machinists, all followed for four full working days. Total sample times per subject between 30 and 240 min were subdivided into continuous time blocks between 1 and 240 min long, allocated to 1 or 4 days per subject. Within day(s), blocks were distributed using either a random or a fixed-interval principle. Sampling efficiency was expressed in terms of the variance of estimated mean exposure values of 20 subjects and assessed using standard theoretical models assuming independence between variables and homoscedasticity. Theoretical performance was compared to empirical efficiencies obtained by a nonparametric bootstrapping procedure. We found the assumptions of independence and homoscedasticity in the theoretical model to be violated, most notably expressed through an autocorrelation between measurement units within working days. The empirical variance of the mean exposure estimates decreased, i.e. sampling efficiency increased, for sampling strategies where measurements were distributed widely across time. Thus, the most efficient allocation strategy was to organize a sample into 1-min blocks collected at fixed time intervals across 4 days. Theoretical estimates of efficiency generally agreed with empirical variances if the sample was allocated into small blocks
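
    The key qualitative finding, that scattering short blocks across time beats one long block when measurements are autocorrelated, can be reproduced with a small Monte Carlo simulation. The paper bootstrapped real exposure data; the AR(1) series and every parameter value below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def subject_day(minutes=480, rho=0.9, mean=40.0, sd=10.0):
    # AR(1) series stands in for autocorrelated minute-by-minute arm elevation
    x = np.empty(minutes)
    x[0] = mean + rng.normal(0, sd)
    for t in range(1, minutes):
        x[t] = mean + rho * (x[t - 1] - mean) + rng.normal(0, sd)
    return x

def sampled_subject_mean(block_len, n_blocks):
    day = subject_day()
    starts = rng.integers(0, len(day) - block_len + 1, n_blocks)
    return np.mean([day[s:s + block_len].mean() for s in starts])

def var_of_group_mean(block_len, n_blocks, n_subjects=20, reps=300):
    group_means = [
        np.mean([sampled_subject_mean(block_len, n_blocks)
                 for _ in range(n_subjects)])
        for _ in range(reps)
    ]
    return np.var(group_means, ddof=1)

# Same 60 min of total sample time per subject, allocated two different ways
print("one 60-min block  :", round(var_of_group_mean(60, 1), 3))
print("sixty 1-min blocks:", round(var_of_group_mean(1, 60), 3))
```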

  2. New production in the equatorial Pacific: a comparison of field data with estimates derived from empirical and theoretical models

    NASA Astrophysics Data System (ADS)

    Laws, Edward A.

    2004-02-01

    Measurements of new production based on uptake of ¹⁵N-labeled nitrate in the equatorial Pacific are compared with estimates derived from empirical and theoretical models. Average f-ratios and new production are predicted to within 10-20% by a theoretical steady-state model in which temperature and primary production are the independent variables that determine new production. Examination of the results reveals that the theoretical model gives a very accurate representation of the pattern in new production at primary production rates below ~60-70 mmol C m⁻² d⁻¹ but systematically underestimates new production at higher primary production rates. The discrepancy between measured and predicted new production rates is significantly (p < 0.005) correlated with mean euphotic zone nitrate concentrations and drops to zero at nitrate concentrations less than 3 μM. A likely explanation for the bias is the imbalance between primary production and herbivore grazing that occurs in recently upwelled water. This imbalance cannot be taken into account in a steady-state model. At nitrate concentrations less than 3 μM, the new production characteristics of the system closely resemble those predicted by the steady-state model. An empirical model, based on data collected prior to 1979, significantly overestimates new production in the equatorial Pacific at primary production rates above roughly 20 mmol C m⁻² d⁻¹. Likely causes of the bias in the empirical model are the need to take temperature effects into account and artifacts in rate measurements made prior to the widespread acceptance of the need to use clean sampling and incubation techniques.

  3. Nonparametric Bayes Factors Based On Empirical Likelihood Ratios

    PubMed Central

    Vexler, Albert; Deng, Wei; Wilding, Gregory E.

    2012-01-01

    Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factor (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904
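
    The EL ratio underlying such Bayes factors can be computed with the standard Lagrange-multiplier solution. Below is a minimal sketch for a population mean on made-up data; -2 log R(mu) is asymptotically chi-square with 1 degree of freedom.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """Log empirical likelihood ratio log R(mu) for the mean of sample x."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return -np.inf  # mu lies outside the convex hull of the data
    n = len(z)
    # lambda solves sum z_i / (1 + lambda z_i) = 0 within the bracket that
    # keeps all implied weights p_i = 1 / (n (1 + lambda z_i)) in (0, 1]
    lo = (1 / n - 1) / z.max() + 1e-10
    hi = (1 / n - 1) / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return -np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=50)
for mu in (0.0, 0.5, 1.0):
    print(f"mu={mu}: -2 log R = {-2 * el_log_ratio(x, mu):.2f}")
```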

  4. SAGE II/Umkehr ozone comparisons and aerosols effects: An empirical and theoretical study. Final report

    SciTech Connect

    Newchurch, M.

    1997-09-15

    The objectives of this research were to: (1) examine empirically the aerosol effect on Umkehr ozone profiles using SAGE II aerosol and ozone data; (2) examine theoretically the aerosol effect on Umkehr ozone profiles; (3) examine the differences between SAGE II ozone profiles and both old- and new-format Umkehr ozone profiles for ozone-trend information; (4) reexamine SAGE I-Umkehr ozone differences with the most recent version of SAGE I data; and (5) contribute to the SAGE II science team.

  5. Scientific thinking in young children: theoretical advances, empirical research, and policy implications.

    PubMed

    Gopnik, Alison

    2012-09-28

    New theoretical ideas and empirical research show that very young children's learning and thinking are strikingly similar to much learning and thinking in science. Preschoolers test hypotheses against data and make causal inferences; they learn from statistics and informal experimentation, and from watching and listening to others. The mathematical framework of probabilistic models and Bayesian inference can describe this learning in precise ways. These discoveries have implications for early childhood education and policy. In particular, they suggest both that early childhood experience is extremely important and that the trend toward more structured and academic early childhood programs is misguided.

  6. An empirical comparison of information-theoretic selection criteria for multivariate behavior genetic models.

    PubMed

    Markon, Kristian E; Krueger, Robert F

    2004-11-01

    Information theory provides an attractive basis for statistical inference and model selection. However, little is known about the relative performance of different information-theoretic criteria in covariance structure modeling, especially in behavioral genetic contexts. To explore these issues, information-theoretic fit criteria were compared with regard to their ability to discriminate between multivariate behavioral genetic models under various model, distribution, and sample size conditions. Results indicate that performance depends on sample size, model complexity, and distributional specification. The Bayesian Information Criterion (BIC) is more robust to distributional misspecification than Akaike's Information Criterion (AIC) under certain conditions, and outperforms AIC in larger samples and when comparing more complex models. An approximation to the Minimum Description Length criterion (MDL; Rissanen, 1996, 2001), involving the empirical Fisher information matrix, exhibits variable patterns of performance due to the complexity of estimating Fisher information matrices. Results indicate that a relatively new information-theoretic criterion, Draper's Information Criterion (DIC; Draper, 1995), which shares features of the Bayesian and MDL criteria, performs similarly to or better than BIC. Results emphasize the importance of further research into theory and computation of information-theoretic criteria.
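
    For orientation, AIC and BIC differ only in how they penalize free parameters; BIC's log(n) factor is what makes it favor simpler models as samples grow, consistent with the pattern reported above. The log-likelihoods below are invented placeholders (DIC and the MDL approximation involve additional terms and are not shown).

```python
import math

def aic(loglik, k):
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)

# name: (maximized log-likelihood, free parameters) -- hypothetical values
# for nested variance-component models of the kind compared in the paper
models = {"ACE": (-1520.3, 6), "AE": (-1521.1, 4), "E": (-1580.9, 2)}
n = 600  # hypothetical number of twin pairs

for name, (ll, k) in models.items():
    print(f"{name}: AIC = {aic(ll, k):7.1f}   BIC = {bic(ll, k, n):7.1f}")
```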

  7. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  8. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  9. Common liability to addiction and “gateway hypothesis”: Theoretical, empirical and evolutionary perspective

    PubMed Central

    Vanyukov, Michael M.; Tarter, Ralph E.; Kirillova, Galina P.; Kirisci, Levent; Reynolds, Maureen D.; Kreek, Mary Jeanne; Conway, Kevin P.; Maher, Brion S.; Iacono, William G.; Bierut, Laura; Neale, Michael C.; Clark, Duncan B.; Ridenour, Ty A.

    2013-01-01

    Background Two competing concepts address the development of involvement with psychoactive substances: the “gateway hypothesis” (GH) and common liability to addiction (CLA). Method The literature on theoretical foundations and empirical findings related to both concepts is reviewed. Results The data suggest that drug use initiation sequencing, the core GH element, is variable and opportunistic rather than uniform and developmentally deterministic. The association between risks for use of different substances, if any, can be more readily explained by common underpinnings than by specific staging. In contrast, the CLA concept is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions. This commonality has identifiable neurobiological substrate and plausible evolutionary explanations. Conclusions Whereas the “gateway” hypothesis does not specify mechanistic connections between “stages”, and does not extend to the risks for addictions, the concept of common liability to addictions incorporates sequencing of drug use initiation as well as extends to related addictions and their severity, provides a parsimonious explanation of substance use and addiction co-occurrence, and establishes a theoretical and empirical foundation to research in etiology, quantitative risk and severity measurement, as well as targeted non-drug-specific prevention and early intervention. PMID:22261179

  10. Perceived barriers to children's active commuting to school: a systematic review of empirical, methodological and theoretical evidence.

    PubMed

    Lu, Wenhua; McKyer, E Lisako J; Lee, Chanam; Goodson, Patricia; Ory, Marcia G; Wang, Suojin

    2014-11-18

    Active commuting to school (ACS) may increase children's daily physical activity and help them maintain a healthy weight. Previous studies have identified various perceived barriers related to children's ACS. However, it is not clear whether and how these studies were methodologically sound and theoretically grounded. The purpose of this review was to critically assess the current literature on perceived barriers to children's ACS and provide recommendations for future studies. Empirically based literature on perceived barriers to ACS was systematically searched from six databases. A methodological quality scale (MQS) and a theory utilization quality scale (TQS) were created based on previously established instruments and tailored for the current review. Among the 39 studies that met the inclusion criteria, 19 (48.7%) reported statistically significant perceived barriers to children's ACS. The methodological and theory utilization qualities of the reviewed studies varied, with MQS scores ranging between 7 and 20 (Mean = 12.95, SD = 2.95) and TQS scores from 1 to 7 (Mean = 3.62, SD = 1.74). A detailed appraisal of the literature suggests several empirical, methodological, and theoretical recommendations for future studies on perceived barriers to ACS. Empirically, increasing the diversity of study regions and samples should be a high priority, particularly in Asian and European countries and among rural residents; more prospective and intervention studies are needed to determine the causal mechanisms linking the perceived factors and ACS; and future researchers should include policy-related barriers in their inquiries. Methodologically, the conceptualization of ACS should be standardized or at least well rationalized in future studies to ensure the comparability of results; researchers' awareness needs to be raised to improve the methodological rigor of studies, especially in regard to appropriate statistical analysis techniques, control variable estimation

  11. Linear regression calibration: theoretical framework and empirical results in EPIC, Germany.

    PubMed

    Kynast-Wolf, Gisela; Becker, Nikolaus; Kroke, Anja; Brandstetter, Birgit R; Wahrendorf, Jürgen; Boeing, Heiner

    2002-01-01

    Large scale dietary assessment instruments are usually based on the food frequency technique and have therefore to be tailored to the involved populations with respect to mode of application and inquired food items. In multicenter studies with different populations, the direct comparability of dietary data is therefore a challenge because each local dietary assessment tool might have its specific measurement error. Thus, for risk analysis the direct use of dietary measurements across centers requires a common reference. For example, in the European prospective cohort study EPIC (European Prospective Investigation into Cancer and Nutrition) a 24-hour recall was chosen to serve as such a reference instrument which was based on a highly standardized computer-assisted interview (EPIC-SOFT). The 24-hour recall was applied to a representative subset of EPIC participants in all centers. The theoretical framework of combining multicenter dietary information was previously published in several papers and is called linear regression calibration. It is based on a linear regression of the food frequency questionnaire to the reference. The regression coefficients describe the absolute and proportional scaling bias of the questionnaire with the 24-hour recall taken as reference. This article describes the statistical basis of the calibration approach and presents first empirical results of its application to fruit, cereals and meat consumption in EPIC Germany represented by the two EPIC centers, Heidelberg and Potsdam. It was found that fruit could be measured well by the questionnaire in both centers (λ̂ = 0.98 (males) and λ̂ = 0.95 (females) in Heidelberg, and λ̂ = 0.86 (males) and λ̂ = 0.7 (females) in Potsdam), cereals less (λ̂ = 0.53 (males) and λ̂ = 0.4 (females) in Heidelberg, and λ̂ = 0.53 (males) and λ̂ = 0.44 (females) in Potsdam), and that the assessment of meat (λ̂ = 0.72 (males) and
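
    A minimal sketch of the calibration step on simulated data: the reference instrument (the 24-hour recall) is regressed on the questionnaire, and the fitted slope plays the role of the scaling factor λ̂ reported above. All data-generating values below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
true_intake = rng.gamma(shape=4.0, scale=50.0, size=400)    # g/day, unobserved
recall_24h = true_intake + rng.normal(0, 30, size=400)      # reference method
ffq = 40 + 0.7 * true_intake + rng.normal(0, 60, size=400)  # questionnaire

# Regress the reference on the questionnaire: the slope (lambda-hat) captures
# proportional scaling bias, the intercept (alpha-hat) absolute bias.
lam, alpha = np.polyfit(ffq, recall_24h, 1)
print(f"lambda-hat = {lam:.2f}, alpha-hat = {alpha:.1f}")

# Calibrated questionnaire values, comparable across centers via the
# common reference instrument
ffq_calibrated = alpha + lam * ffq
```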

  12. Evolution of the empirical and theoretical foundations of eyewitness identification reform.

    PubMed

    Clark, Steven E; Moreland, Molly B; Gronlund, Scott D

    2014-04-01

    Scientists in many disciplines have begun to raise questions about the evolution of research findings over time (Ioannidis in Epidemiology, 19, 640-648, 2008; Jennions & Møller in Proceedings of the Royal Society, Biological Sciences, 269, 43-48, 2002; Mullen, Muellerleile, & Bryan in Personality and Social Psychology Bulletin, 27, 1450-1462, 2001; Schooler in Nature, 470, 437, 2011), since many phenomena exhibit decline effects: reductions in the magnitudes of effect sizes as empirical evidence accumulates. The present article examines empirical and theoretical evolution in eyewitness identification research. For decades, the field has held that there are identification procedures that, if implemented by law enforcement, would increase eyewitness accuracy, either by reducing false identifications, with little or no change in correct identifications, or by increasing correct identifications, with little or no change in false identifications. Despite the durability of this no-cost view, it is unambiguously contradicted by data (Clark in Perspectives on Psychological Science, 7, 238-259, 2012a; Clark & Godfrey in Psychonomic Bulletin & Review, 16, 22-42, 2009; Clark, Moreland, & Rush, 2013; Palmer & Brewer in Law and Human Behavior, 36, 247-255, 2012), raising questions as to how the no-cost view became well-accepted and endured for so long. Our analyses suggest that (1) seminal studies produced, or were interpreted as having produced, the no-cost pattern of results; (2) a compelling theory was developed that appeared to account for the no-cost pattern; (3) empirical results changed over the years, and subsequent studies did not reliably replicate the no-cost pattern; and (4) the no-cost view survived despite the accumulation of contradictory empirical evidence. Theories of memory that were ruled out by early data now appear to be supported by data, and the theory developed to account for early data now appears to be incorrect.

  13. Quantifying multi-dimensional functional trait spaces of trees: empirical versus theoretical approaches

    NASA Astrophysics Data System (ADS)

    Ogle, K.; Fell, M.; Barber, J. J.

    2016-12-01

    Empirical, field studies of plant functional traits have revealed important trade-offs among pairs or triplets of traits, such as the leaf (LES) and wood (WES) economics spectra. Trade-offs include correlations between leaf longevity (LL) vs specific leaf area (SLA), LL vs mass-specific leaf respiration rate (RmL), SLA vs RmL, and resistance to breakage vs wood density. Ordination analyses (e.g., PCA) show groupings of traits that tend to align with different life-history strategies or taxonomic groups. It is unclear, however, what underlies such trade-offs and emergent spectra. Do they arise from inherent physiological constraints on growth, or are they more reflective of environmental filtering? The relative importance of these mechanisms has implications for predicting biogeochemical cycling, which is influenced by trait distributions of the plant community. We address this question using an individual-based model of tree growth (ACGCA) to quantify the theoretical trait space of trees that emerges from physiological constraints. ACGCA's inputs include 32 physiological, anatomical, and allometric traits, many of which are related to the LES and WES. We fit ACGCA to 1.6 million USFS FIA observations of tree diameters and heights to obtain vectors of trait values that produce realistic growth, and we explored the structure of this trait space. No notable correlations emerged among the 496 trait pairs, but stepwise regressions revealed complicated multi-variate structure: e.g., relationships between pairs of traits (e.g., RmL and SLA) are governed by other traits (e.g., LL, radiation-use efficiency [RUE]). We also simulated growth under various canopy gap scenarios that impose varying degrees of environmental filtering to explore the multi-dimensional trait space (hypervolume) of trees that died vs survived. The centroid and volume of the hypervolumes differed among dead and live trees, especially under gap conditions leading to low mortality. Traits most predictive
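
    The centroid-and-volume comparison of trait hypervolumes can be sketched with convex hulls in a low-dimensional trait space (three traits here, for tractability; a full 32-trait hull would be computationally infeasible). The point clouds are random stand-ins for the fitted ACGCA trait vectors.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
# Hypothetical 3-trait vectors for trees that survived vs died in simulation
survivors = rng.normal(loc=0.0, scale=1.0, size=(300, 3))
dead = rng.normal(loc=0.5, scale=1.3, size=(300, 3))

for label, pts in (("survived", survivors), ("died", dead)):
    hull = ConvexHull(pts)  # volume of the occupied trait region
    print(f"{label}: centroid = {pts.mean(axis=0).round(2)}, "
          f"hull volume = {hull.volume:.1f}")
```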

  14. Issues and Controversies that Surround Recent Texts on Empirically Supported and Empirically Based Treatments

    ERIC Educational Resources Information Center

    Paul, Howard A.

    2004-01-01

    Since the 1993 APA task force of the Society of Clinical Psychology developed guidelines to apply data-based psychology to the identification of effective psychotherapy, there has been an increasing number of texts focusing on empirically based psychotherapy and empirically supported treatments. This manuscript examines recent key texts and…

  15. Agriculture and deforestation in the tropics: a critical theoretical and empirical review.

    PubMed

    Benhin, James K A

    2006-02-01

    Despite the important role that tropical forests play in human existence, their depletion, especially in the developing world, continues relentlessly. Agriculture has been cited as the major cause of this depletion. This paper discusses two main theoretical underpinnings for the role of agriculture in tropical deforestation: first, the forest biomass as an input in agricultural production, and second, the competition between agriculture and forestry underlined by their relative marginal benefits. These are supported by empirical evidence from selected countries in Africa and South America. The paper suggests a need to find a win-win situation to control the spate of tropical deforestation. This may imply improved technologies in the agriculture sector in the developing world, which would both increase agricultural production and help control the use of tropical forest as an input in agricultural production.

  16. The Influence of Education and Socialization on Radicalization: An Exploration of Theoretical Presumptions and Empirical Research.

    PubMed

    Pels, Trees; de Ruyter, Doret J

    2012-06-01

    BACKGROUND AND OBJECTIVE: Research into radicalization does not pay much attention to education. This is remarkable and possibly misses an important influence on the process of radicalization. Therefore this article sets out to explore the relation between education on the one hand and the onset or prevention of radicalization on the other hand. METHOD: This article is a theoretical literature review. It analyzed empirical studies (mainly from European countries) about the educational aims, content and style of Muslim parents and parents with (extreme) right-wing sympathies. RESULTS: Research examining similarity in right-wing sympathies between parents and children yields mixed results, but studies among adolescents point to a significant concordance. Research also showed that authoritarian parenting may play a significant role. Similar research among Muslim families was not found. While raising children with distrust and an authoritarian style are prevalent, their impact on adolescents has not been investigated. The empirical literature we reviewed does not give sufficient evidence to conclude that democratic ideals and an authoritative style of education are conducive to the development of a democratic attitude. CONCLUSION: There is a knowledge gap with regard to the influence of education on the onset or the prevention of radicalization. Schools and families are underappreciated sources of informal social control and social capital, and therefore the gap should be closed. If there is a better understanding of the effect of education, policies as well as interventions can be developed to assist parents and teachers in preventing radicalization.

  17. Patient perceptions of patient-centred care: empirical test of a theoretical model.

    PubMed

    Rathert, Cheryl; Williams, Eric S; McCaughey, Deirdre; Ishqaidef, Ghadir

    2015-04-01

    Patient perception measures are gaining increasing interest among scholars and practitioners. The aim of this study was to empirically examine a conceptual model of patient-centred care using patient perception survey data. Patient-centred care is one of the Institute of Medicine's objectives for improving health care in the 21st century. Patient interviews conducted by the Picker Institute/Commonwealth Fund in the 1980s resulted in a theoretical model and survey questions with dimensions and attributes patients defined as patient-centred. The present study used survey data from patients with overnight visits at 142 U.S. hospitals. Regression analysis found significant support for the theoretical model. Perceptions of emotional support had the strongest relationship with overall care ratings; coordination of care and physical comfort were strongly related as well. Understanding how patients experience their care can help improve understanding of what patients believe is patient-centred, and of how care processes relate to important patient outcomes. © 2012 John Wiley & Sons Ltd.
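
    A hedged sketch of the kind of regression reported (overall care rating on perceived-care dimensions); the dimension names, data, and coefficients below are placeholders, not the study's survey items or results.

    ```python
    # Illustrative OLS: overall rating regressed on three perceived-care
    # dimensions, using plain numpy least squares.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 500
    emotional_support = rng.normal(size=n)      # placeholder dimension scores
    coordination = rng.normal(size=n)
    physical_comfort = rng.normal(size=n)
    overall = (0.5 * emotional_support + 0.3 * coordination
               + 0.25 * physical_comfort + rng.normal(scale=0.8, size=n))

    X = np.column_stack([np.ones(n), emotional_support, coordination, physical_comfort])
    beta = np.linalg.lstsq(X, overall, rcond=None)[0]
    for name, b in zip(["intercept", "emotional support", "coordination",
                        "physical comfort"], beta):
        print(f"{name:18s} {b:+.3f}")
    ```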

  18. Enhanced FMAM based on empirical kernel map.

    PubMed

    Wang, Min; Chen, Songcan

    2005-05-01

    The existing morphological auto-associative memory models based on the morphological operations, typically including the morphological auto-associative memories (auto-MAM) proposed by Ritter et al. and our fuzzy morphological auto-associative memories (auto-FMAM), have many attractive advantages such as unlimited storage capacity, one-shot recall speed and good tolerance to single erosive or dilative noise. However, they are extremely vulnerable to noise that mixes erosion and dilation, which greatly degrades recall performance. To overcome this shortcoming, we focus on FMAM and propose an enhanced FMAM (EFMAM) based on the empirical kernel map. Although simple, EFMAM significantly improves on auto-FMAM in recognition accuracy under hybrid noise and in computational effort. Experiments conducted on the thumbnail-sized faces (28 x 23 and 14 x 11) scaled from the ORL database show average accuracies of 92%, 90%, and 88% with 40 classes under 10%, 20%, and 30% randomly generated hybrid noise, respectively, which are far higher than those of auto-FMAM (67%, 46%, 31%) under the same noise levels.
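
    For orientation, a minimal sketch of the classical morphological auto-associative memory (Ritter et al.'s min-memory W_XX) that FMAM and EFMAM build on; this is not the EFMAM algorithm itself, and the tiny patterns are invented for illustration.

    ```python
    # Classical morphological auto-associative memory (min-memory W_XX):
    # W[i, j] = min over stored patterns k of (x_i^k - x_j^k); recall uses the
    # max-plus product y_i = max_j (W[i, j] + x_j). W_XX recalls all stored
    # patterns exactly and tolerates suitably bounded erosive noise.
    import numpy as np

    def build_w(X):
        # X: (n_features, n_patterns), one pattern per column
        diffs = X[:, None, :] - X[None, :, :]   # diffs[i, j, k] = x_i^k - x_j^k
        return diffs.min(axis=2)

    def recall(W, x):
        return (W + x[None, :]).max(axis=1)     # max-plus product

    X = np.array([[3., 1.], [0., 2.], [5., 4.]])   # two 3-dim patterns as columns
    W = build_w(X)

    x = X[:, 0]
    assert np.allclose(recall(W, x), x)            # exact recall of a stored pattern

    x_eroded = x - np.array([0., 1., 0.])          # erosive noise: one value pushed down
    assert np.allclose(recall(W, x_eroded), x)     # the memory restores the pattern
    ```

    As the abstract notes, this min-memory breaks down once the noise also contains dilative components; that hybrid-noise gap is what EFMAM's empirical kernel map targets.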

  19. Theoretical Foundations for Evidence-Based Health Informatics: Why? How?

    PubMed

    Scott, Philip J; Georgiou, Andrew; Hyppönen, Hannele; Craven, Catherine K; Rigby, Michael; Brender McNair, Jytte

    2016-01-01

    A scientific approach to health informatics requires sound theoretical foundations. Health informatics implementation would be more effective if evidence-based and guided by theories about what is likely to work in what circumstances. We report on a Medinfo 2015 workshop on this topic jointly organized by the EFMI Working Group on Assessment of Health Information Systems and the IMIA Working Group on Technology Assessment and Quality Development. We discuss the findings of the workshop and propose an approach to consolidate empirical knowledge into testable middle-range theories.

  20. Coaching and guidance with patient decision aids: A review of theoretical and empirical evidence

    PubMed Central

    2013-01-01

    Background Coaching and guidance are structured approaches that can be used within or alongside patient decision aids (PtDAs) to facilitate the process of decision making. Coaching is provided by an individual, and guidance is embedded within the decision support materials. The purpose of this paper is to: a) present updated definitions of the concepts “coaching” and “guidance”; b) present an updated summary of current theoretical and empirical insights into the roles played by coaching/guidance in the context of PtDAs; and c) highlight emerging issues and research opportunities in this aspect of PtDA design. Methods We identified literature published since 2003 on shared decision making theoretical frameworks inclusive of coaching or guidance. We also conducted a sub-analysis of randomized controlled trials included in the 2011 Cochrane Collaboration Review of PtDAs with search results updated to December 2010. The sub-analysis was conducted on the characteristics of coaching and/or guidance included in any trial of PtDAs and trials that allowed the impact of coaching and/or guidance with PtDA to be compared to another intervention or usual care. Results Theoretical evidence continues to justify the use of coaching and/or guidance to better support patients in the process of thinking about a decision and in communicating their values/preferences with others. In 98 randomized controlled trials of PtDAs, 11 trials (11.2%) included coaching and 63 trials (64.3%) provided guidance. Compared to usual care, coaching provided alongside a PtDA improved knowledge and decreased mean costs. The impact on some other outcomes (e.g., participation in decision making, satisfaction, option chosen) was more variable, with some trials showing positive effects and other trials reporting no differences. For values-choice agreement, decisional conflict, adherence, and anxiety there were no differences between groups. None of these outcomes were worse when patients were exposed…

  1. Resonating minds: a school-independent theoretical conception and its empirical application to psychotherapeutic processes.

    PubMed

    Mergenthaler, Erhard

    2008-03-01

    The resonating minds theory will be introduced as a means to describe psychotherapeutic processes and change. It builds on the mind-brain interface: psychotherapeutic interventions cause change in the brain; the altered brain changes emotional, cognitive, and behavioral regulation; and these changes in turn shape the types of subsequent therapeutic interventions. For the empirical assessment of this theory, the therapeutic cycles model will be used. It is based on computer-assisted analysis of verbatim transcripts using emotional tone, abstraction and narrative style as language measures. Sample applications and studies are briefly presented in order to provide evidence for the applicability and face validity of this approach.

  2. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    NASA Astrophysics Data System (ADS)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While, in general, all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physically based and empirical models, as well as among models with different degrees of freedom in their equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters to clay content, and subsequent model application for prediction of measured isotherms, showed promise for the majority of investigated soils, for soils with distinctly kaolinitic or smectitic clay mineralogy the predicted isotherms did not closely match the measurements.
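
    As a concrete illustration of fitting one such isotherm model, the sketch below fits the GAB (Guggenheim-Anderson-de Boer) equation, a standard sorption-isotherm form, to synthetic data over the study's 0.03-0.93 water activity range; the abstract does not name its nine models, so the choice of GAB and all numbers are assumptions.

    ```python
    # Least-squares fit of the GAB isotherm
    # w(aw) = wm*C*K*aw / ((1 - K*aw) * (1 - K*aw + C*K*aw))
    # to synthetic water-content data.
    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, wm, C, K):
        return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

    aw = np.linspace(0.03, 0.93, 15)          # water activity grid from the study's range
    rng = np.random.default_rng(1)
    w_obs = gab(aw, 0.025, 8.0, 0.75) * rng.normal(1.0, 0.03, aw.size)  # synthetic "soil"

    (wm, C, K), _ = curve_fit(gab, aw, w_obs, p0=[0.02, 5.0, 0.7])
    print(f"fitted wm={wm:.4f}, C={C:.2f}, K={K:.3f}")
    ```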

  3. How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching.

    PubMed

    Wolbring, Tobias; Riordan, Patrick

    2016-05-01

    Plenty of studies show that the physical appearance of a person affects a variety of outcomes in everyday life. However, due to an incomplete theoretical explication and empirical problems in disentangling different beauty effects, it is unclear which mechanisms are at work. To clarify how beauty works, we present explanations from evolutionary theory and expectation states theory and show where both perspectives differ and where interlinkage appears promising. Using students' evaluations of teaching, we find observational and experimental evidence for the different causal pathways of physical attractiveness. First, independent raters strongly agree over the physical attractiveness of a person. Second, attractive instructors receive better student ratings. Third, students attend classes of attractive instructors more frequently, even after controlling for teaching quality. Fourth, we find no evidence that attractiveness effects become stronger if rater and ratee are of the opposite sex. Finally, the beauty premium turns into a penalty if an attractive instructor falls short of students' expectations. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. The complexities of defining optimal sleep: empirical and theoretical considerations with a special emphasis on children.

    PubMed

    Blunden, Sarah; Galland, Barbara

    2014-10-01

    The main aim of this paper is to consider relevant theoretical and empirical factors defining optimal sleep, and assess the relative importance of each in developing a working definition for, or guidelines about, optimal sleep, particularly in children. We consider whether optimal sleep is an issue of sleep quantity or of sleep quality. Sleep quantity is discussed in terms of duration, timing, variability and dose-response relationships. Sleep quality is explored in relation to continuity, sleepiness, sleep architecture and daytime behaviour. Potential limitations of sleep research in children are discussed, specifically the loss of research precision inherent in sleep deprivation protocols involving children. We discuss which outcomes are the most important to measure. We consider the notion that insufficient sleep may be a totally subjective finding, is impacted by the age of the reporter, driven by socio-cultural patterns and sleep-wake habits, and that, in some individuals, the driver for insufficient sleep can be viewed in terms of a cost-benefit relationship, curtailing sleep in order to perform better while awake. We conclude that defining optimal sleep is complex. The only method of capturing this elusive concept may be by somnotypology, taking into account duration, quality, age, gender, race, culture, the task at hand, and an individual's position in both sleep-alert and morningness-eveningness continuums. At the experimental level, a unified approach by researchers to establish standardized protocols to evaluate optimal sleep across paediatric age groups is required.

  5. Solubility of caffeine from green tea in supercritical CO2: a theoretical and empirical approach.

    PubMed

    Gadkari, Pravin Vasantrao; Balaraman, Manohar

    2015-12-01

    Decaffeination of fresh green tea was carried out with supercritical CO2 in the presence of ethanol as co-solvent. The solubility of caffeine in supercritical CO2 varied from 44.19 × 10⁻⁶ to 149.55 × 10⁻⁶ (mole fraction) over a pressure and temperature range of 15 to 35 MPa and 313 to 333 K, respectively. The maximum solubility of caffeine was obtained at 25 MPa and 323 K. Experimental solubility data were correlated with the theoretical equation-of-state models Peng-Robinson (PR), Soave-Redlich-Kwong (SRK), and Redlich-Kwong (RK). The RK model regressed the experimental data with 15.52 % average absolute relative deviation (AARD). In contrast, the empirical Gordillo model regressed the experimental data best, with only 0.96 % AARD. Under supercritical conditions, the solubility of caffeine in the tea matrix was lower than the solubility of pure caffeine. Further, the solubility of caffeine in supercritical CO2 was compared with the solubility of pure caffeine in conventional solvents, and a maximum solubility of 90 × 10⁻³ (mole fraction) was obtained with chloroform.
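
    The AARD figure quoted above is a simple statistic; a short sketch of its computation follows, mixing the two solubility endpoints from the abstract with invented placeholder values rather than the paper's full data set.

    ```python
    # Average absolute relative deviation (AARD), in percent:
    # AARD = (100/N) * sum(|y_calc - y_exp| / y_exp)
    import numpy as np

    def aard(y_exp, y_calc):
        y_exp, y_calc = np.asarray(y_exp), np.asarray(y_calc)
        return 100.0 * np.mean(np.abs(y_calc - y_exp) / y_exp)

    y_exp  = np.array([44.19e-6, 80.0e-6, 149.55e-6])  # measured mole fractions (middle value invented)
    y_calc = np.array([50.0e-6, 75.0e-6, 160.0e-6])    # model predictions (placeholders)
    print(f"AARD = {aard(y_exp, y_calc):.2f} %")
    ```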

  6. On the impact of empirical and theoretical star formation laws on galaxy formation

    NASA Astrophysics Data System (ADS)

    Lagos, Claudia Del P.; Lacey, Cedric G.; Baugh, Carlton M.; Bower, Richard G.; Benson, Andrew J.

    2011-09-01

    We investigate the consequences of applying different star formation laws in the galaxy formation model GALFORM. Three broad star formation laws are implemented: the empirical relations of Kennicutt-Schmidt and of Blitz & Rosolowsky, and the theoretical model of Krumholz, McKee & Tumlinson. These laws have no free parameters once calibrated against observations of the star formation rate (SFR) and gas surface density in nearby galaxies. We start from published models and investigate which observables are sensitive to a change in the star formation law, without altering any other model parameters. We show that changing the star formation law (i) does not significantly affect either the star formation history of the universe or the galaxy luminosity functions in the optical and near-infrared, due to an effective balance between the quiescent and burst star formation modes; (ii) greatly affects the cold gas contents of galaxies; and (iii) changes the location of galaxies in the SFR versus stellar mass plane, so that a second sequence of 'passive' galaxies arises, in addition to the known 'active' sequence. We show that this plane can be used to discriminate between the star formation laws.
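
    For reference, a sketch of the empirical Kennicutt-Schmidt law in code; the slope and normalisation are the commonly quoted Kennicutt (1998) disc-averaged values, quoted from memory and to be treated as approximate.

    ```python
    # Kennicutt-Schmidt law: Sigma_SFR ~ A * Sigma_gas^n, with Sigma_gas in
    # Msun/pc^2 and Sigma_SFR in Msun/yr/kpc^2 (A ~ 2.5e-4, n ~ 1.4).
    def sfr_surface_density(sigma_gas, A=2.5e-4, n=1.4):
        return A * sigma_gas**n

    for sigma in (10.0, 100.0, 1000.0):
        print(f"Sigma_gas = {sigma:7.1f} Msun/pc^2 -> "
              f"Sigma_SFR = {sfr_surface_density(sigma):.3e} Msun/yr/kpc^2")
    ```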

  7. Rural Employment, Migration, and Economic Development: Theoretical Issues and Empirical Evidence from Africa. Africa Rural Employment Paper No. 1.

    ERIC Educational Resources Information Center

    Byerlee, Derek; Eicher, Carl K.

    Employment problems in Africa were examined with special emphasis on rural employment and migration within the context of overall economic development. A framework was provided for analyzing rural employment in development; that framework was used to analyze empirical information from Africa; and theoretical issues were raised in analyzing rural…

  8. Should we adjust for a confounder if empirical and theoretical criteria yield contradictory results? A simulation study

    PubMed Central

    Lee, Paul H.

    2014-01-01

    Confounders can be identified by one of two main strategies: empirical or theoretical. Although confounder identification strategies that combine empirical and theoretical strategies have been proposed, the need for adjustment remains unclear if the empirical and theoretical criteria yield contradictory results due to random error. We simulated several scenarios to mimic either the presence or the absence of a confounding effect and tested the accuracy of the exposure-outcome association estimates with and without adjustment. Various criteria (a significance criterion, and the change-in-estimate (CIE) criterion with either a 10% cutoff or a simulated cutoff) were imposed, and a range of sample sizes were trialed. In the presence of a true confounding effect, unbiased estimates were obtained only by using the CIE criterion with a simulated cutoff. In the absence of a confounding effect, all criteria performed well regardless of adjustment. When the confounding factor was affected by both exposure and outcome, all criteria yielded accurate estimates without adjustment, but the adjusted estimates were biased. To conclude, theoretical confounders should be adjusted for regardless of the empirical evidence found; adjusting for factors that do not have a confounding effect has minimal impact on the estimates. Potential confounders affected by both exposure and outcome should not be adjusted for. PMID:25124526
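
    A small simulation in the spirit of the study, showing a true confounder, the crude versus adjusted exposure estimates, and the 10% change-in-estimate rule; the data-generating model and all coefficients are invented for illustration.

    ```python
    # Simulate a confounder C that affects both exposure X and outcome Y, then
    # compare the unadjusted and adjusted estimates with the 10% CIE criterion.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000
    C = rng.normal(size=n)                        # confounder
    X = 0.5 * C + rng.normal(size=n)              # exposure depends on C
    Y = 0.3 * X + 0.8 * C + rng.normal(size=n)    # outcome depends on X and C (true effect 0.3)

    def ols(y, *cols):
        design = np.column_stack([np.ones(len(y)), *cols])
        return np.linalg.lstsq(design, y, rcond=None)[0]

    b_crude = ols(Y, X)[1]       # unadjusted X coefficient (biased upward here)
    b_adj = ols(Y, X, C)[1]      # adjusted X coefficient (close to 0.3)
    cie = abs(b_crude - b_adj) / abs(b_crude)
    print(f"crude={b_crude:.3f}, adjusted={b_adj:.3f}, CIE={cie:.1%}")
    print("adjust for C" if cie > 0.10 else "CIE criterion not met")
    ```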

  9. A Theoretical Analysis of Social Interactions in Computer-based Learning Environments: Evidence for Reciprocal Understandings.

    ERIC Educational Resources Information Center

    Jarvela, Sanna; Bonk, Curtis Jay; Lehtinen, Erno; Lehti, Sirpa

    1999-01-01

    Presents a theoretical and empirical analysis of social interactions in computer-based learning environments. Explores technology use to support reciprocal understanding between teachers and students based on three technology-based learning environments in Finland and the United States, and discusses situated learning, cognitive apprenticeships,…

  10. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    ERIC Educational Resources Information Center

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  12. The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.

    PubMed

    de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M

    2017-06-01

    This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation…

  13. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  15. Discovering the Neural Nature of Moral Cognition? Empirical, Theoretical, and Practical Challenges in Bioethical Research with Electroencephalography (EEG).

    PubMed

    Wagner, Nils-Frederic; Chaves, Pedro; Wolff, Annemarie

    2017-02-28

    In this article we critically review the neural mechanisms of moral cognition that have recently been studied via electroencephalography (EEG). Such studies promise to shed new light on traditional moral questions by helping us to understand how effective moral cognition is embodied in the brain. It has been argued that conflicting normative ethical theories require different cognitive features and can, accordingly, in a broadly conceived naturalistic attempt, be associated with different brain processes that are rooted in different brain networks and regions. This potentially morally relevant brain activity has been empirically investigated through EEG-based studies on moral cognition. From neuroscientific evidence gathered in these studies, a variety of normative conclusions have been drawn and bioethical applications have been suggested. We discuss methodological and theoretical merits and demerits of the attempt to use EEG techniques in a morally significant way, point to legal challenges and policy implications, indicate the potential to reveal biomarkers of psychopathological conditions, and consider issues that might inform future bioethical work.

  16. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  17. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  18. Culminating Experience Empirical and Theoretical Research Projects, University of Tennessee at Chattanooga, Spring, 2005

    ERIC Educational Resources Information Center

    Watson, Sandy White, Ed.

    2005-01-01

    This document represents a sample collection of master's theses from the University of Tennessee at Chattanooga's Teacher Education Program, spring semester, 2005. The majority of these student researchers were simultaneously student teaching while writing their theses. Studies were empirical and conceptual in nature and demonstrate some ways in…

  19. The Status of the Counseling Relationship: An Empirical Review, Theoretical Implications, and Research Directions.

    ERIC Educational Resources Information Center

    Sexton, Thomas L.; Whiston, Susan C.

    1994-01-01

    Reviews studies of the counseling relationship, using Gelso and Carter's multidimensional model to summarize empirical support for "real," "unreal," and "working alliance" elements of the relationship. Discussion of the implications of a potential model shift in thinking about the counseling relationship outlines how adoption of social…

  20. Why Do People Need Self-Esteem? A Theoretical and Empirical Review

    ERIC Educational Resources Information Center

    Pyszczynski, Tom; Greenberg, Jeff; Solomon, Sheldon; Arndt, Jamie; Schimel, Jeff

    2004-01-01

    Terror management theory (TMT; J. Greenberg, T. Pyszczynski, & S. Solomon, 1986) posits that people are motivated to pursue positive self-evaluations because self-esteem provides a buffer against the omnipresent potential for anxiety engendered by the uniquely human awareness of mortality. Empirical evidence relevant to the theory is reviewed…

  2. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    PubMed

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support.

  3. Use of forensic science in investigating crimes of sexual violence: contrasting its theoretical potential with empirical realities.

    PubMed

    Johnson, Donald; Peterson, Joseph; Sommers, Ira; Baskin, Deborah

    2012-02-01

    This article contrasts the theoretical potential of modern forensic science techniques in the investigation of sexual violence cases with empirical research that has assessed the role played by scientific evidence in the criminal justice processing of sexual assault cases. First, the potential of forensic scientific procedures (including DNA testing) is outlined and the sexual assault literature that examines the importance of physical and forensic evidence in resolving such cases is reviewed. Then, empirical data from a recent National Institute of Justice (NIJ) study of 602 rapes are presented that describe the forensic evidence collected and examined in such cases and its impact on decisions to arrest, prosecute, adjudicate, and sentence defendants. The article closes with a discussion of research and policy recommendations to enhance the role played by forensic science evidence in sexual assault investigations.

  4. Public Disaster Communication and Child and Family Disaster Mental Health: a Review of Theoretical Frameworks and Empirical Evidence.

    PubMed

    Houston, J Brian; First, Jennifer; Spialek, Matthew L; Sorenson, Mary E; Koch, Megan

    2016-06-01

    Children have been identified as particularly vulnerable to psychological and behavioral difficulties following disaster. Public child and family disaster communication is one public health tool that can be utilized to promote coping/resilience and ameliorate maladaptive child reactions following an event. We conducted a review of the public disaster communication literature and identified three main functions of child and family disaster communication: fostering preparedness, providing psychoeducation, and conducting outreach. Our review also indicates that schools are a promising system for child and family disaster communication. We complete our review with three conclusions. First, theoretically, there appears to be a great opportunity for public disaster communication focused on child disaster reactions. Second, empirical research assessing the effects of public child and family disaster communication is essentially nonexistent. Third, despite the lack of empirical evidence in this area, there is opportunity for public child and family disaster communication efforts that address new domains.

  5. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
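
    The empirical-likelihood machinery itself is involved; as a simpler stand-in that illustrates what a test of stochastic ordering does, the sketch below uses a one-sided Kolmogorov-Smirnov statistic with a permutation null. This is explicitly not the paper's EL method, and the two samples are simulated placeholders.

    ```python
    # One-sided KS-type test of stochastic ordering: is sample x stochastically
    # larger than sample y? Large values of max(F_y - F_x) support that ordering.
    import numpy as np

    def ecdf(sample, grid):
        return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

    def ks_one_sided(x, y, grid):
        return np.max(ecdf(y, grid) - ecdf(x, grid))

    rng = np.random.default_rng(3)
    x = rng.normal(0.5, 1.0, 80)     # placeholder sample (e.g., one historical period)
    y = rng.normal(0.0, 1.0, 80)     # placeholder sample (e.g., another period)
    grid = np.sort(np.concatenate([x, y]))

    obs = ks_one_sided(x, y, grid)
    pooled = np.concatenate([x, y])
    perm = np.array([ks_one_sided(p[:80], p[80:], grid)
                     for p in (rng.permutation(pooled) for _ in range(2000))])
    print(f"statistic={obs:.3f}, permutation p-value={np.mean(perm >= obs):.4f}")
    ```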

  6. Theoretical and empirical investigations of KCl:Eu2+ for nearly water-equivalent radiotherapy dosimetry

    PubMed Central

    Zheng, Yuanshui; Han, Zhaohui; Driewer, Joseph P.; Low, Daniel A.; Li, H. Harold

    2010-01-01

    Purpose: The low effective atomic number, reusability, and other computed radiography-related advantages make europium doped potassium chloride (KCl:Eu2+) a promising dosimetry material. The purpose of this study is to model KCl:Eu2+ point dosimeters with a Monte Carlo (MC) method and, using this model, to investigate the dose responses of two-dimensional (2D) KCl:Eu2+ storage phosphor films (SPFs). Methods: KCl:Eu2+ point dosimeters were irradiated using a 6 MV beam at four depths (5–20 cm) for each of five square field sizes (5×5–25×25 cm2). The dose measured by KCl:Eu2+ was compared to that measured by an ionization chamber to obtain the magnitude of energy dependent dose measurement artifact. The measurements were simulated using DOSXYZnrc with phase space files generated by BEAMnrcMP. Simulations were also performed for KCl:Eu2+ films with thicknesses ranging from 1 μm to 1 mm. The work function of the prototype KCl:Eu2+ material was determined by comparing the sensitivity of a 150 μm thick KCl:Eu2+ film to a commercial BaFBr0.85I0.15:Eu2+-based SPF with a known work function. The work function was then used to estimate the sensitivity of a 1 μm thick KCl:Eu2+ film. Results: The simulated dose responses of prototype KCl:Eu2+ point dosimeters agree well with measurement data acquired by irradiating the dosimeters in the 6 MV beam with varying field size and depth. Furthermore, simulations with films demonstrate that an ultrathin KCl:Eu2+ film with thickness of the order of 1 μm would have nearly water-equivalent dose response. The simulation results can be understood using classic cavity theories. Finally, preliminary experiments and theoretical calculations show that ultrathin KCl:Eu2+ film could provide excellent signal in a 1 cGy dose-to-water irradiation. Conclusions: In conclusion, the authors demonstrate that KCl:Eu2+-based dosimeters can be accurately modeled by a MC method and that 2D KCl:Eu2+ films of the order of 1 μm thick would have…

  7. The determinants of successful collaboration: a review of theoretical and empirical studies.

    PubMed

    San Martín-Rodríguez, Leticia; Beaulieu, Marie-Dominique; D'Amour, Danielle; Ferrada-Videla, Marcela

    2005-05-01

    Successful collaboration in health care teams can be attributed to numerous elements, including processes at work in interpersonal relationships within the team (the interactional determinants), conditions within the organization (the organizational determinants), and the organization's environment (the systemic determinants). Through a review of the literature, this article presents a tabulated compilation of each of these determinant types as identified by empirical research and identifies the main characteristics of these determinants according to the conceptual work. We then present a "showcase" of recent Canadian policy initiatives--The Canadian Health Transition Fund (HTF)--to illustrate how the various categories of determinants can be mobilized. The literature review reveals that very little of the empirical work has dealt with determinants of interprofessional collaboration in health, particularly its organizational and systemic determinants. Furthermore, our overview of experience at the Canadian HTF suggests that a systemic approach should be adopted in evaluative research on the determinants of effective collaborative practice.

  8. Toward a theoretically based measurement model of the good life.

    PubMed

    Cheung, C K

    1997-06-01

    A theoretically based conceptualization of the good life should differentiate 4 dimensions: the hedonist good life, the dialectical good life, the humanist good life, and the formalist good life. These 4 dimensions incorporate previous fragmentary measures, such as life satisfaction, depression, work alienation, and marital satisfaction, to produce an integrative view. In the present study, 276 Hong Kong Chinese husbands and wives responded to a survey of 13 indicators for these 4 good life dimensions. Confirmatory hierarchical factor analysis showed that these indicators identified the 4 dimensions of the good life, which in turn converged to identify a second-order factor of the overall good life. The model demonstrates discriminant validity in that the first-order factors had high loadings on the overall good life factor despite being linked by a social desirability factor. Analysis further showed that the second-order factor model applied equally well to husbands and wives. Thus, the conceptualization appears to be theoretically and empirically adequate in incorporating previous conceptualizations of the good life.

  9. Field-dependence, extraversion and perception of the vertical: empirical and theoretical perspectives of the rod-and-frame test.

    PubMed

    Fine, B J; Danforth, A V

    1975-06-01

    Using conventional scoring procedures for the Rod-and-Frame Test (RFT), extraversion was shown to interact with field-dependence (defined by scores on the Hidden-Shapes Test), with field-dependent extraverts being the most inaccurate performers on the rod and frame. Of greater importance, serious questions were raised about theoretical and empirical aspects of the relationship between paper-and-pencil measures of field-dependence and performance on the rod and frame, and it was concluded that "what has...been demonstrated over the past ten years is the reliability of a relationship of questionable validity."

  10. Mechanisms of risk and resilience in military families: theoretical and empirical basis of a family-focused resilience enhancement program.

    PubMed

    Saltzman, William R; Lester, Patricia; Beardslee, William R; Layne, Christopher M; Woodward, Kirsten; Nash, William P

    2011-09-01

    Recent studies have confirmed that repeated wartime deployment of a parent exacts a toll on military children and families and that the quality and functionality of familial relations is linked to force preservation and readiness. As a result, family-centered care has increasingly become a priority across the military health system. FOCUS (Families OverComing Under Stress), a family-centered, resilience-enhancing program developed by a team at UCLA and Harvard Schools of Medicine, is a primary initiative in this movement. In a large-scale implementation project initiated by the Bureau of Navy Medicine, FOCUS has been delivered to thousands of Navy, Marine, Navy Special Warfare, Army, and Air Force families since 2008. This article describes the theoretical and empirical foundation and rationale for FOCUS, which is rooted in a broad conception of family resilience. We review the literature on family resilience, noting that an important next step in building a clinically useful theory of family resilience is to move beyond developing broad "shopping lists" of risk indicators by proposing specific mechanisms of risk and resilience. Based on the literature, we propose five primary risk mechanisms for military families and common negative "chain reaction" pathways through which they undermine the resilience of families contending with wartime deployments and parental injury. In addition, we propose specific mechanisms that mobilize and enhance resilience in military families and that comprise central features of the FOCUS Program. We describe these resilience-enhancing mechanisms in detail, followed by a discussion of the ways in which evaluation data from the program's first 2 years of operation supports the proposed model and the specified mechanisms of action.

  11. A system of safety management practices and worker engagement for reducing and preventing accidents: an empirical and theoretical investigation.

    PubMed

    Wachter, Jan K; Yorio, Patrick L

    2014-07-01

    The overall research objective was to theoretically and empirically develop the ideas around a system of safety management practices (ten practices were elaborated), to test their relationship with objective safety statistics (such as accident rates), and to explore how these practices work to achieve positive safety results (accident prevention) through worker engagement. Data were collected using safety manager, supervisor and employee surveys designed to assess and link safety management system practices, employee perceptions resulting from existing practices, and safety performance outcomes. Results indicate the following: there is a significant negative relationship between the presence of ten individual safety management practices, as well as the composite of these practices, with accident rates; there is a significant negative relationship between the level of safety-focused worker emotional and cognitive engagement with accident rates; safety management systems and worker engagement levels can be used individually to predict accident rates; safety management systems can be used to predict worker engagement levels; and worker engagement levels act as mediators between the safety management system and safety performance outcomes (such as accident rates). Even though the presence of safety management system practices is linked with incident reduction and may represent a necessary first-step in accident prevention, safety performance may also depend on mediation by safety-focused cognitive and emotional engagement by workers. Thus, when organizations invest in a safety management system approach to reducing/preventing accidents and improving safety performance, they should also be concerned about winning over the minds and hearts of their workers through human performance-based safety management systems designed to promote and enhance worker engagement. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
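
    The mediation claim above (engagement mediating the practices-accidents link) can be illustrated with a bootstrap of the indirect effect; the data-generating model, coefficients, and sample size below are invented placeholders, not the survey data.

    ```python
    # Bootstrap the indirect effect a*b: practices -> engagement (a), then
    # engagement -> accidents controlling for practices (b).
    import numpy as np

    rng = np.random.default_rng(11)
    n = 400
    practices = rng.normal(size=n)                      # safety management score
    engagement = 0.6 * practices + rng.normal(size=n)   # mediator
    accidents = -0.5 * engagement - 0.1 * practices + rng.normal(size=n)

    def indirect(idx):
        a = np.polyfit(practices[idx], engagement[idx], 1)[0]
        design = np.column_stack([np.ones(len(idx)), engagement[idx], practices[idx]])
        b = np.linalg.lstsq(design, accidents[idx], rcond=None)[0][1]
        return a * b

    boots = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect a*b: 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    ```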

  12. Empirically Based Comprehensive Treatment Program for Parasuicide.

    ERIC Educational Resources Information Center

    Clum, George A.; And Others

    1979-01-01

    Suggests secondary parasuicide prevention is the most viable path for future research. Aggressive case findings and primary prevention approaches have failed to reduce suicide attempt rates. A secondary prevention model, based on factors predictive of parasuicide, was developed. Stress reduction and cognitive restructuring were primary goals of…

  13. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.

    PubMed

    Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang

    2015-11-17

    Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association with higher-populated districts than GPs. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as large as that of population density. If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters that represent physicians' preferences in over- and undersupplied regions.

  14. Empirically Based Strategies for Preventing Juvenile Delinquency.

    PubMed

    Pardini, Dustin

    2016-04-01

    Juvenile crime is a serious public health problem that results in significant emotional and financial costs for victims and society. Using etiologic models as a guide, multiple interventions have been developed to target risk factors thought to perpetuate the emergence and persistence of delinquent behavior. Evidence suggests that the most effective interventions tend to have well-defined treatment protocols, focus on therapeutic approaches as opposed to external control techniques, and use multimodal cognitive-behavioral treatment strategies. Moving forward, there is a need to develop effective policies and procedures that promote the widespread adoption of evidence-based delinquency prevention practices across multiple settings.

  15. An improved theoretical approach to the empirical corrections of density functional theory

    NASA Astrophysics Data System (ADS)

    Lii, Jenn-Huei; Hu, Ching-Han

    2012-02-01

    An empirical correction to density functional theory (DFT) has been developed in this study. The approach, called correlation-corrected atomization-dispersion (CCAZD), involves short- and long-range terms. The short-range correction consists of bond (1,2-) and angle (1,3-) interactions, which remedies the deficiency of DFT in describing proto-branching stabilization effects. The long-range correction includes a Buckingham potential function aiming to account for dispersion interactions. The empirical corrections of DFT were parameterized to reproduce reported ΔH_f values of the training set containing alkane, alcohol and ether molecules. The ΔH_f of the training set molecules predicted by the CCAZD method combined with two different DFT methods, B3LYP and MPWB1K, with a 6-31G* basis set agreed well with the experimental data. For 106 alkane, alcohol and ether compounds, the average absolute deviations (AADs) in ΔH_f were 0.45 and 0.51 kcal/mol for B3LYP- and MPWB1K-CCAZD, respectively. Calculations of isomerization energies, rotational barriers and conformational energies further validated the CCAZD approach. The isomerization energies improved significantly with the CCAZD treatment. The AADs for 22 energies of isomerization reactions decreased from 3.55 and 2.44 to 0.55 and 0.82 kcal/mol for B3LYP and MPWB1K, respectively. This study also provided predictions of MM4, G3, CBS-QB3 and B2PLYP-D for comparison. The final test of the CCAZD approach, on the calculation of the cellobiose analog potential surface, also showed promising results. This study demonstrated that DFT calculations with CCAZD empirical corrections achieved very good agreement with reported values for various chemical reactions with a basis set as small as 6-31G*.
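
    The long-range term named above has the standard Buckingham form; a short sketch follows, with invented parameters rather than the fitted CCAZD values.

    ```python
    # Buckingham potential: exponential repulsion plus r^-6 dispersion,
    # E(r) = A*exp(-B*r) - C/r^6 (arbitrary units; A, B, C are placeholders).
    import numpy as np

    def buckingham(r, A=1000.0, B=3.5, C=15.0):
        return A * np.exp(-B * r) - C / r**6

    for r in (2.5, 3.0, 3.5, 4.0, 5.0):
        print(f"r = {r:.1f}  E = {buckingham(r):+.5f}")
    ```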

  16. Corrective Feedback in L2 Writing: Theoretical Perspectives, Empirical Insights, and Future Directions

    ERIC Educational Resources Information Center

    Van Beuningen, Catherine

    2010-01-01

    The role of (written) corrective feedback (CF) in the process of acquiring a second language (L2) has been an issue of considerable controversy among theorists and researchers alike. Although CF is a widely applied pedagogical tool and its use finds support in SLA theory, practical and theoretical objections to its usefulness have been raised…

  17. Predicting Child Abuse Potential: An Empirical Investigation of Two Theoretical Frameworks

    ERIC Educational Resources Information Center

    Begle, Angela Moreland; Dumas, Jean E.; Hanson, Rochelle F.

    2010-01-01

    This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a…

  18. Theoretical and Empirical Investigations of Integrated Mathematics and Science Education in the Middle Grades.

    ERIC Educational Resources Information Center

    Huntley, Mary Ann

    Integrated mathematics and science teaching and learning is a widely advocated yet largely unexplored phenomenon. This study involves an examination of middle school integrated mathematics and science education from two perspectives: in theory and in practice. The theoretical component of this research addresses the ill-defined nature of the…

  19. Interpersonal Relatedness, Self-Definition, and Their Motivational Orientation during Adolescence: A Theoretical and Empirical Integration.

    ERIC Educational Resources Information Center

    Shahar, Golan; Henrich, Christopher C.; Blatt, Sidney J.; Ryan, Richard; Little, Todd D.

    2003-01-01

    A theoretical model was examined linking early adolescent interpersonal relatedness and self-definition, autonomous and controlled regulation, and negative and positive life events. Findings indicated that self-criticism predicted less positive events, whereas efficacy predicted more positive events. Effects were fully mediated by absence and…

  20. Multiple Embedded Inequalities and Cultural Diversity in Educational Systems: A Theoretical and Empirical Exploration

    ERIC Educational Resources Information Center

    Verhoeven, Marie

    2011-01-01

    This article explores the social construction of cultural diversity in education, with a view to social justice. It examines how educational systems organize ethno-cultural difference and how this process contributes to inequalities. Theoretical resources are drawn from social philosophy as well as from recent developments in social organisation…

  2. Rural Schools, Social Capital and the Big Society: A Theoretical and Empirical Exposition

    ERIC Educational Resources Information Center

    Bagley, Carl; Hillyard, Sam

    2014-01-01

    The paper commences with a theoretical exposition of the current UK government's policy commitment to the idealised notion of the Big Society and the social capital currency underpinning its formation. The paper positions this debate in relation to the rural and adopts an ethnographically-informed methodological approach to provide an in-depth…

  3. The Role of Identity in Acculturation among Immigrant People: Theoretical Propositions, Empirical Questions, and Applied Recommendations

    ERIC Educational Resources Information Center

    Schwartz, Seth J.; Montgomery, Marilyn J.; Briones, Ervin

    2006-01-01

    The present paper advances theoretical propositions regarding the relationship between acculturation and identity. The most central thesis argued is that acculturation represents changes in cultural identity and that personal identity has the potential to "anchor" immigrant people during their transition to a new society. The article emphasizes…

  5. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    ERIC Educational Resources Information Center

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  6. Image-Based Empirical Modeling of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Adrian, Mark L.; Gallagher, D. L.

    2008-01-01

    A new suite of empirical models of plasmaspheric plasma based on remote, global images from the IMAGE EUV instrument is proposed for development. The purpose of these empirical models is to establish the statistical properties of the plasmasphere as a function of conditions. This suite of models will mark the first time the plasmaspheric plume is included in an empirical model. Development of these empirical plasmaspheric models will support synoptic studies (such as for wave propagation and growth, energetic particle loss through collisions and dust transport as influenced by charging) and serves as a benchmark against which physical models can be tested. The ability to know that a specific global density distribution occurs in response to specific magnetospheric and solar wind factors is a huge advantage over all previous in-situ based empirical models. The consequence of creating these new plasmaspheric models will be to provide much higher fidelity and much richer quantitative descriptions of the statistical properties of plasmaspheric plasma in the inner magnetosphere, whether that plasma is in the main body of the plasmasphere, nearby during recovery or in the plasmaspheric plume. Model products to be presented include statistical probabilities for being in the plasmasphere, near thermal He+ density boundaries and the complexity of its spatial structure.

  8. Cryoprotective agent and temperature effects on human sperm membrane permeabilities: convergence of theoretical and empirical approaches for optimal cryopreservation methods.

    PubMed

    Gilmore, J A; Liu, J; Woods, E J; Peter, A T; Critser, J K

    2000-02-01

    Previous reports have left unresolved discrepancies between human sperm cryopreservation methods developed using theoretical optimization approaches and those developed empirically. This study was designed to investigate possible reasons for the discrepancies. Human spermatozoa were exposed to 1 mol/l glycerol, 1 mol/l dimethyl sulphoxide (DMSO), 1 mol/l propylene glycol (PG) or 2 mol/l ethylene glycol (EG) at 22, 11 and 0 degrees C, then returned to isosmotic media while changes in cell volume were monitored. Activation energies (E(a)) of the hydraulic conductivity (L(p)) in the presence of cryoprotective agents (CPA) (L(p)(CPA)) were 22.2 (DMSO), 11.9 (glycerol), 15.8 (PG), and 7.8 (EG) kcal/mol. The E(a) values of the membrane permeability to CPA (P(CPA)) were 12.1 (DMSO), 10.4 (glycerol), 8.6 (PG) and 8.0 (EG) kcal/mol. These data indicated that even at low temperatures, EG permeates fastest. The high L(p)(CPA) in the presence of EG and low associated E(a) would allow spermatozoa to remain closer to equilibrium with the extracellular solution during slow cooling in the presence of ice. Collectively, these data suggest that the increase of the E(a) of L(p) in the presence of CPA at low temperature is the likely reason for the observed discrepancy between theoretical predictions of spermatozoa freezing response and empirical data.
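    The activation energies quoted above act through an Arrhenius temperature dependence; as a worked form (a standard parameterization in this literature, not a formula quoted from the paper itself), the hydraulic conductivity at temperature T follows

    ```latex
    L_p(T) \;=\; L_p(T_0)\,\exp\!\left[-\frac{E_a}{R}\left(\frac{1}{T}-\frac{1}{T_0}\right)\right]
    ```

    so a small E(a), such as the 7.8 kcal/mol reported for EG, means L(p) decays slowly on cooling from T_0, which is why ethylene glycol keeps cells closer to osmotic equilibrium during slow freezing.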

  9. Ecological risk and resilience perspective: a theoretical framework supporting evidence-based practice in schools.

    PubMed

    Powers, Joelle D

    2010-10-01

    Multidisciplinary school practitioners are clearly being called to use evidence-based practices from reputable sources such as their own professional organizations and federal agencies. In spite of this encouragement, most schools are not regularly employing empirically supported interventions. This paper further promotes the use of this approach by describing the theoretical support for evidence-based practice in schools. The ecological risk and resilience theoretical framework presented fills a gap in the literature and advocates for evidence-based practice in schools by illustrating how it can assist practitioners such as school social workers to better address problems associated with school failure.

  10. Gatekeeper Training for Suicide Prevention: A Theoretical Model and Review of the Empirical Literature

    DTIC Science & Technology

    2015-01-01

    This report reviews what is known about the effectiveness of gatekeepers and of gatekeeper training, and presents a theoretical model describing how gatekeeper training may influence intervention behavior, including the negative effects of organizational constraints on intervention behavior (Moore et al., 2011).

  11. Statistical learning as an individual ability: Theoretical perspectives and empirical evidence

    PubMed Central

    Siegelman, Noam; Frost, Ram

    2015-01-01

    Although the power of statistical learning (SL) in explaining a wide range of linguistic functions is gaining increasing support, relatively little research has focused on this theoretical construct from the perspective of individual differences. However, to be able to reliably link individual differences in a given ability such as language learning to individual differences in SL, three critical theoretical questions should be posed: Is SL a componential or unified ability? Is it nested within other general cognitive abilities? Is it a stable capacity of an individual? Following an initial mapping sentence outlining the possible dimensions of SL, we employed a battery of SL tasks in the visual and auditory modalities, using verbal and non-verbal stimuli, with adjacent and non-adjacent contingencies. SL tasks were administered along with general cognitive tasks in a within-subject design at two time points to explore our theoretical questions. We found that SL, as measured by some tasks, is a stable and reliable capacity of an individual. Moreover, we found SL to be independent of general cognitive abilities such as intelligence or working memory. However, SL is not a unified capacity, so that individual sensitivity to conditional probabilities is not uniform across modalities and stimuli. PMID:25821343

  12. Empirical Likelihood-Based Confidence Interval of ROC Curves.

    PubMed

    Su, Haiyan; Qin, Yongsong; Liang, Hua

    2009-11-01

    In this article we propose an empirical likelihood-based confidence interval for receiver operating characteristic curves which are based on a continuous-scale test. The approach is easily understood, simply implemented, and computationally efficient. The results from our simulation studies indicate that the finite-sample numerical performance slightly outperforms the most promising methods published recently. Two real datasets are analyzed by using the proposed method and the existing bootstrap-based method.
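    The abstract compares against an existing bootstrap-based method; as a rough illustration of that comparator only (not the empirical-likelihood construction itself, and with all data synthetic), a percentile-bootstrap interval for sensitivity at a fixed cutoff can be sketched as:

    ```python
    import numpy as np

    def roc_point(controls, cases, cutoff):
        """Empirical (FPR, TPR) of a continuous-scale test at one cutoff."""
        fpr = np.mean(controls >= cutoff)
        tpr = np.mean(cases >= cutoff)
        return fpr, tpr

    def bootstrap_tpr_ci(controls, cases, cutoff, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for sensitivity at a fixed cutoff."""
        rng = np.random.default_rng(seed)
        stats = []
        for _ in range(n_boot):
            c = rng.choice(cases, size=len(cases), replace=True)
            stats.append(np.mean(c >= cutoff))
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        return lo, hi

    rng = np.random.default_rng(1)
    controls = rng.normal(0.0, 1.0, 200)   # non-diseased test scores
    cases = rng.normal(1.2, 1.0, 150)      # diseased test scores
    print(roc_point(controls, cases, cutoff=0.5))
    print(bootstrap_tpr_ci(controls, cases, cutoff=0.5))
    ```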

  13. From the Cover: The growth of business firms: Theoretical framework and empirical evidence

    NASA Astrophysics Data System (ADS)

    Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene

    2005-12-01

    We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships. Keywords: proportional growth | preferential attachment | Laplace distribution
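    A minimal simulation sketch of a proportional-growth mechanism of the general kind described (illustrative only; the geometric unit-count mixture and all parameter choices are assumptions, not the paper's calibration):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy proportional-growth model: each firm is a collection of units, the
    # number of units varies widely across firms, and every unit's size is hit
    # by an independent multiplicative (lognormal) shock.
    n_firms = 20000
    n_units = rng.geometric(p=0.05, size=n_firms)          # broad unit-count mix
    growth = np.empty(n_firms)
    for i, k in enumerate(n_units):
        sizes = rng.lognormal(0.0, 1.0, k)                  # unit sizes
        shocks = rng.lognormal(0.0, 0.3, k)                 # multiplicative shocks
        growth[i] = np.log(np.sum(sizes * shocks) / np.sum(sizes))

    # The mixture over firms of different sizes yields a tent-shaped
    # (Laplace-like) body with far heavier tails than a single Gaussian.
    kurt = ((growth - growth.mean()) ** 4).mean() / growth.var() ** 2
    print("std:", growth.std().round(4), "kurtosis:", kurt.round(2))
    ```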

  14. Three essays on energy and environmental economics: Empirical, applied, and theoretical

    NASA Astrophysics Data System (ADS)

    Karney, Daniel Houghton

    Energy and environmental economics are closely related fields, as nearly all forms of energy production generate pollution and thus nearly all forms of environmental policy affect energy production and consumption. The three essays in this dissertation are related by their common themes of energy and environmental economics, but they differ in their methodologies. The first chapter is an empirical exercise that looks at the relationship between electricity price deregulation and maintenance outages at nuclear power plants. The second chapter is an applied theory paper that investigates environmental regulation in a multiple-pollutants setting. The third chapter develops a new methodology for constructing analytical general equilibrium models that can be used to study topics in energy and environmental economics.

  15. Chronic Pain in a Couples Context: A Review and Integration of Theoretical Models and Empirical Evidence

    PubMed Central

    Leonard, Michelle T.; Cano, Annmarie; Johansen, Ayna B.

    2007-01-01

    Researchers have become increasingly interested in the social context of chronic pain conditions. The purpose of this article is to provide an integrated review of the evidence linking marital functioning with chronic pain outcomes including pain severity, physical disability, pain behaviors, and psychological distress. We first present an overview of existing models that identify an association between marital functioning and pain variables. We then review the empirical evidence for a relationship between pain variables and several marital functioning variables including marital satisfaction, spousal support, spouse responses to pain, and marital interaction. On the basis of the evidence, we present a working model of marital and pain variables, identify gaps in the literature, and offer recommendations for research and clinical work. Perspective The authors provide a comprehensive review of the relationships between marital functioning and chronic pain variables to advance future research and help treatment providers understand marital processes in chronic pain. PMID:16750794

  16. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    ERIC Educational Resources Information Center

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  18. Agent-Based Models in Empirical Social Research

    ERIC Educational Resources Information Center

    Bruch, Elizabeth; Atwell, Jon

    2015-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first…

  19. Responses to Commentaries on Advances in Empirically Based Assessment.

    ERIC Educational Resources Information Center

    McConaughy, Stephanie H.

    1993-01-01

    Author of article (this issue) describing research program to advance assessment of children's behavioral and emotional problems; presenting conceptual framework for multiaxial empirically based assessment; and summarizing research efforts to develop cross-informant scales for scoring parent, teacher, and self-reports responds to commentaries on…

  20. Empirical Data Sets for Agent Based Modeling of Crowd Scenarios

    DTIC Science & Technology

    2009-08-06

    Crowd research involves large numbers of heterogeneous, interdependent individual actors and language barriers; empirical testing is difficult, and simulations require models based on real data, otherwise they are fiction.

  1. Guiding Empirical and Theoretical Explorations of Organic Matter Decay By Synthesizing Temperature Responses of Enzyme Kinetics, Microbes, and Isotope Fluxes

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Ballantyne, F.; Lehmeier, C.; Min, K.

    2014-12-01

    Soil organic matter (SOM) transformation rates generally increase with temperature, but whether this is realized depends on soil-specific features. To develop predictive models applicable to all soils, we must understand two key, ubiquitous features of SOM transformation: the temperature sensitivity of myriad enzyme-substrate combinations and temperature responses of microbial physiology and metabolism, in isolation from soil-specific conditions. Predicting temperature responses of production of CO2 vs. biomass is also difficult due to soil-specific features: we cannot know the identity of active microbes nor the substrates they employ. We highlight how recent empirical advances describing SOM decay can help develop theoretical tools relevant across diverse spatial and temporal scales. At a molecular level, temperature effects on purified enzyme kinetics reveal distinct temperature sensitivities of decay of diverse SOM substrates. Such data help quantify the influence of microbial adaptations and edaphic conditions on decay, have permitted computation of the relative availability of carbon (C) and nitrogen (N) liberated upon decay, and can be used with recent theoretical advances to predict changes in mass specific respiration rates as microbes maintain biomass C:N with changing temperature. Enhancing system complexity, we can subject microbes to temperature changes while controlling growth rate and without altering substrate availability or identity of the active population, permitting calculation of variables typically inferred in soils: microbial C use efficiency (CUE) and isotopic discrimination during C transformations. Quantified declines in CUE with rising temperature are critical for constraining model CUE estimates, and known changes in δ13C of respired CO2 with temperature is useful for interpreting δ13C-CO2 at diverse scales. We suggest empirical studies important for advancing knowledge of how microbes respond to temperature, and ideas for theoretical

  2. Social Health Inequalities and eHealth: A Literature Review With Qualitative Synthesis of Theoretical and Empirical Studies.

    PubMed

    Latulippe, Karine; Hamel, Christine; Giroux, Dominique

    2017-04-27

    eHealth is developing rapidly and brings with it a promise to reduce social health inequalities (SHIs). Yet, it appears that it also has the potential to increase them. The general objective of this review was to set out how to ensure that eHealth contributes to reducing SHIs rather than exacerbating them. This review has three objectives: (1) identifying characteristics of people at risk of experiencing social inequality in health; (2) determining the possibilities of developing eHealth tools that avoid increasing SHI; and (3) modeling the process of using an eHealth tool by people vulnerable to SHI. Following the EPPI approach (Evidence for Policy and Practice of Information of the Institute of Education at the University of London), two databases were searched for the terms SHIs and eHealth and their derivatives in titles and abstracts. Qualitative, quantitative, and mixed articles were included and evaluated. The software NVivo (QSR International) was employed to extract the data and allow for a metasynthesis of the data. Of the 73 articles retained, 10 were theoretical, 7 were from reviews, and 56 were based on empirical studies. Of the latter, 40 used a quantitative approach, 8 used a qualitative approach, 4 used a mixed-methods approach, and only 4 were based on a participatory action-research approach. The digital divide in eHealth is a serious barrier and contributes greatly to SHI. Ethnicity and low income are the most commonly used characteristics to identify people at risk of SHI. The most promising actions for reducing SHI via eHealth are to aim for universal access to eHealth tools, become aware of users' literacy level, create eHealth tools that respect the cultural attributes of future users, and encourage the participation of people at risk of SHI. eHealth has the potential to widen the gulf between those at risk of SHI and the rest of the population. The widespread expansion of eHealth technologies calls for rigorous consideration of

  3. Social Health Inequalities and eHealth: A Literature Review With Qualitative Synthesis of Theoretical and Empirical Studies

    PubMed Central

    Latulippe, Karine; Hamel, Christine; Giroux, Dominique

    2017-01-01

    Background eHealth is developing rapidly and brings with it a promise to reduce social health inequalities (SHIs). Yet, it appears that it also has the potential to increase them. Objectives The general objective of this review was to set out how to ensure that eHealth contributes to reducing SHIs rather than exacerbating them. This review has three objectives: (1) identifying characteristics of people at risk of experiencing social inequality in health; (2) determining the possibilities of developing eHealth tools that avoid increasing SHI; and (3) modeling the process of using an eHealth tool by people vulnerable to SHI. Methods Following the EPPI approach (Evidence for Policy and Practice of Information of the Institute of Education at the University of London), two databases were searched for the terms SHIs and eHealth and their derivatives in titles and abstracts. Qualitative, quantitative, and mixed articles were included and evaluated. The software NVivo (QSR International) was employed to extract the data and allow for a metasynthesis of the data. Results Of the 73 articles retained, 10 were theoretical, 7 were from reviews, and 56 were based on empirical studies. Of the latter, 40 used a quantitative approach, 8 used a qualitative approach, 4 used a mixed-methods approach, and only 4 were based on a participatory action-research approach. The digital divide in eHealth is a serious barrier and contributes greatly to SHI. Ethnicity and low income are the most commonly used characteristics to identify people at risk of SHI. The most promising actions for reducing SHI via eHealth are to aim for universal access to eHealth tools, become aware of users’ literacy level, create eHealth tools that respect the cultural attributes of future users, and encourage the participation of people at risk of SHI. Conclusions eHealth has the potential to widen the gulf between those at risk of SHI and the rest of the population. The widespread expansion of e

  4. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling

    PubMed Central

    Johnson, Shane D.; Groff, Elizabeth R.

    2014-01-01

    Objectives: The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity—agent-based computational modeling—that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Method: Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Results: Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Conclusion: Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs—not without its own issues—may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification. PMID:25419001

  5. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    PubMed

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity (agent-based computational modeling) that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.
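    As a toy illustration of the kind of ABM these two records discuss (a generic hotspot model with a repeat-victimization feedback; all parameters are invented, not drawn from any JRCD study):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Minimal grid ABM in the spirit of ecological crime-pattern models:
    # offender agents random-walk on a lattice; each cell's "attractiveness"
    # rises after a crime there and decays over time, so hotspots can emerge.
    SIZE, N_AGENTS, STEPS = 30, 50, 500
    attract = np.ones((SIZE, SIZE))
    pos = rng.integers(0, SIZE, size=(N_AGENTS, 2))
    crimes = np.zeros((SIZE, SIZE), dtype=int)

    for t in range(STEPS):
        moves = rng.integers(-1, 2, size=(N_AGENTS, 2))    # random walk
        pos = (pos + moves) % SIZE                          # periodic boundary
        for x, y in pos:
            p_offend = attract[x, y] / (1.0 + attract[x, y])
            if rng.random() < 0.05 * p_offend:
                crimes[x, y] += 1
                attract[x, y] += 0.5                        # repeat-victimization boost
        attract = 1.0 + (attract - 1.0) * 0.98              # decay toward baseline

    print("total crimes:", crimes.sum(), "max in one cell:", crimes.max())
    ```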

  6. Theoretical performance assessment and empirical analysis of super-resolution under unknown affine sensor motion.

    PubMed

    Thelen, Brian J; Valenzuela, John R; LeBlanc, Joel W

    2016-04-01

    This paper deals with super-resolution (SR) processing and associated theoretical performance assessment for under-sampled video data collected from a moving imaging platform with unknown motion and assuming a relatively flat scene. This general scenario requires joint estimation of the high-resolution image and the parameters that determine a projective transform that relates the collected frames to one another. A quantitative assessment of the variance in the random error as achieved through a joint-estimation approach (e.g., SR image reconstruction and motion estimation) is carried out via the general framework of M-estimators and asymptotic statistics. This approach provides a performance measure on estimating the fine-resolution scene when there is a lack of perspective information and represents a significant advancement over previous work that considered only the more specific scenario of mis-registration. A succinct overview of the theoretical framework is presented along with some specific results on the approximate random error for the case of unknown translation and affine motions. A comparison is given between the approximated random error and that actually achieved by an M-estimator approach to the joint-estimation problem. These results provide insight on the reduction in SR reconstruction accuracy when jointly estimating unknown inter-frame affine motion.

  7. Recombination rate variation and speciation: theoretical predictions and empirical results from rabbits and mice.

    PubMed

    Nachman, Michael W; Payseur, Bret A

    2012-02-05

    Recently diverged taxa may continue to exchange genes. A number of models of speciation with gene flow propose that the frequency of gene exchange will be lower in genomic regions of low recombination and that these regions will therefore be more differentiated. However, several population-genetic models that focus on selection at linked sites also predict greater differentiation in regions of low recombination simply as a result of faster sorting of ancestral alleles even in the absence of gene flow. Moreover, identifying the actual amount of gene flow from patterns of genetic variation is tricky, because both ancestral polymorphism and migration lead to shared variation between recently diverged taxa. New analytic methods have been developed to help distinguish ancestral polymorphism from migration. Along with a growing number of datasets of multi-locus DNA sequence variation, these methods have spawned a renewed interest in speciation models with gene flow. Here, we review both speciation and population-genetic models that make explicit predictions about how the rate of recombination influences patterns of genetic variation within and between species. We then compare those predictions with empirical data of DNA sequence variation in rabbits and mice. We find strong support for the prediction that genomic regions experiencing low levels of recombination are more differentiated. In most cases, reduced gene flow appears to contribute to the pattern, although disentangling the relative contribution of reduced gene flow and selection at linked sites remains a challenge. We suggest fruitful areas of research that might help distinguish between different models.

  8. Pharmaceuticals, political money, and public policy: a theoretical and empirical agenda.

    PubMed

    Jorgensen, Paul D

    2013-01-01

    Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting.

  9. Why do people need self-esteem? A theoretical and empirical review.

    PubMed

    Pyszczynski, Tom; Greenberg, Jeff; Solomon, Sheldon; Arndt, Jamie; Schimel, Jeff

    2004-05-01

    Terror management theory (TMT; J. Greenberg, T. Pyszczynski, & S. Solomon, 1986) posits that people are motivated to pursue positive self-evaluations because self-esteem provides a buffer against the omnipresent potential for anxiety engendered by the uniquely human awareness of mortality. Empirical evidence relevant to the theory is reviewed showing that high levels of self-esteem reduce anxiety and anxiety-related defensive behavior, reminders of one's mortality increase self-esteem striving and defense of self-esteem against threats in a variety of domains, high levels of self-esteem eliminate the effect of reminders of mortality on both self-esteem striving and the accessibility of death-related thoughts, and convincing people of the existence of an afterlife eliminates the effect of mortality salience on self-esteem striving. TMT is compared with other explanations for why people need self-esteem, and a critique of the most prominent of these, sociometer theory, is provided.

  10. The Operation Was a Success, but the Patient Died: Theoretical Orthodoxy versus Empirical Validation.

    ERIC Educational Resources Information Center

    Hershenson, David B.

    1992-01-01

    Reviews issues raised in ongoing debate between advocates of eclecticism and proponents of single-theory-based counseling. Sees essential issue for field of mental health counseling to be need to build theory base specific to profession. Asserts that adequate theory must be based on defining principles of mental health counseling profession and…

  11. Picture-word interference is a Stroop effect: A theoretical analysis and new empirical findings.

    PubMed

    Starreveld, Peter A; La Heij, Wido

    2017-06-01

    The picture-word interference (PWI) paradigm and the Stroop color-word interference task are often assumed to reflect the same underlying processes. On the basis of a PRP study, Dell'Acqua et al. (Psychonomic Bulletin & Review, 14: 717-722, 2007) argued that this assumption is incorrect. In this article, we first discuss the definitions of Stroop- and picture-word interference. Next, we argue that both effects consist of at least four components that correspond to four characteristics of the distractor word: (1) response-set membership, (2) task relevance, (3) semantic relatedness, and (4) lexicality. On the basis of this theoretical analysis, we conclude that the typical Stroop effect and the typical PWI effect mainly differ in the relative contributions of these four components. Finally, the results of an interference task are reported in which only the nature of the target - color or picture - was manipulated and all other distractor task characteristics were kept constant. The results showed no difference between color and picture targets with respect to all behavioral measures examined. We conclude that the assumption that the same processes underlie verbal interference in color and picture naming is warranted.

  12. Adult Coping with Childhood Sexual Abuse: A Theoretical and Empirical Review

    PubMed Central

    Walsh, Kate; Fortier, Michelle A.; DiLillo, David

    2009-01-01

    Coping has been suggested as an important element in understanding the long-term functioning of individuals with a history of child sexual abuse (CSA). The present review synthesizes the literature on coping with CSA, first by examining theories of coping with trauma, and, second by examining how these theories have been applied to studies of coping in samples of CSA victims. Thirty-nine studies were reviewed, including eleven descriptive studies of the coping strategies employed by individuals with a history of CSA, eighteen correlational studies of the relationship between coping strategies and long-term functioning of CSA victims, and ten investigations in which coping was examined as a mediational factor in relation to long-term outcomes. These studies provide initial information regarding early sexual abuse and subsequent coping processes. However, this literature is limited by several theoretical and methodological issues, including a failure to specify the process of coping as it occurs, a disparity between theory and research, and limited applicability to clinical practice. Future directions of research are discussed and include the need to understand coping as a process, identification of coping in relation to adaptive outcomes, and considerations of more complex mediational and moderational processes in the study of coping with CSA. PMID:20161502

  13. Linking predator risk and uncertainty to adaptive forgetting: a theoretical framework and empirical test using tadpoles.

    PubMed

    Ferrari, Maud C O; Brown, Grant E; Bortolotti, Gary R; Chivers, Douglas P

    2010-07-22

    Hundreds of studies have examined how prey animals assess their risk of predation. These studies work from the basic tenet that prey need to continually balance the conflicting demands of predator avoidance with activities such as foraging and reproduction. The information that animals gain regarding local predation risk is most often learned. Yet, the concept of 'memory' in the context of predation remains virtually unexplored. Here, our goal was (i) to determine if the memory window associated with predator recognition is fixed or flexible and, if it is flexible, (ii) to identify which factors affect the length of this window and in which ways. We performed an experiment on larval wood frogs, Rana sylvatica, to test whether the risk posed by, and the uncertainty associated with, the predator would affect the length of the tadpoles' memory window. We found that as the risk associated with the predator increases, tadpoles retain predator-related information for longer. Moreover, if the uncertainty about predator-related information increases, then prey use this information for a shorter period. We also present a theoretical framework aiming at highlighting both intrinsic and extrinsic factors that could affect the memory window of information use by prey individuals.

  14. Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence

    PubMed Central

    Pugh, Jonathan; Kahane, Guy; Maslen, Hannah; Savulescu, Julian

    2016-01-01

    Background: There is a lack of empirical data on lay attitudes toward different sorts of deception in medicine. However, lay attitudes toward deception should be taken into account when we consider whether deception is ever permissible in a medical context. The objective of this study was to examine lay attitudes of U.S. citizens toward different sorts of deception across different medical contexts. Methods: A one-time online survey was administered to U.S. users of the Amazon “Mechanical Turk” website. Participants were asked to answer questions regarding a series of vignettes depicting different sorts of deception in medical care, as well as a question regarding their general attitudes toward truth-telling. Results: Of the 200 respondents, the majority found the use of placebos in different contexts to be acceptable following partial disclosure but found it to be unacceptable if it involved outright lying. Also, 55.5% of respondents supported the use of sham surgery in clinical research, although 55% claimed that it would be unacceptable to deceive patients in this research, even if this would improve the quality of the data from the study. Respondents supported fully informing patients about distressing medical information in different contexts, especially when the patient is suffering from a chronic condition. In addition, 42.5% of respondents believed that it is worse to deceive someone by providing the person with false information than it is to do so by giving the person true information that is likely to lead them to form a false belief, without telling them other important information that shows it to be false. However, 41.5% believed that the two methods of deception were morally equivalent. Conclusions: Respondents believed that some forms of deception were acceptable in some circumstances. While the majority of our respondents opposed outright lying in medical contexts, they were prepared to support partial disclosure and the use of

  15. Empirical analysis on a keyword-based semantic system

    NASA Astrophysics Data System (ADS)

    Zhang, Zi-Ke; Lü, Linyuan; Liu, Jian-Guo; Zhou, Tao

    2008-12-01

    Keywords in scientific articles have found their significance in information filtering and classification. In this article, we empirically investigated statistical characteristics and evolutionary properties of keywords in a very famous journal, namely Proceedings of the National Academy of Science of the United States of America (PNAS), including frequency distribution, temporal scaling behavior, and decay factor. The empirical results indicate that the keyword frequency in PNAS approximately follows a Zipf’s law with exponent 0.86. In addition, there is a power-law correlation between the cumulative number of distinct keywords and the cumulative number of keyword occurrences. Extensive empirical analysis of some other journals’ data is also presented, with the decaying trends of the most popular keywords monitored. Interestingly, top journals from various subjects share very similar decaying tendencies, while journals with low impact factors exhibit completely different behavior. These empirical characteristics may shed some light on an in-depth understanding of semantic evolutionary behaviors. In addition, the analysis of keyword-based systems is helpful for the design of corresponding recommender systems.
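    A minimal sketch of the rank-frequency fit implied by the Zipf's-law claim (synthetic counts; with the actual PNAS keyword data, this fitting step is what would yield the reported exponent of roughly 0.86):

    ```python
    import numpy as np
    from collections import Counter

    # Rank-frequency check of a Zipf-like law: frequency ~ rank^(-alpha).
    # Synthetic keyword IDs are drawn from a Zipf distribution (the numpy
    # generator requires a > 1, so the synthetic exponent differs from 0.86).
    rng = np.random.default_rng(3)
    keywords = rng.zipf(a=2.0, size=50000)
    freqs = np.array(sorted(Counter(keywords).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)

    # Least-squares slope in log-log space estimates the Zipf exponent.
    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    print("estimated Zipf exponent:", round(-slope, 3))
    ```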

  16. A theoretical and empirical investigation of delayed growth response in the continuous culture of bacteria.

    PubMed

    Ellermeyer, Sean; Hendrix, Jerald; Ghoochan, Nariman

    2003-06-21

    When the growth of bacteria in a chemostat is controlled by limiting the supply of a single essential nutrient, the growth rate is affected both by the concentration of this nutrient in the culture medium and by the amount of time that it takes for the chemical and physiological processes that result in the production of new biomass. Thus, although the uptake of nutrient by cells is an essentially instantaneous process, the addition of new biomass is delayed by the amount of time that it takes to metabolize the nutrient. Mathematical models that incorporate this "delayed growth response" (DGR) phenomenon have been developed and analysed. However, because they are formulated in terms of parameters that are difficult to measure directly, these models are of limited value to experimentalists. In this paper, we introduce a DGR model that is formulated in terms of measurable parameters. In addition, we provide for this model a complete set of criteria for determining persistence versus extinction of the bacterial culture in the chemostat. Specifically, we show that DGR plays a role in determining persistence versus extinction only under certain ranges of chemostat operating parameters. It is also shown, however, that DGR plays a role in determining the steady-state nutrient and bacteria concentrations in all instances of persistence. The steady state and transient behavior of solutions of our model is found to be in agreement with data that we obtained in growing Escherichia coli 23716 in a chemostat with glucose as a limiting nutrient. One of the theoretical predictions of our model that does not occur in other DGR models is that under certain conditions a large delay in growth response might actually have a positive effect on the bacteria's ability to persist.
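    A rough numerical sketch of a delayed-growth-response chemostat of the general form studied in this literature (forward-Euler integration with a history buffer; all parameter values are illustrative, not the paper's fitted E. coli values):

    ```python
    import numpy as np

    # DGR chemostat with Monod uptake mu(S) = mu_max * S / (K + S), delay tau:
    #   S'(t) = D*(S0 - S) - (1/Y) * mu(S(t)) * x(t)
    #   x'(t) = exp(-D*tau) * mu(S(t - tau)) * x(t - tau) - D * x(t)
    D, S0, Y = 0.2, 1.0, 0.5          # dilution rate, feed conc., yield
    mu_max, K, tau = 1.0, 0.1, 2.0    # Monod parameters and growth delay
    dt, T = 0.01, 100.0
    n, lag = int(T / dt), int(tau / dt)

    mu = lambda s: mu_max * s / (K + s)
    S = np.full(n, 0.5)               # first `lag` entries double as history
    x = np.full(n, 0.05)
    for i in range(lag, n - 1):
        S[i + 1] = S[i] + dt * (D * (S0 - S[i]) - mu(S[i]) * x[i] / Y)
        x[i + 1] = x[i] + dt * (np.exp(-D * tau) * mu(S[i - lag]) * x[i - lag] - D * x[i])

    print("approx. steady state: S =", S[-1].round(4), " x =", x[-1].round(4))
    ```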

  17. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

    This paper examines innovation in information technology through both a theoretical and an empirical study; both concern Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. The paper discusses the major aspects of innovation, information technology, performance, and competitive advantage. The empirical study, at PT. Astra Honda Motor (AHM), concerns SMQR claims, communication systems, and systems analysis and design. Both the theoretical aspects and the empirical study are introduced in the Introduction and discussed in more detail in later sections, in particular in the Literature Review, which draws on both classical and current references. Rising SMQR claims and communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relied on email, lengthened claim settlement times and ultimately led suppliers to reject SMQR claims. To address this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system as analyzed and designed is expected to streamline claim communication so that it runs in accordance with procedure, meets the target claim settlement time, and eliminates the difficulties of the previous manual, email-based system. The system was designed following the systems development life cycle approach of Kendall and Kendall (2006); the design covers the SMQR problem communication process, the supplier's judgment process, and the claim, claim payment, and claim monitoring processes. After suitable system designs for managing SMQR claims were obtained, the system was implemented and the improvement in claim communication

  18. Determining VA physician requirements through empirically based models.

    PubMed Central

    Lipscomb, J; Kilpatrick, K E; Lee, K L; Pieper, K S

    1995-01-01

    OBJECTIVE: As part of a project to estimate physician requirements for the Department of Veterans Affairs, the Institute of Medicine (IOM) developed and tested empirically based models of physician staffing, by specialty, that could be applied to each VA facility. DATA SOURCE/STUDY SETTING: These analyses used selected data on all patient encounters and all facilities in VA's management information systems for FY 1989. STUDY DESIGN: Production functions (PFs), with patient workload dependent on physicians, other providers, and nonpersonnel factors, were estimated for each of 14 patient care areas in a VA medical center. Inverse production functions (IPFs), with physician staffing levels dependent on workload and other factors, were estimated for each of 11 specialty groupings. These models provide complementary approaches to deriving VA physician requirements for patient care and medical education. DATA COLLECTION/EXTRACTION METHODS: All data were assembled by VA and put in analyzable SAS data sets containing FY 1989 workload and staffing variables used in the PFs and IPFs. All statistical analyses reported here were conducted by the IOM. PRINCIPAL FINDINGS: Existing VA data can be used to develop statistically strong, clinically plausible, empirically based models for calculating physician requirements, by specialty. These models can (1) compare current physician staffing in a given setting with systemwide norms and (2) yield estimates of future staffing requirements conditional on future workload. CONCLUSIONS: Empirically based models can play an important role in determining VA physician staffing requirements. VA should test, evaluate, and revise these models on an ongoing basis. PMID:7860320
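    As a hedged sketch of the production-function (PF) side of this approach (synthetic facility data; variable names are illustrative, not VA management-information-system fields):

    ```python
    import numpy as np

    # Cobb-Douglas-style PF, estimated by OLS in logs: workload as a
    # function of physician and other-provider staffing.
    rng = np.random.default_rng(11)
    n = 160                                   # facilities
    physicians = rng.lognormal(2.0, 0.4, n)
    other_staff = rng.lognormal(2.5, 0.4, n)
    workload = 50 * physicians**0.6 * other_staff**0.3 * rng.lognormal(0, 0.15, n)

    X = np.column_stack([np.ones(n), np.log(physicians), np.log(other_staff)])
    beta, *_ = np.linalg.lstsq(X, np.log(workload), rcond=None)
    print("elasticities (physicians, other staff):", beta[1].round(3), beta[2].round(3))

    # The inverse production function (IPF) view regresses staffing on
    # workload instead, which is the form used to project requirements.
    ```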

  19. Studying Scale-Up and Spread as Social Practice: Theoretical Introduction and Empirical Case Study

    PubMed Central

    Shaw, James; Shaw, Sara; Wherton, Joseph; Hughes, Gemma; Greenhalgh, Trisha

    2017-01-01

    Background Health and care technologies often succeed on a small scale but fail to achieve widespread use (scale-up) or become routine practice in other settings (spread). One reason for this is under-theorization of the process of scale-up and spread, for which a potentially fruitful theoretical approach is to consider the adoption and use of technologies as social practices. Objective This study aimed to use an in-depth case study of assisted living to explore the feasibility and usefulness of a social practice approach to explaining the scale-up of an assisted-living technology across a local system of health and social care. Methods This was an individual case study of the implementation of a Global Positioning System (GPS) “geo-fence” for a person living with dementia, nested in a much wider program of ethnographic research and organizational case study of technology implementation across health and social care (Studies in Co-creating Assisted Living Solutions [SCALS] in the United Kingdom). A layered sociological analysis included micro-level data on the index case, meso-level data on the organization, and macro-level data on the wider social, technological, economic, and political context. Data (interviews, ethnographic notes, and documents) were analyzed and synthesized using structuration theory. Results A social practice lens enabled the uptake of the GPS technology to be studied in the context of what human actors found salient, meaningful, ethical, legal, materially possible, and professionally or culturally appropriate in particular social situations. Data extracts were used to illustrate three exemplar findings. First, professional practice is (and probably always will be) oriented not to “implementing technologies” but to providing excellent, ethical care to sick and vulnerable individuals. Second, in order to “work,” health and care technologies rely heavily on human relationships and situated knowledge. Third, such technologies do not

  20. Studying Scale-Up and Spread as Social Practice: Theoretical Introduction and Empirical Case Study.

    PubMed

    Shaw, James; Shaw, Sara; Wherton, Joseph; Hughes, Gemma; Greenhalgh, Trisha

    2017-07-07

    Health and care technologies often succeed on a small scale but fail to achieve widespread use (scale-up) or become routine practice in other settings (spread). One reason for this is under-theorization of the process of scale-up and spread, for which a potentially fruitful theoretical approach is to consider the adoption and use of technologies as social practices. This study aimed to use an in-depth case study of assisted living to explore the feasibility and usefulness of a social practice approach to explaining the scale-up of an assisted-living technology across a local system of health and social care. This was an individual case study of the implementation of a Global Positioning System (GPS) "geo-fence" for a person living with dementia, nested in a much wider program of ethnographic research and organizational case study of technology implementation across health and social care (Studies in Co-creating Assisted Living Solutions [SCALS] in the United Kingdom). A layered sociological analysis included micro-level data on the index case, meso-level data on the organization, and macro-level data on the wider social, technological, economic, and political context. Data (interviews, ethnographic notes, and documents) were analyzed and synthesized using structuration theory. A social practice lens enabled the uptake of the GPS technology to be studied in the context of what human actors found salient, meaningful, ethical, legal, materially possible, and professionally or culturally appropriate in particular social situations. Data extracts were used to illustrate three exemplar findings. First, professional practice is (and probably always will be) oriented not to "implementing technologies" but to providing excellent, ethical care to sick and vulnerable individuals. Second, in order to "work," health and care technologies rely heavily on human relationships and situated knowledge. Third, such technologies do not just need to be adopted by individuals; they need

  1. Reversing Language Shift: Theoretical and Empirical Foundations of Assistance to Threatened Languages. Multilingual Matters Series: 76.

    ERIC Educational Resources Information Center

    Fishman, Joshua A.

    The theory and practice of assistance to speech communities whose native languages are threatened are examined. The discussion focuses on why most efforts to reverse language shift are unsuccessful or even harmful, diagnosing difficulties and prescribing alternatives based on a combination of ethnolinguistic, sociocultural, and econotechnical…

  2. Student Conceptual Level and Models of Teaching: Theoretical and Empirical Coordination of Two Models

    ERIC Educational Resources Information Center

    Hunt, David E.; And Others

    1974-01-01

    The studies described here are the first in a series of investigations of the teaching-learning process based on Kurt Lewin's B-P-E paradigm (learning outcomes are a result of the interactive effects of different kinds of students and different kinds of teaching approaches). (JA)

  3. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  5. Coping, acculturation, and psychological adaptation among migrants: a theoretical and empirical review and synthesis of the literature.

    PubMed

    Kuo, Ben C H

    2014-01-01

    Given the continuous, dynamic demographic changes internationally due to intensive worldwide migration and globalization, the need to more fully understand how migrants adapt and cope with acculturation experiences in their new host cultural environment is imperative and timely. However, a comprehensive review of what we currently know about the relationship between coping behavior and acculturation experience for individuals undergoing cultural changes has not yet been undertaken. Hence, the current article aims to compile, review, and examine cumulative cross-cultural psychological research that sheds light on the relationships among coping, acculturation, and psychological and mental health outcomes for migrants. To this end, this present article reviews prevailing literature pertaining to: (a) the stress and coping conceptual perspective of acculturation; (b) four theoretical models of coping, acculturation and cultural adaptation; (c) differential coping pattern among diverse acculturating migrant groups; and (d) the relationship between coping variabilities and acculturation levels among migrants. In terms of theoretical understanding, this review points to the relative strengths and limitations associated with each of the four theoretical models on coping-acculturation-adaptation. These theories and the empirical studies reviewed in this article further highlight the central role of coping behaviors/strategies in the acculturation process and outcome for migrants and ethnic populations, both conceptually and functionally. Moreover, the review shows that across studies culturally preferred coping patterns exist among acculturating migrants and migrant groups and vary with migrants' acculturation levels. Implications and limitations of the existing literature for coping, acculturation, and psychological adaptation research are discussed and recommendations for future research are put forth.

  6. Theoretical and empirical study of 2-biphenylmethanol molecule: the structure and intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Babkov, L. M.; Baran, J.; Davydova, N. A.; Pietraszko, A.; Uspenskiy, K. E.

    2005-06-01

    The crystal structure of 2-biphenylmethanol has been studied by X-ray crystallography at room temperature and its IR transmittance spectra have been measured in the wide frequency region 400-4000 cm-1. The structure, energy, electrooptical parameters, frequencies and intensities in the IR spectra for the free molecules of 2-biphenylmethanol, methanol, and tetramer of hydrogen-bonded methanol molecules have been calculated at the B3LYP level of the density functional theory with the 6-31G* basis set. Based on an analysis of the results, the room-temperature IR spectra were interpreted and the hydrogen-bond energies estimated.
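    For readers who want to reproduce this level of theory, a minimal PySCF sketch of a B3LYP/6-31G* calculation is below, applied to free methanol with rough coordinates (the paper does not state its software, so PySCF and the geometry are our assumptions for illustration; the 2-biphenylmethanol and tetramer calculations would simply swap in those geometries):

    ```python
    # B3LYP/6-31G* single-point energy with PySCF; geometry is approximate.
    from pyscf import gto, dft

    mol = gto.M(
        atom="""
        C   0.000   0.000   0.000
        O   0.000   0.000   1.430
        H   1.030   0.000  -0.330
        H  -0.510  -0.890  -0.330
        H  -0.510   0.890  -0.330
        H   0.900   0.000   1.750
        """,
        basis="6-31g*",
    )
    mf = dft.RKS(mol)
    mf.xc = "b3lyp"
    e_tot = mf.kernel()          # SCF total energy in Hartree
    print("B3LYP/6-31G* energy (Ha):", e_tot)
    ```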

  7. Theoretical and empirical investigation of the structure and intermolecular interactions in 2-biphenylmethanol

    NASA Astrophysics Data System (ADS)

    Uspenskiy, K. E.; Babkov, L. M.; Baran, J.; Davydova, N. A.; Pietraszko, A.

    2005-06-01

    The crystal structure of 2-biphenylmethanol has been studied by X-ray crystallography at room temperature and its IR transmittance spectra have been measured in the wide frequency region 400-4000 cm-1. The structure, energy, electrooptical parameters, frequencies and intensities in the IR spectra for the free molecules of 2-biphenylmethanol, methanol, and tetramer of hydrogen-bonded methanol molecules have been calculated at the B3LYP level of the density functional theory with the 6-31G* basis set. Based on an analysis of the results, the room-temperature IR spectra were interpreted and the hydrogen-bond energies estimated.

  8. Parenting around child snacking: development of a theoretically-guided, empirically informed conceptual model.

    PubMed

    Davison, Kirsten K; Blake, Christine E; Blaine, Rachel E; Younginer, Nicholas A; Orloski, Alexandria; Hamtil, Heather A; Ganter, Claudia; Bruton, Yasmeen P; Vaughn, Amber E; Fisher, Jennifer O

    2015-09-17

    Snacking contributes to excessive energy intakes in children. Yet factors shaping child snacking are virtually unstudied. This study examines food parenting practices specific to child snacking among low-income caregivers. Semi-structured interviews were conducted in English or Spanish with 60 low-income caregivers of preschool-aged children (18 non-Hispanic white, 22 African American/Black, 20 Hispanic; 92% mothers). A structured interview guide was used to solicit caregivers' definitions of snacking and strategies they use to decide what, when and how much snack their child eats. Interviews were audio-recorded, transcribed verbatim and analyzed using an iterative theory-based and grounded approach. A conceptual model of food parenting specific to child snacking was developed to summarize the findings and inform future research. Caregivers' descriptions of food parenting practices specific to child snacking were consistent with previous models of food parenting developed based on expert opinion [1, 2]. A few noteworthy differences however emerged. More than half of participants mentioned permissive feeding approaches (e.g., my child is the boss when it comes to snacks). As a result, permissive feeding was included as a higher order feeding dimension in the resulting model. In addition, a number of novel feeding approaches specific to child snacking emerged including child-centered provision of snacks (i.e., responding to a child's hunger cues when making decisions about snacks), parent unilateral decision making (i.e., making decisions about a child's snacks without any input from the child), and excessive monitoring of snacks (i.e., monitoring all snacks provided to and consumed by the child). The resulting conceptual model includes four higher order feeding dimensions including autonomy support, coercive control, structure and permissiveness and 20 sub-dimensions. This study formulates a language around food parenting practices specific to child snacking

  9. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    PubMed Central

    Rönnberg, Jerker; Lunner, Thomas; Zekveld, Adriana; Sörqvist, Patrik; Danielsson, Henrik; Lyxell, Björn; Dahlström, Örjan; Signoret, Carine; Stenfelt, Stefan; Pichora-Fuller, M. Kathleen; Rudner, Mary

    2013-01-01

    Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made. PMID:23874273

  10. "The liability of newness" revisited: Theoretical restatement and empirical testing in emergent organizations.

    PubMed

    Yang, Tiantian; Aldrich, Howard E

    2017-03-01

    The mismatch between Stinchcombe's original propositions regarding "the liability of newness" and subsequent attempts to test those propositions suggests to us that the form and causes of the liability remain open to further investigation. Taking organizational emergence as a process comprising entrepreneurs engaging in actions that produce outcomes, we propose hypotheses about the social mechanisms of organizational construction involved in investing resources, developing routines, and maintaining boundaries. Distinguishing between initial founding conditions versus subsequent activities, our results not only confirm the liability of newness hypothesis, but also reveal a much higher risk of failure in organizations' early lifetime than rates found in previous research. Moreover, our results highlight the importance of entrepreneurs' continuing effort after their initial organizing attempts. Whereas only a few initial founding conditions lower the risk of failure, subsequent entrepreneurial activities play a major role in keeping the venture alive. Entrepreneurs contribute to whether a venture survives through raising more resources, enacting routines, and gaining increased public recognition of organizational boundaries. After controlling for financial performance, our results still hold. Based on our analysis, we offer suggestions for theory and research on organizations and entrepreneurship.

  11. Theoretical and empirical low perigee aerodynamic heating during orbital flight of an atmosphere explorer

    NASA Technical Reports Server (NTRS)

    Caruso, P. S., Jr.; Naegeli, C. R.

    1976-01-01

    This document presents the results of an extensive, low perigee, orbital aerodynamic heating study undertaken in support of the Atmosphere Explorer-C Temperature Alarm. Based upon in-flight orbital temperature data from the Temperature Alarm tungsten resistance wire thermometer, aerodynamic heating rates have been determined for eight selected orbits by means of a reduced thermal analytical model verified by both ground test and flight data. These heating rates are compared with the classical free molecular and first order collision regime values. It has been concluded that, for engineering purposes, the aerodynamic heating rate of atmospheric gases at perigee altitudes between 170 and 135 km on pure tungsten wire is 30 to 60% of the value set by the classical free molecular limit. Relative to the more usual orbital thermal input attributable to direct solar radiation, the aerodynamic heating rate at the lowest altitude attempted with the spacecraft despun (135 km) is the equivalent of about 1.2 solar constants incident on a tungsten wire with a solar absorptivity of 0.85.
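
    For reference, the classical free-molecular limit against which these heating rates are compared can be written, under the usual flat-plate, high-speed-ratio assumptions (a textbook reconstruction; the report's exact expression is not quoted in the abstract), as

        q_{\mathrm{fm}} \approx \tfrac{1}{2}\,\alpha\,\rho\,v^{3}

    where \rho is the ambient density, v the orbital speed, and \alpha the energy accommodation coefficient; the empirical rates above then correspond to 0.3 to 0.6 of q_{\mathrm{fm}}.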

  12. Periodic limb movements of sleep: empirical and theoretical evidence supporting objective at-home monitoring

    PubMed Central

    Moro, Marilyn; Goparaju, Balaji; Castillo, Jelina; Alameddine, Yvonne; Bianchi, Matt T

    2016-01-01

    Introduction Periodic limb movements of sleep (PLMS) may increase cardiovascular and cerebrovascular morbidity. However, most people with PLMS are either asymptomatic or have nonspecific symptoms. Therefore, predicting elevated PLMS in the absence of restless legs syndrome remains an important clinical challenge. Methods We undertook a retrospective analysis of demographic data, subjective symptoms, and objective polysomnography (PSG) findings in a clinical cohort with or without obstructive sleep apnea (OSA) from our laboratory (n=443 with OSA, n=209 without OSA). Correlation analysis and regression modeling were performed to determine predictors of periodic limb movement index (PLMI). Markov decision analysis with TreeAge software compared strategies to detect PLMS: in-laboratory PSG, at-home testing, and a clinical prediction tool based on the regression analysis. Results Elevated PLMI values (>15 per hour) were observed in >25% of patients. PLMI values in No-OSA patients correlated with age, sex, self-reported nocturnal leg jerks, restless legs syndrome symptoms, and hypertension. In OSA patients, PLMI correlated only with age and self-reported psychiatric medications. Regression models indicated only a modest predictive value of demographics, symptoms, and clinical history. Decision modeling suggests that at-home testing is favored as the pretest probability of PLMS increases, given plausible assumptions regarding PLMS morbidity, costs, and assumed benefits of pharmacological therapy. Conclusion Although elevated PLMI values were commonly observed, routinely acquired clinical information had only weak predictive utility. As the clinical importance of elevated PLMI continues to evolve, it is likely that objective measures such as PSG or at-home PLMS monitors will prove increasingly important for clinical and research endeavors. PMID:27540316

  13. Theoretical and Empirical Study of Visual Cortex Using the BCM Neural Network Model

    NASA Astrophysics Data System (ADS)

    Clothiaux, Eugene Edmund

    1990-01-01

    Most neurons in kitten visual cortex respond to specific visual stimuli that are presented to either one or both eyes. These selective response properties of neurons in kitten visual cortex depend on the visual environment that is experienced during a critical period of postnatal development. For example, closing one eye (monocular deprivation) leads to the loss of that eye's ability to elicit a cortical response and any manipulation that causes the two eyes to become misaligned produces abnormal neuronal responses as well. A number of proposed learning rules attempt to account for these observations. The learning rule considered in this thesis is the cortical synaptic plasticity theory of Bienenstock, Cooper and Munro (1982), which uses a sliding modification threshold based on average cell activity to determine the changes made to the synaptic weights. A detailed investigation of the BCM model is undertaken with special emphasis on finding parameter sets that produce rates of change of the BCM model synapses consistent with experiment. Computer simulations using the developed parameter sets indicate that the BCM theory does indeed capture the essential features of a broad range of experimental results. The BCM theory also makes one clear prediction: during monocular deprivation the closed eye's loss of response occurs because of the highly specific response properties of the open eye. An analysis of an experiment designed to test this prediction of the theory is described in detail. The results of the analysis do indicate a slight correlation between open eye selectivity and closed eye responsiveness; however, the results are not highly statistically significant.
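
    The sliding-threshold plasticity rule at the heart of this abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis code: the polynomial form phi(y, theta) = y(y - theta), the Poisson input statistics, and all rate constants are assumptions chosen only to expose the mechanism, in which the modification threshold tracks the running average of squared postsynaptic activity.

        import numpy as np

        rng = np.random.default_rng(0)
        n_inputs, eta, tau = 10, 0.001, 100.0   # illustrative learning rate and threshold time constant
        w = rng.uniform(0.1, 0.2, n_inputs)     # synaptic weights
        theta = 1.0                             # sliding modification threshold

        for _ in range(20000):
            x = rng.poisson(2.0, n_inputs).astype(float)   # presynaptic activity pattern
            y = max(w @ x, 0.0)                            # postsynaptic response
            w = np.clip(w + eta * y * (y - theta) * x, 0.0, None)  # potentiate if y > theta, depress otherwise
            theta += (y * y - theta) / tau                 # threshold follows the mean of y squared
        print(w.round(3), round(theta, 2))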

  14. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
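
    The scaling relation sigma(R) ~ <S>^(-beta) reported above is easy to probe numerically. A minimal sketch, assuming synthetic Laplace-distributed growth rates and the wage-like exponent beta = 0.14 purely for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        beta = 0.14                              # illustrative, close to the wage exponent quoted above
        sizes = 10 ** rng.uniform(2, 8, 5000)    # synthetic units spanning six decades in size
        R = rng.laplace(0.0, sizes ** (-beta))   # growth rates whose spread shrinks with size

        bins = np.logspace(2, 8, 13)             # size bins, logarithmically spaced
        idx = np.digitize(sizes, bins)
        s_mean = np.array([sizes[idx == i].mean() for i in range(1, 13)])
        sigma = np.array([R[idx == i].std() for i in range(1, 13)])
        slope, _ = np.polyfit(np.log(s_mean), np.log(sigma), 1)
        print(f"recovered scaling exponent: {-slope:.2f}")   # close to 0.14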

  15. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  16. Utilizing disclosure in the treatment of the sequelae of childhood sexual abuse: a theoretical and empirical review.

    PubMed

    Bradley, R G; Follingstad, D R

    2001-02-01

    Although disclosure is a component of many therapeutic approaches to treating the long-term symptoms associated with child sexual abuse (CSA), the ameliorative mechanisms of this approach are still unclear. This review investigates the expected benefits of disclosure in therapy by looking at the theoretical and empirical support for its effectiveness in treating the specific psychopathological sequelae associated with a history of CSA. In order to accomplish this task, a core group of sequelae associated with sexual abuse are presented. The components of disclosure as a therapeutic process are divided into three processes: disclosure-through-description, disclosure-through-rethinking, and disclosure-in-relationship. The review describes the ways in which these elements of disclosure are used within different therapeutic approaches. The treatment outcome literature is then reviewed in terms of the elements of disclosure included in the treatment approaches and the symptoms improved by treatment. In conclusion, implications are presented concerning the appropriate uses of disclosure in psychotherapy directed at alleviating the long-term sequelae associated with a history of CSA.

  17. The Social Implications of Sexual Identity Formation and the Coming-Out Process: A Review of the Theoretical and Empirical Literature.

    ERIC Educational Resources Information Center

    Mosher, Chad M.

    2001-01-01

    Examines current research on publicly communicating one's sexual orientation and sexual identity formation models within the two prevalent theoretical orientations: essentialism and social constructionism. Aspects of both theories find support in the empirical literature reviewed. The discovery process is discussed and three coming-out audiences…

  18. Video watermarking with empirical PCA-based decoding.

    PubMed

    Khalilian, Hanieh; Bajic, Ivan V

    2013-12-01

    A new method for video watermarking is presented in this paper. In the proposed method, data are embedded in the LL subband of wavelet coefficients, and decoding is performed based on the comparison among the elements of the first principal component resulting from empirical principal component analysis (PCA). The locations for data embedding are selected such that they offer the most robust PCA-based decoding. Data are inserted in the LL subband in an adaptive manner based on the energy of high frequency subbands and visual saliency. Extensive testing was performed under various types of attacks, such as spatial attacks (uniform and Gaussian noise and median filtering), compression attacks (MPEG-2, H.263, and H.264), and temporal attacks (frame repetition, frame averaging, frame swapping, and frame rate conversion). The results show that the proposed method offers improved performance compared with several methods from the literature, especially under additive noise and compression attacks.

  19. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  20. Examination of the hierarchical structure of the brief COPE in a French sample: empirical and theoretical convergences.

    PubMed

    Doron, Julie; Trouillet, Raphaël; Gana, Kamel; Boiché, Julie; Neveu, Dorine; Ninot, Grégory

    2014-01-01

    This study aimed to determine whether the various factors of coping as measured by the Brief COPE could be integrated into a more parsimonious hierarchical structure. To identify a higher structure for the Brief COPE, several measurement models based on prior theoretical and hierarchical conceptions of coping were tested. First, confirmatory factor analysis (CFA) results revealed that the Brief COPE's 14 original factors could be represented more parsimoniously with 5 higher order dimensions: problem-solving, support-seeking, avoidance, cognitive restructuring, and distraction (N = 2,187). Measurement invariance across gender was also shown. Second, results provided strong support for the cross-validation and the concurrent validity of the hierarchical structure of the Brief COPE (N = 584). Results indicated statistically significant correlations between Brief COPE factors and trait anxiety and perceived stress. Limitations and theoretical and methodological implications of these results are discussed.

  1. Empirically based device modeling of bulk heterojunction organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia

    2013-10-01

    An empirically based, open source, optoelectronic model is constructed to accurately simulate organic photovoltaic (OPV) devices. Bulk heterojunction OPV devices based on a new low band gap dithienothiophene-diketopyrrolopyrrole donor polymer (P(TBT-DPP)) are blended with PC70BM and processed under various conditions, with efficiencies up to 4.7%. The mobilities of electrons and holes, bimolecular recombination coefficients, exciton quenching efficiencies in donor and acceptor domains and optical constants of these devices are measured and input into the simulator to yield photocurrent with less than 7% error. The results from this model not only show carrier activity in the active layer but also elucidate new routes of device optimization by varying donor-acceptor composition as a function of position. Sets of high and low performance devices are investigated and compared side-by-side.

  2. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
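
    The reduction step described above, collapsing measured torque-position-velocity data into second-degree polynomial equations by least squares, can be illustrated as follows. The regression layout and all numbers are assumptions made for the sketch, not the published model:

        import numpy as np

        rng = np.random.default_rng(2)
        pos = rng.uniform(-1.0, 1.0, 200)    # joint position, rad
        vel = rng.uniform(-3.0, 3.0, 200)    # joint velocity, rad/s
        torque = 40 - 8 * vel + 0.5 * vel**2 - 5 * pos**2 + rng.normal(0, 1, 200)  # synthetic maxima

        # least-squares fit of a second-degree polynomial in position and velocity
        A = np.column_stack([np.ones_like(pos), pos, pos**2, vel, vel**2])
        coef, *_ = np.linalg.lstsq(A, torque, rcond=None)

        def predicted_torque(p, v):
            """Torque looked up from the fitted polynomial table for one joint axis."""
            return coef @ np.array([1.0, p, p**2, v, v**2])

        print(round(predicted_torque(0.2, 1.5), 1))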

  3. Empirically and theoretically determined spatial and temporal variability of the Late Holocene sea level in the South-Central Pacific (Invited)

    NASA Astrophysics Data System (ADS)

    Eisenhauer, A.; Rashid, R. J.; Hallmann, N.; Stocchi, P.; Fietzke, J.; Camoin, G.; Vella, C.; Samankassou, E.

    2013-12-01

    We present U/Th-dated fossil corals collected from reef platforms on three islands (Moorea, Huahine and Bora Bora) of the Society Islands, French Polynesia. In particular, U/Th-dated fossil microatolls precisely constrain the timing and amplitude of sea-level variations at and after the 'Holocene Sea Level Maximum' (HSLM), because microatolls grow close to or even directly at the current sea-level position. We found that sea level reached a subsidence-corrected position of at least ~1.5 m above present sea level (apsl) at ~5.4 ka before present (BP) relative to Huahine and a maximum amplitude of at least ~2.0 m apsl at ~2.0 ka BP relative to Moorea. Between 5.4 and 2 ka BP, minimum sea level oscillated between 1.5 and 2 m apsl for ~3 ka and then declined to the present position after ~2 ka BP. Based on statistical arguments on the coral age distribution, the HSLM is constrained to an interval of 3.5±0.8 ka. Former studies, in general accord with our data, show that sea level in French Polynesia was ~1 m higher than present between 5,000 and 1,250 yrs BP, that a highstand was reached between 2,000 and 1,500 yrs BP (Pirazzoli and Montaggioni, 1988), and that it persisted until 1,200 yrs BP in the Tuamotu Archipelago (Pirazzoli and Montaggioni, 1986). Modeling of the Late Holocene sea-level rise performed during the course of this study, taking glacio-isostatic adjustment and the ocean syphoning effect into account, predicts a Late Holocene sea-level highstand of ~1 m apsl at ~4 ka BP for Bora Bora, which is in general agreement with the statistical interpretation of our empirical data. However, the modeled HSLM amplitude of ~1 m apsl is considerably smaller than the empirical amplitudes of more than 2 m. Furthermore, the theoretical model predicts a continuously falling sea level after ~4 ka to the present. This is in contrast to the empirical data, which indicate a sea level remaining above at least ~1 m apsl between 5 ka and 2 ka then followed by a certain

  4. Theoretical Perspectives of Adherence to Web-Based Interventions: a Scoping Review.

    PubMed

    Ryan, Cathal; Bergin, Michael; Wells, John Sg

    2017-07-20

    The purpose of this paper is to review the literature as this relates to theoretical perspectives of adherence to web-based interventions, drawing upon empirical evidence from the fields of psychology, business, information technology and health care. A scoping review of the literature utilising principles outlined by Arksey and O'Malley was undertaken. Several relevant theoretical perspectives have emerged, eight of which are charted and discussed in this review. These are the Internet Intervention Model, Persuasive Systems Design, the 'PERMA' framework, the Support Accountability Model, the Model of User Engagement, the Technology Acceptance Model, the Unified Theory of Acceptance and Use of IT and the Conceptual Model of User Engagement. The findings of the review indicate that an interdisciplinary approach, incorporating a range of technological, environmental and individual factors, may be needed in order to comprehensively explain user adherence to web-based interventions.

  5. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies

    PubMed Central

    2016-01-01

    Background Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers’ activities and their contextual components or constructs to pursue these health related goals. Establishing such framework is important because it is the first step to operationalize health SQ fully. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers’ health systematically. Objective The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? Methods A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. Results The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and

  6. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies.

    PubMed

    Almalki, Manal; Gray, Kathleen; Martin-Sanchez, Fernando

    2016-05-27

    Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers' activities and their contextual components or constructs to pursue these health related goals. Establishing such framework is important because it is the first step to operationalize health SQ fully. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers' health systematically. The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and constructs of health SQ activity. A

  7. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    ERIC Educational Resources Information Center

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  9. Empirical Likelihood-Based ANOVA for Trimmed Means

    PubMed Central

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063
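
    For readers who want to try the recommended trimmed-mean comparison, a minimal two-group sketch follows. It does not implement the paper's EL ANOVA; it relies on SciPy, whose trim argument to ttest_ind performs Yuen's trimmed-mean test (SciPy 1.7 or later assumed):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        g1 = rng.lognormal(0.0, 0.8, 60)   # two skewed samples
        g2 = rng.lognormal(0.3, 1.2, 60)   # with unequal spread

        print(stats.trim_mean(g1, 0.2), stats.trim_mean(g2, 0.2))  # 20% trimmed means
        t, p = stats.ttest_ind(g1, g2, equal_var=False, trim=0.2)  # Yuen's test
        print(f"Yuen t = {t:.2f}, p = {p:.3f}")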

  10. Methods for combining a theoretical and an empirical approach in modelling pressure and flow control valves for CAE-programs for fluid power circuits

    NASA Astrophysics Data System (ADS)

    Handroos, Heikki

    An analytical mathematical model for a fluid power valve uses equations based on physical laws. The parameters consist of physical coefficients, dimensions of the internal elements, spring constants, etc., which are not provided by the component manufacturers; the valve has to be dismantled in order to determine their values. The model is only in accordance with a particular type of valve construction, and there are a large number of parameters. This is a major common problem in computer aided engineering (CAE) programs for fluid power circuits. Methods for solving this problem by combining a theoretical and an empirical approach are presented. Analytical models for single stage pressure and flow control valves are brought into forms which contain fewer parameters, whose values can be determined from measured characteristic curves. The least squares criterion is employed to identify the parameter values describing the steady state of a valve. The steady state characteristic curves required as data for this identification are quite often provided by the manufacturers. The parameters describing the dynamics of a valve are determined using a simple noncomputational method based on dynamic characteristic curves that can be easily measured. The importance of the identification accuracy of the different parameters of the single stage pressure relief valve model is compared using a parameter sensitivity analysis method. A new comparison method, called the relative mean value criterion, is used to compare the influences of variations in the different parameters on a nominal dynamic response.
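
    As a toy illustration of this identification idea, the few parameters of a reduced steady-state valve model can be fitted to a digitized characteristic curve by least squares. The model form and the data points below are assumptions for the sketch, not Handroos's equations:

        import numpy as np
        from scipy.optimize import curve_fit

        def valve_flow(p, K, p_set):
            """Reduced relief-valve model: flow rises linearly once pressure exceeds p_set."""
            return K * np.clip(p - p_set, 0.0, None)

        p_meas = np.array([10, 12, 14, 16, 18, 20.0])      # bar, from a characteristic curve
        q_meas = np.array([0.0, 0.1, 2.1, 4.0, 5.9, 8.1])  # L/min (illustrative readings)

        (K, p_set), _ = curve_fit(valve_flow, p_meas, q_meas, p0=[1.0, 11.0])
        print(f"gain K = {K:.2f} L/min/bar, cracking pressure = {p_set:.2f} bar")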

  11. A patient-centered pharmacy services model of HIV patient care in community pharmacy settings: a theoretical and empirical framework.

    PubMed

    Kibicho, Jennifer; Owczarzak, Jill

    2012-01-01

    Reflecting trends in health care delivery, pharmacy practice has shifted from a drug-specific to a patient-centered model of care, aimed at improving the quality of patient care and reducing health care costs. In this article, we outline a theoretical model of patient-centered pharmacy services (PCPS), based on in-depth, qualitative interviews with a purposive sample of 28 pharmacists providing care to HIV-infected patients in specialty, semispecialty, and nonspecialty pharmacy settings. Data analysis was an interactive process informed by pharmacists' interviews and a review of the general literature on patient-centered care, including Medication Therapy Management (MTM) services. Our main finding was that the current models of pharmacy services, including MTM, do not capture the range of pharmacy services in excess of mandated drug dispensing services. In this article, we propose a theoretical PCPS model that reflects the actual services pharmacists provide. The model includes five elements: (1) addressing patients as whole, contextualized persons; (2) customizing interventions to unique patient circumstances; (3) empowering patients to take responsibility for their own health care; (4) collaborating with clinical and nonclinical providers to address patient needs; and (5) developing sustained relationships with patients. The overarching goal of PCPS is to empower patients to take responsibility for their own health care and self-manage their HIV infection. Our findings provide the foundation for future studies regarding how widespread these practices are in diverse community settings, the validity of the proposed PCPS model, the potential for standardizing pharmacist practices, and the feasibility of a PCPS framework to reimburse pharmacists' services.

  12. Modeling of 2LiBH₄ + MgH₂ Hydrogen Storage System Accident Scenarios Using Empirical and Theoretical Thermodynamics

    SciTech Connect

    James, C.; Tamburello, D.; Gray, J.; Brinkman, K.; Hardy, B.; Anton, D.

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed phase hydrogen storage materials under differing environmental exposure scenarios. This paper describes a modeling and experimental study with the aim of predicting consequences of the accidental release of 2LiBH₄+MgH₂ from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released, the energy release rates and the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH₄ and MgH₂, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell using neutral water. Water vapor and oxygen gas phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN)-based test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems. Thermodynamic parameters obtained from these tests were then input into a computational fluid dynamics model to

  13. Empirically based device modeling of bulk heterojunction organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia

    2013-04-01

    We develop an empirically based optoelectronic model to accurately simulate the photocurrent in organic photovoltaic (OPV) devices with novel materials, including bulk heterojunction OPV devices based on a new low band gap dithienothiophene-DPP donor polymer, P(TBT-DPP), blended with PC70BM at various donor-acceptor weight ratios and solvent compositions. Our devices exhibit power conversion efficiencies ranging from 1.8% to 4.7% at AM 1.5G. Electron and hole mobilities are determined using space-charge limited current measurements. Bimolecular recombination coefficients are both analytically calculated using slowest-carrier limited Langevin recombination and measured using an electro-optical pump-probe technique. Exciton quenching efficiencies in the donor and acceptor domains are determined from photoluminescence spectroscopy. In addition, dielectric and optical constants are experimentally determined. The photocurrent and its bias-dependence that we simulate using this optoelectronic model, which takes into account these physically measured parameters, show less than 7% error with respect to the experimental photocurrent (when either the experimentally or the semi-analytically determined recombination coefficient is used). Free carrier generation and recombination rates of the photocurrent are modeled as a function of the position in the active layer at various applied biases. These results show that while free carrier generation is maximized in the center of the device, free carrier recombination is most dominant near the electrodes even in high performance devices. Such knowledge of carrier activity is essential for the optimization of the active layer by enhancing light trapping and minimizing recombination. Our simulation program is intended to be freely distributed for use in laboratories fabricating OPV devices.
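
    The slowest-carrier-limited Langevin coefficient mentioned above follows directly from the measured mobilities. A sketch with illustrative parameter values (the actual device values are reported in the paper, not reproduced here):

        from scipy.constants import e, epsilon_0

        eps_r = 3.5               # relative permittivity of the blend (assumed)
        mu_e, mu_h = 1e-8, 5e-9   # electron/hole mobilities in m^2/(V s) (assumed SCLC values)

        gamma_langevin = e * (mu_e + mu_h) / (eps_r * epsilon_0)   # classical Langevin coefficient
        gamma_slowest = e * min(mu_e, mu_h) / (eps_r * epsilon_0)  # slowest-carrier-limited variant
        print(f"{gamma_langevin:.2e} m^3/s, {gamma_slowest:.2e} m^3/s")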

  14. Phospholipid-based nonlamellar mesophases for delivery systems: bridging the gap between empirical and rational design.

    PubMed

    Martiel, Isabelle; Sagalowicz, Laurent; Mezzenga, Raffaele

    2014-07-01

    Phospholipids are ubiquitous cell membrane components and relatively well-accepted ingredients due to their natural origin. Phosphatidylcholine (PC) in particular offers a promising alternative to monoglycerides for lyotropic liquid crystalline (LLC) delivery system applications in the food, cosmetics and pharmaceutical industries, provided its strong tendency to form zero-mean curvature lamellar mesophases in water can be overcome. Higher negative curvatures are usually reached through the addition of a third lipid component, forming a ternary diagram phospholipid/water/oil. The initial part of this work summarizes the potential advantages and the challenges of phospholipid-based delivery system applications. In the next part, various ternary PC/water/oil systems are discussed, with a special emphasis on the PC/water/cyclohexane and PC/water/α-tocopherol systems. We report that R-(+)-limonene has a quantitatively similar effect as cyclohexane. The last part is devoted to the theoretical interpretation of the observed phase behaviors. A fruitful parallel is drawn with PC polymer-like reverse micelles, leading to a thermodynamic description in terms of interfacial bending energy. Investigations at the molecular level are reviewed to help in bridging the empirical and theoretical approaches. Predictive rules are finally derived from this wide-ranging overview, thereby opening the way to a future rational design of PC-based LLC delivery systems.

  15. PDE-based nonlinear diffusion techniques for denoising scientific and industrial images: an empirical study

    NASA Astrophysics Data System (ADS)

    Weeratunga, Sisira K.; Kamath, Chandrika

    2002-05-01

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, we focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. We complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. We explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. We also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. Our empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
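
    A minimal Perona-Malik-style isotropic diffusion filter of the kind evaluated in the study might look as follows. The exponential diffusivity, the explicit time stepping, and the periodic borders are simplifying assumptions for the sketch:

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
            """Nonlinear diffusion: smooth within regions while preserving strong edges."""
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity, small across strong edges
            for _ in range(n_iter):
                dn = np.roll(u, 1, axis=0) - u        # differences to the four neighbors
                ds = np.roll(u, -1, axis=0) - u       # (np.roll wraps at the borders)
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u

        noisy = np.random.default_rng(4).normal(0, 20, (64, 64)) + 100
        print(noisy.std(), perona_malik(noisy).std())  # spread shrinks as noise diffuses away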

  16. Meaningful learning: theoretical support for concept-based teaching.

    PubMed

    Getha-Eby, Teresa J; Beery, Theresa; Xu, Yin; O'Brien, Beth A

    2014-09-01

    Novice nurses’ inability to transfer classroom knowledge to the bedside has been implicated in adverse patient outcomes, including death. Concept-based teaching is a pedagogy found to improve knowledge transfer. Concept-based teaching emanates from a constructivist paradigm of teaching and learning and can be implemented most effectively when the underlying theory and principles are applied. Ausubel’s theory of meaningful learning and its construct of substantive knowledge integration provides a model to help educators to understand, implement, and evaluate concept-based teaching. Contemporary findings from the fields of cognitive psychology, human development, and neurobiology provide empirical evidence of the relationship between concept-based teaching, meaningful learning, and knowledge transfer. This article describes constructivist principles and meaningful learning as they apply to nursing pedagogy.

  17. Empirical estimates and theoretical predictions of the shorting factor for the THEMIS double-probe electric field instrument

    NASA Astrophysics Data System (ADS)

    Califf, S.; Cully, C. M.

    2016-07-01

    Double-probe electric field measurements on board spacecraft present significant technical challenges, especially in the inner magnetosphere where the ambient plasma characteristics can vary dramatically and alter the behavior of the instrument. We explore the shorting factor for the Time History of Events and Macroscale Interactions during Substorms (THEMIS) electric field instrument, which is a scale factor error on the measured electric field due to coupling between the sensing spheres and the long wire booms, using both an empirical technique and simulations with varying levels of fidelity. The empirical data and simulations both show that there is effectively no shorting when the spacecraft is immersed in high-density plasma deep within the plasmasphere and that shorting becomes more prominent as plasma density decreases and the Debye length increases outside the plasmasphere. However, there is a significant discrepancy between the data and theory for the shorting factor in low-density plasmas: the empirical estimate indicates a shorting factor of ~0.7 for long Debye lengths, but the simulations predict ~0.94. This paper systematically steps through the empirical and modeling methods leading to the disagreement, with the intention of motivating further study on the topic.

  18. Segmented Labor Markets: A Review of the Theoretical and Empirical Literature and Its Implication for Educational Planning.

    ERIC Educational Resources Information Center

    Carnoy, Martin

    The study reviews orthodox theories of labor markets, presents new formulations of segmentation theory, and provides empirical tests of segmentation in the United States and several developing nations. Orthodox labor market theory views labor as being paid for its contribution to production and that investment in education and vocational training…

  19. Fleet Fatality Risk and its Sensitivity to Vehicle Mass Change in Frontal Vehicle-to-Vehicle Crashes, Using a Combined Empirical and Theoretical Model.

    PubMed

    Shi, Yibing; Nusholtz, Guy S

    2015-11-01

    The objective of this study is to analytically model the fatality risk in frontal vehicle-to-vehicle crashes of the current vehicle fleet, and its sensitivity to vehicle mass change. A model is built upon an empirical risk ratio-mass ratio relationship from field data and a theoretical mass ratio-velocity change ratio relationship dictated by conservation of momentum. The fatality risk of each vehicle is averaged over the closing velocity distribution to arrive at the mean fatality risks. The risks of the two vehicles are summed and averaged over all possible crash partners to find the societal mean fatality risk associated with a subject vehicle of a given mass from a fleet specified by a mass distribution function. Based on risk exponent and mass distribution from a recent fleet, the subject vehicle mean fatality risk is shown to increase, while at the same time that for the partner vehicles decreases, as the mass of the subject vehicle decreases. The societal mean fatality risk, the sum of these, incurs a penalty with respect to a fleet with complete mass equality. This penalty reaches its minimum (~8% for the example fleet) for crashes with a subject vehicle whose mass is close to the fleet mean mass. The sensitivity, i.e., the rate of change of the societal mean fatality risk with respect to the mass of the subject vehicle is assessed. Results from two sets of fully regression-based analyses, Kahane (2012) and Van Auken and Zellner (2013), are approximately compared with the current result. The general magnitudes of the results are comparable, but differences exist at a more detailed level. The subject vehicle-oriented societal mean fatality risk is averaged over all possible subject vehicle masses of a given fleet to obtain the overall mean fatality risk of the fleet. It is found to increase approximately linearly at a rate of about 0.8% for each 100 lb decrease in mass of all vehicles in the fleet.
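
    The theoretical half of such a model follows from conservation of momentum. Assuming a perfectly plastic frontal collision in which both vehicles reach a common final velocity (a standard idealization; the paper's exact formulation is not reproduced in the abstract), the velocity changes obey

        \frac{\Delta v_1}{\Delta v_2} = \frac{m_2}{m_1},
        \qquad
        \Delta v_1 = \frac{m_2}{m_1 + m_2}\, v_c,

    where v_c is the closing speed, so the lighter vehicle always sustains the larger velocity change.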

  20. Terahertz Spectrum Analysis Based on Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Su, Yunpeng; Zheng, Xiaoping; Deng, Xiaojiao

    2017-08-01

    Precise identification of terahertz absorption peaks for materials with low concentration and high attenuation still remains a challenge. Empirical mode decomposition was applied to terahertz spectrum analysis in order to improve the performance of spectral fingerprint identification. We conducted experiments on water vapor and carbon monoxide, respectively, with terahertz time-domain spectroscopy. By comparing their absorption spectra before and after empirical mode decomposition, we demonstrated that the first-order intrinsic mode function shows absorption peaks clearly in the high-frequency range. By comparing the frequency spectra of the sample signals and their intrinsic mode functions, we proved that the first-order function contains most of the original signal's energy and frequency information, so that it cannot be left out or replaced by higher-order functions in spectral fingerprint detection. Empirical mode decomposition not only acts as an effective supplementary means to terahertz time-domain spectroscopy but also shows great potential in discrimination of materials and prediction of their concentrations.
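
    A minimal numerical sketch of this use of EMD, with a synthetic two-component trace standing in for a terahertz time-domain signal and the third-party PyEMD package (distributed on PyPI as EMD-signal) assumed to be installed:

        import numpy as np
        from PyEMD import EMD   # assumes the PyEMD ("EMD-signal") package

        t = np.linspace(0, 1, 2000)
        # toy stand-in for a THz trace: fast and slow oscillations plus a drift
        signal = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 15 * t) + 0.3 * t

        imfs = EMD().emd(signal)   # intrinsic mode functions, highest frequency first
        spectrum = np.abs(np.fft.rfft(imfs[0]))
        freqs = np.fft.rfftfreq(imfs[0].size, d=t[1] - t[0])
        print(freqs[spectrum.argmax()])   # ~120 Hz: the fast component survives in IMF 1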

  1. Assessment of Young Children Using the Achenbach System of Empirically Based Assessment (ASEBA)

    ERIC Educational Resources Information Center

    Rescorla, Leslie A.

    2005-01-01

    After providing a brief review of three other approaches to assessment of preschool children (DSM-IV diagnoses, "Zero to Three" diagnoses, and temperament scales), this paper focuses on the Achenbach System of Empirically Based Assessment (ASEBA). The empirically based assessment paradigm provides user-friendly, cost-effective, reliable,…

  2. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcomed broadening of the discipline, with increased integration of social and life scientists into the field and ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically-adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically-derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts unchecked. Summary The "empirical turn" in bioethics signals a need for

  3. On a goodness-of-fit between theoretical hypsometric curve and its empirical equivalents derived for various depth bins from 30 arc-second GEBCO bathymetry

    NASA Astrophysics Data System (ADS)

    Włosińska, M.; Niedzielski, T.; Priede, I. G.; Migoń, P.

    2012-04-01

    The poster reports ongoing investigations into hypsometric curve modelling and its implications for sea level change. Numerous large-scale geodynamic phenomena, including global tectonics and the related sea level changes, are well described by a hypsometric curve that quantifies how the area of sea floor varies with depth. Although the notion of a hypsometric curve is rather simple, it is difficult to provide a reasonable theoretical model that fits an empirical curve. An analytical equation for a hypsometric curve is well known, but its goodness-of-fit to an empirical one is far from perfect. Such limited accuracy may result either from not entirely adequate theoretical assumptions and concepts behind the theoretical hypsometric curve or from rather poorly modelled global bathymetry. Recent progress in obtaining accurate data on sea floor topography is due to subsea surveying and remote sensing. There are bathymetric datasets, including the Global Bathymetric Charts of the Oceans (GEBCO), that provide a global framework for hypsometric curve computation. The recent GEBCO bathymetry - a gridded dataset consisting of a sea floor topography raster with global coverage at a spatial resolution of 30 arc-seconds - can be analysed to verify the depth-area relationship and to re-evaluate classical models for sea level change in geological time. Processing of the geospatial data is feasible on the basis of modern powerful tools provided by Geographic Information Systems (GIS) and automated with Python, the programming language that allows the user to utilise the GIS geoprocessor.
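
    Computing an empirical hypsometric (depth-area) curve from a bathymetry grid reduces to sorting and counting. A self-contained sketch with a synthetic grid standing in for the GEBCO raster (a real GEBCO grid would be read from its netCDF file and each cell weighted by the cosine of its latitude):

        import numpy as np

        rng = np.random.default_rng(5)
        depth = rng.normal(-3800, 1500, (1000, 1000))   # metres, negative below sea level
        ocean = np.sort(depth[depth < 0])               # ascending: deepest cells first

        # empirical hypsometric curve: fraction of sea floor deeper than depth z
        for z in (-1000, -3000, -5000, -7000):
            frac = np.searchsorted(ocean, z) / ocean.size
            print(f"deeper than {-z} m: {frac:.1%}")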

  4. E-learning in engineering education: a theoretical and empirical study of the Algerian higher education institution

    NASA Astrophysics Data System (ADS)

    Benchicou, Soraya; Aichouni, Mohamed; Nehari, Driss

    2010-06-01

    Technology-mediated education or e-learning is growing globally both in scale and delivery capacity due to the large diffusion of the ubiquitous information and communication technologies (ICT) in general and the web technologies in particular. This statement has not yet been fully supported by research, especially in developing countries such as Algeria. The purpose of this paper was to identify directions for addressing the needs of academics in higher education institutions in Algeria in order to adopt the e-learning approach as a strategy to improve quality of education. The paper reports results of an empirical study that measures the readiness of the Algerian higher education institutions towards the implementation of ICT in the educational process and the attitudes of faculty members towards the application of the e-learning approach in engineering education. Three main objectives were targeted, namely: (a) to provide an initial evaluation of faculty members' attitudes and perceptions towards web-based education; (b) to report on their perceived requirements for implementing e-learning in university courses; (c) to provide an initial input for a collaborative process of developing an institutional strategy for e-learning. Statistical analysis of the survey results indicates that the Algerian higher education institution, which adopted the Licence-Master-Doctorate (LMD) educational system, is facing a big challenge to take advantage of emerging technological innovations and the advent of e-learning to further develop its teaching programmes and to enhance the quality of education in engineering fields. The successful implementation of this modern approach is shown to depend largely on a set of critical success factors that would include: 1. The extent to which the institution will adopt a formal and official e-learning strategy. 2. The extent to which faculty members will adhere to and adopt this strategy and develop ownership of the various measures in the

  5. Theoretical bases for conducting certain technological processes in space

    NASA Technical Reports Server (NTRS)

    Okhotin, A. S.

    1979-01-01

    Dimensionless conservation equations are presented, together with the theoretical bases of fluid behavior aboard orbiting satellites, with application to the processes of manufacturing crystals in weightlessness. The small amount of gravitational acceleration is shown to increase the separation of bands of varying concentration. Natural convection is shown to have no practical effect on crystallization from a liquid melt. Barodiffusion is also negligibly small in realistic conditions of weightlessness. The effects of surface tension become increasingly large, and suggestions are made for further research.

  6. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  7. MAIS: An Empirically-Based Intelligent CBI System.

    ERIC Educational Resources Information Center

    Christensen, Dean L.; Tennyson, Robert D.

    The goal of the programmatic research program for the Minnesota Adaptive Instructional System (MAIS), an intelligent computer-assisted instruction system, is to empirically investigate generalizable instructional variables and conditions that improve learning through the use of adaptive instructional strategies. Research has been initiated in the…

  8. Theoretical and empirical investigations of KCl:Eu²⁺ for nearly water-equivalent radiotherapy dosimetry

    SciTech Connect

    Zheng Yuanshui; Han Zhaohui; Driewer, Joseph P.; Low, Daniel A.; Li, H. Harold

    2010-01-15

    Purpose: The low effective atomic number, reusability, and other computed radiography-related advantages make europium-doped potassium chloride (KCl:Eu²⁺) a promising dosimetry material. The purpose of this study is to model KCl:Eu²⁺ point dosimeters with a Monte Carlo (MC) method and, using this model, to investigate the dose responses of two-dimensional (2D) KCl:Eu²⁺ storage phosphor films (SPFs). Methods: KCl:Eu²⁺ point dosimeters were irradiated using a 6 MV beam at four depths (5-20 cm) for each of five square field sizes (5×5 to 25×25 cm²). The dose measured by KCl:Eu²⁺ was compared to that measured by an ionization chamber to obtain the magnitude of the energy-dependent dose measurement artifact. The measurements were simulated using DOSXYZnrc with phase space files generated by BEAMnrcMP. Simulations were also performed for KCl:Eu²⁺ films with thicknesses ranging from 1 μm to 1 mm. The work function of the prototype KCl:Eu²⁺ material was determined by comparing the sensitivity of a 150 μm thick KCl:Eu²⁺ film to a commercial BaFBr₀.₈₅I₀.₁₅:Eu²⁺-based SPF with a known work function. The work function was then used to estimate the sensitivity of a 1 μm thick KCl:Eu²⁺ film. Results: The simulated dose responses of prototype KCl:Eu²⁺ point dosimeters agree well with measurement data acquired by irradiating the dosimeters in the 6 MV beam with varying field size and depth. Furthermore, simulations with films demonstrate that an ultrathin KCl:Eu²⁺ film with thickness of the order of 1 μm would have nearly water-equivalent dose response. The simulation results can be understood using classic cavity theories. Finally, preliminary experiments and theoretical calculations show that ultrathin KCl:Eu²⁺ film could provide excellent signal in a 1 cGy dose-to-water irradiation. Conclusions: In conclusion, the authors demonstrate that KCl:Eu²⁺-based dosimeters can be

  9. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal.
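
    The abstract does not state the model equations, but the generic dimensionless-number regression such models build on can be sketched: fitting a correlation of the assumed form Sh = a * Re^b * Sc^c to laboratory data by log-linear least squares. All numbers below are illustrative:

        import numpy as np

        Re = np.array([50, 120, 300, 700, 1500, 3000.0])   # Reynolds numbers
        Sc = np.array([600, 600, 800, 800, 1000, 1000.0])  # Schmidt numbers
        noise = np.exp(np.random.default_rng(6).normal(0, 0.03, 6))
        Sh = 0.3 * Re**0.55 * Sc**0.33 * noise             # synthetic Sherwood numbers

        # fit log Sh = log a + b log Re + c log Sc by ordinary least squares
        A = np.column_stack([np.ones(6), np.log(Re), np.log(Sc)])
        (loga, b, c), *_ = np.linalg.lstsq(A, np.log(Sh), rcond=None)
        print(f"Sh = {np.exp(loga):.2f} * Re^{b:.2f} * Sc^{c:.2f}")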

  10. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked. The only reason one can point to is the so-called "rationality" of today's society. Simple "common sense" leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and for applied sciences like engineering; however, it is not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach and cannot possibly be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  11. A theoretical and empirical investigation into the willingness-to-pay function for new innovative drugs by Germany's health technology assessment agency (IQWiG).

    PubMed

    Gandjour, Afschin

    2013-11-01

    Under the recently enacted pharmaceutical price and reimbursement regulation in Germany, new drugs are subject to a rapid assessment to determine whether there is sufficient evidence of added clinical benefit compared with the existing standard of treatment. If such added benefit is confirmed, manufacturers and representatives of the Statutory Health Insurance (SHI) are expected to negotiate an appropriate reimbursement price. If the parties fail to reach an agreement, a final decision on the reimbursement price will be made by an arbitration body. If one of the parties involved so wishes, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, IQWiG) will be commissioned with a formal evaluation of the costs and benefits of the product in question. IQWiG will make a recommendation for a reimbursement price based on the 'efficiency frontier' in a therapeutic area. The purpose of the assessments is to provide support for decision-making bodies that act on behalf of the SHI insurants. To determine the willingness to pay for new drugs, IQWiG uses the following decision rule: the incremental cost-effectiveness ratio of a new drug compared with the next effective intervention should not be higher than that of the next effective intervention compared with its comparator. The purpose of this paper was to investigate the theoretical and empirical relationship between the willingness to pay for drugs and their health benefits. The analysis shows that across disease areas IQWiG has a curvilinear relationship between willingness to pay and health benefits. Future research may address the validity of the willingness-to-pay function from the viewpoint of the individual SHI insurants.
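
    The decision rule quoted above has a compact algebraic form. A hedged rendering (notation mine, not IQWiG's): with C and E denoting the cost and health benefit of the new drug N, the next effective intervention P, and P's own comparator Q, the rule requires

    ```latex
    \[
    \underbrace{\frac{C_N - C_P}{E_N - E_P}}_{\text{ICER of } N \text{ vs. } P}
    \;\le\;
    \underbrace{\frac{C_P - C_Q}{E_P - E_Q}}_{\text{ICER of } P \text{ vs. } Q}
    \]
    ```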

  12. What should we mean by empirical validation in hypnotherapy: evidence-based practice in clinical hypnosis.

    PubMed

    Alladin, Assen; Sabatini, Linda; Amundson, Jon K

    2007-04-01

    This paper briefly surveys the trend of and controversy surrounding empirical validation in psychotherapy. Empirical validation of hypnotherapy has paralleled the practice of validation in psychotherapy and the professionalization of clinical psychology, in general. This evolution in determining what counts as evidence for bona fide clinical practice has gone from theory-driven clinical approaches in the 1960s and 1970s through critical attempts at categorization of empirically supported therapies in the 1990s on to the concept of evidence-based practice in 2006. Implications of this progression in professional psychology are discussed in the light of hypnosis's current quest for validation and empirical accreditation.

  13. Mindfulness-Based Treatment to Prevent Addictive Behavior Relapse: Theoretical Models and Hypothesized Mechanisms of Change

    PubMed Central

    Witkiewitz, Katie; Bowen, Sarah; Harrop, Erin N.; Douglas, Haley; Enkema, Matthew; Sedgwick, Carly

    2017-01-01

    Mindfulness-based treatments are growing in popularity among addiction treatment providers, and several studies suggest the efficacy of incorporating mindfulness practices into the treatment of addiction, including the treatment of substance use disorders and behavioral addictions (i.e., gambling). The current paper provides a review of theoretical models of mindfulness in the treatment of addiction and several hypothesized mechanisms of change. We provide an overview of mindfulness-based relapse prevention (MBRP), including session content, treatment targets, and client feedback from participants who have received MBRP in the context of empirical studies. Future research directions regarding operationalization and measurement, identifying factors that moderate treatment effects, and protocol adaptations for specific populations are discussed. PMID:24611847

  14. Mindfulness-based treatment to prevent addictive behavior relapse: theoretical models and hypothesized mechanisms of change.

    PubMed

    Witkiewitz, Katie; Bowen, Sarah; Harrop, Erin N; Douglas, Haley; Enkema, Matthew; Sedgwick, Carly

    2014-04-01

    Mindfulness-based treatments are growing in popularity among addiction treatment providers, and several studies suggest the efficacy of incorporating mindfulness practices into the treatment of addiction, including the treatment of substance use disorders and behavioral addictions (i.e., gambling). The current paper provides a review of theoretical models of mindfulness in the treatment of addiction and several hypothesized mechanisms of change. We provide an overview of mindfulness-based relapse prevention (MBRP), including session content, treatment targets, and client feedback from participants who have received MBRP in the context of empirical studies. Future research directions regarding operationalization and measurement, identifying factors that moderate treatment effects, and protocol adaptations for specific populations are discussed.

  15. The Importance of Emotion in Theories of Motivation: Empirical, Methodological, and Theoretical Considerations from a Goal Theory Perspective

    ERIC Educational Resources Information Center

    Turner, Julianne C.; Meyer, Debra K.; Schweinle, Amy

    2003-01-01

    Despite its importance to educational psychology, emotion has been mostly ignored by prominent theories of motivation. In this paper, we review theoretical conceptions of the relation between motivation and emotion and discuss the role of emotion in understanding student motivation in classrooms. We demonstrate that emotion is one of the best indicators…

  16. Computer-Assisted Language Intervention Using Fast ForWord: Theoretical and Empirical Considerations for Clinical Decision-Making.

    ERIC Educational Resources Information Center

    Gillam, Ronald B.

    1999-01-01

    This article critiques the theoretical basis of the Fast ForWord program, a computer-assisted language intervention program for children with language-learning impairments. It notes undocumented treatment outcomes and questions the clinical methods associated with the procedures. Fifteen cautionary statements are provided that clinicians may want…

  17. The Non-Neutrality of Technology: A Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies

    ERIC Educational Resources Information Center

    Zhao, Yong; Alvarez-Torres, Maria Jose; Smith, Bryan; Tan, Hueyshan Sophia

    2004-01-01

    Arguing against the common perception of technology as passive, neutral, and universal, this article presents a theoretical analysis of a commonly used and frequently studied technology--Computer Mediated Communication (CMC)--to illustrate how a technology that is often undistinguished in practice and research is indeed active, biased, and…

  18. Comparison of empirical, semi-empirical and physically based models of soil hydraulic functions derived for bi-modal soils

    NASA Astrophysics Data System (ADS)

    Kutílek, M.; Jendele, L.; Krejča, M.

    2009-02-01

    The accelerated flow in soil pores is responsible for a rapid transport of pollutants from the soil surface to deeper layers, up to the groundwater. The term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain of bi-modal soils. We compared equations describing the soil water retention function h(θ) and the unsaturated hydraulic conductivity K(h), or alternatively K(θ), modified for bi-modal soils, where θ is the soil water content and h is the pressure head. An analytical description of a curve passing through experimental data sets of a soil hydraulic function is typical for an empirical equation characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak of a physically based model. There exist several transitional subtypes between empirical and physically based models, denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets of sand, silt, silt loam and loam. All soils used are typified by the bi-modality of their porous system. Model efficiency was estimated by RMSE (root mean square error) and by RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function. With this equation, the fit of the modelled data to experiments was the closest one. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data due to the rigidity and simplicity of the physical model when compared to the real soil.
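
    The RMSE/RSE model comparison described above is easy to reproduce in outline. A minimal sketch, assuming a uni-modal van Genuchten retention function as a stand-in for the bi-modal variants actually tested, with invented data; RSE is taken here as the residual sum of squares over the total sum of squares, one common definition:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        # Uni-modal van Genuchten retention function (stand-in; the study
        # compared bi-modal variants of such models)
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    # Invented retention data: pressure head h [cm], water content theta [-]
    h = np.array([1, 3, 10, 31, 100, 310, 1000, 3100, 10000], dtype=float)
    theta = np.array([0.46, 0.45, 0.43, 0.39, 0.32, 0.26, 0.21, 0.17, 0.14])

    popt, _ = curve_fit(van_genuchten, h, theta, p0=[0.05, 0.46, 0.05, 1.5],
                        maxfev=10000)
    pred = van_genuchten(h, *popt)

    rmse = np.sqrt(np.mean((pred - theta) ** 2))
    rse = np.sum((pred - theta) ** 2) / np.sum((theta - theta.mean()) ** 2)
    print(f"RMSE = {rmse:.4f}, RSE = {rse:.4f}")
    ```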

  19. [Settings-based prevention of overweight in childhood and adolescents : Theoretical foundation, determinants and intervention planning].

    PubMed

    Quilling, Eike; Dadaczynski, Kevin; Müller, Merle

    2016-11-01

    Childhood and adolescent overweight can still be seen as a global public health problem. According to a socio-ecological understanding, overweight is the result of a complex interplay of a diverse array of factors acting on different levels. Hence, in addition to individual-level determinants, overweight prevention should also address environment-related factors as part of a holistic and integrated setting approach. This paper aims to discuss the setting approach with regard to overweight prevention in childhood and adolescence. In addition to a summary of environmental factors and their empirical influence on the determinants of overweight, theoretical approaches and planning models of settings-based overweight prevention are discussed. While settings can be characterized as specific social-spatial subsystems (e.g. kindergartens, schools), living environments relate to complex subject-oriented environments that may include various subsystems. Direct social contexts, educational contexts and community contexts, as the systems relevant for young people, contain different evidence-based influences that need to be taken into account in settings-based overweight prevention. To support theory-driven intervention, numerous planning models exist, which are presented here. Given the strengthening of environments for health within the prevention law, the underlying settings approach also needs further development with regard to overweight prevention. This includes improving the theoretical foundation by aligning intervention practice with planning models, which also has a positive influence on the ability to measure success.

  20. Viscoelastic shear properties of human vocal fold mucosa: theoretical characterization based on constitutive modeling.

    PubMed

    Chan, R W; Titze, I R

    2000-01-01

    The viscoelastic shear properties of human vocal fold mucosa (cover) were previously measured as a function of frequency [Chan and Titze, J. Acoust. Soc. Am. 106, 2008-2021 (1999)], but data were obtained only in a frequency range of 0.01-15 Hz, an order of magnitude below typical frequencies of vocal fold oscillation (on the order of 100 Hz). This study represents an attempt to extrapolate the data to higher frequencies based on two viscoelastic theories, (1) a quasilinear viscoelastic theory widely used for the constitutive modeling of the viscoelastic properties of biological tissues [Fung, Biomechanics (Springer-Verlag, New York, 1993), pp. 277-292], and (2) a molecular (statistical network) theory commonly used for the rheological modeling of polymeric materials [Zhu et al., J. Biomech. 24, 1007-1018 (1991)]. Analytical expressions of elastic and viscous shear moduli, dynamic viscosity, and damping ratio based on the two theories with specific model parameters were applied to curve-fit the empirical data. Results showed that the theoretical predictions matched the empirical data reasonably well, allowing for parametric descriptions of the data and their extrapolations to frequencies of phonation.
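
    The extrapolation strategy, fitting a constitutive form to low-frequency data and evaluating it at phonation frequencies, can be sketched generically. The snippet below fits a simple power law rather than the quasilinear or statistical-network models of the paper, and all data values are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Invented elastic shear modulus data G' [Pa] over the measured
    # low-frequency range (0.01-15 Hz) mentioned in the abstract.
    f = np.array([0.01, 0.1, 1.0, 10.0, 15.0])
    G_elastic = np.array([12.0, 19.0, 32.0, 55.0, 62.0])

    # Simple power law G'(f) = A * f^k as a generic rheological fitting form
    # (a stand-in for the two constitutive models used in the study).
    power_law = lambda freq, A, k: A * freq ** k
    (A, k), _ = curve_fit(power_law, f, G_elastic, p0=[30.0, 0.2])

    # Extrapolate to a typical phonation frequency of 100 Hz
    print(f"A = {A:.1f}, k = {k:.3f}, G'(100 Hz) ~ {power_law(100.0, A, k):.0f} Pa")
    ```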

  1. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience.

    PubMed

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O'Byrne, David

    2015-05-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research.

  2. Second modernity as a research agenda: theoretical and empirical explorations in the 'meta-change' of modern society.

    PubMed

    Beck, Ulrich; Lau, Christoph

    2005-12-01

    In this article we reformulate the theory of reflexive modernization as an empirical research programme and summarize some of the most recent findings produced by a research consortium in Munich (integrating four universities, funded by the German Research Society (DFG)). On this basis we reject the idea that Western societies at the beginning of the twenty-first century are moving from the modern to the post-modern. We argue that there has been no clear break with the basic principles of modernity but rather a transformation of basic institutions of modernity such as the nation-state and the nuclear family. We would suggest, therefore, that what we are witnessing is a second modernity. Finally, we reformulate the theory of reflexive modernization in response to three objections that have been raised.

  3. [Adaptation and quality of life in anorectal malformation: empirical findings, theoretical concept, Psychometric assessment, and cognitive-behavioral intervention].

    PubMed

    Noeker, Meinolf

    2010-01-01

    Anorectal malformations are inborn developmental defects that are associated with multiple functional impairments (especially incontinence) and psychosocial burden, with a major impact on body schema and self-esteem. Research in child psychology and psychiatry is beginning to identify disorder-dependent and disorder-independent risk and protective factors that predict the outcome of psychological adaptation and quality of life. The present paper analyses the interference of structural and functional disease parameters with the achievement of regular developmental tasks, presents a hypothetical conceptual framework concerning the development of psychological adaptation and quality of life in ARM, integrates findings from empirical research with the framework presented, and outlines strategies of psychological support from a cognitive-behavioural perspective within a multidisciplinary treatment approach to enhance medical, functional, and psychosocial quality of life.

  4. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience

    PubMed Central

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O’Byrne, David

    2015-01-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research. PMID:26601176

  5. Landscape influences on dispersal behaviour: a theoretical model and empirical test using the fire salamander, Salamandra infraimmaculata.

    PubMed

    Kershenbaum, Arik; Blank, Lior; Sinai, Iftach; Merilä, Juha; Blaustein, Leon; Templeton, Alan R

    2014-06-01

    When populations reside within a heterogeneous landscape, isolation by distance may not be a good predictor of genetic divergence if dispersal behaviour, and therefore gene flow, depends on landscape features. Commonly used approaches linking landscape features to gene flow include the least cost path (LCP), random walk (RW), and isolation by resistance (IBR) models. However, none of these models is likely to be the most appropriate for all species and in all environments. We compared the performance of the LCP, RW and IBR models of dispersal with the aid of simulations conducted on artificially generated landscapes. We also applied each model to empirical data on the landscape genetics of the endangered fire salamander, Salamandra infraimmaculata, in northern Israel, where conservation planning requires an understanding of the dispersal corridors. Our simulations demonstrate that wide dispersal corridors of the low-cost environment facilitate dispersal in the IBR model, but inhibit dispersal in the RW model. In our empirical study, IBR explained the genetic divergence better than the LCP and RW models (partial Mantel correlation 0.413 for IBR, compared to 0.212 for LCP and 0.340 for RW). Overall dispersal cost in salamanders was also well predicted by the landscape features slope steepness (76%) and elevation (24%). We conclude that fire salamander dispersal is well characterised by IBR predictions. Together with our simulation findings, these results indicate that wide dispersal corridors facilitate, rather than hinder, salamander dispersal. Comparison of genetic data to dispersal model outputs can be a useful technique for inferring dispersal behaviour from population genetic data.

  6. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    PubMed

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
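
    The core equivalence, that the empirical single-spike information matches the normalized log-likelihood under a Poisson model, can be checked numerically on simulated data. A minimal sketch under an assumed exponential nonlinearity; all settings are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate an LNP neuron: filter the stimulus, apply a nonlinearity,
    # draw Poisson spike counts (all settings illustrative).
    T, D = 20000, 10
    X = rng.normal(size=(T, D))              # white-noise stimuli
    w_true = rng.normal(size=D)
    rate = np.exp(0.5 * (X @ w_true) - 1.0)  # assumed exponential nonlinearity
    y = rng.poisson(rate)

    # Poisson log-likelihood of the LNP model (log y! terms cancel below)
    ll_lnp = np.sum(y * np.log(rate) - rate)

    # Log-likelihood of a homogeneous-Poisson (constant-rate) null model
    lam0 = y.mean()
    ll_const = np.sum(y * np.log(lam0) - lam0)

    # Normalized (per-spike) log-likelihood gain, in nats -- the quantity
    # the paper identifies with the empirical single-spike information.
    print((ll_lnp - ll_const) / y.sum())
    ```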

  7. Training-based interventions in motor rehabilitation after stroke: theoretical and clinical considerations.

    PubMed

    Sterr, Annette

    2004-01-01

    Basic neuroscience research on brain plasticity, motor learning and recovery has stimulated new concepts in neurological rehabilitation. Combined with the development of set methodological standards in clinical outcome research, these findings have led to a double-paradigm shift in motor rehabilitation: (a) the move towards evidence-based procedures for the assessment of clinical outcome and the employment of disablement models to anchor outcome parameters, and (b) the introduction of practice-based concepts that are derived from testable models that specify treatment mechanisms. In this context, constraint-induced movement therapy (CIT) has played a catalytic role in taking motor rehabilitation forward into the scientific arena. As a theoretically founded and hypothesis-driven intervention, CIT research focuses on two main issues. The first issue is the assessment of long-term clinical benefits in an increasing range of patient groups, and the second issue is the investigation of neuronal and behavioural treatment mechanisms and their interactive contribution to treatment success. These studies are mainly conducted in the research environment and will eventually lead to increased treatment benefits for patients in standard health care. However, gradual but presumably more immediate benefits for patients may be achieved by introducing and testing derivatives of the CIT concept that are more compatible with current clinical practice. Here, we summarize the theoretical and empirical issues related to the translation of research-based CIT work into the clinical context of standard health care.

  8. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation

    PubMed Central

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-01-01

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for the prompt reconstruction of strain/stress responses at the hot spots of a structure based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided. PMID:27537889

  9. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation.

    PubMed

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-08-16

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for the prompt reconstruction of strain/stress responses at the hot spots of a structure based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided.
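
    The decomposition step of the method is straightforward to sketch. The example below assumes the third-party PyEMD package for empirical mode decomposition; the signal, and the per-mode transformation factors standing in for the finite-element-derived equations, are invented:

    ```python
    import numpy as np
    from PyEMD import EMD  # third-party PyEMD package (assumed available)

    # Synthetic "remote location" strain signal: two vibration modes + noise
    t = np.linspace(0.0, 1.0, 1000)
    strain = (0.8 * np.sin(2 * np.pi * 12 * t)      # low-frequency bending mode
              + 0.3 * np.sin(2 * np.pi * 60 * t)    # higher-frequency mode
              + 0.05 * np.random.default_rng(1).normal(size=t.size))

    # Decompose the measured signal into intrinsic mode functions (IMFs)
    imfs = EMD().emd(strain, t)
    print(f"{imfs.shape[0]} IMFs extracted")

    # In the method described above, each modal response would be scaled by
    # an FE-derived transformation factor to estimate the response at an
    # unmeasured hot spot; unit gains are a hypothetical stand-in here.
    gains = np.ones(imfs.shape[0])
    hot_spot_estimate = (gains[:, None] * imfs).sum(axis=0)
    ```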

  10. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)
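
    For readers unfamiliar with the formula-based procedures being compared, a minimal sketch using Wherry's correction for population validity and Browne's cross-validity formula; the formulas are as commonly cited in the validity-generalization literature, the values are illustrative, and both are worth verifying against the original sources:

    ```python
    def wherry_rho2(R2, N, k):
        """Wherry estimate of squared population validity from sample R^2,
        sample size N, and number of predictors k (formula as commonly cited)."""
        return 1.0 - (N - 1) / (N - k - 1) * (1.0 - R2)

    def browne_cross_validity(rho2, N, k):
        """Browne's formula-based estimate of squared cross-validity."""
        return ((N - k - 3) * rho2**2 + rho2) / ((N - 2*k - 2) * rho2 + k)

    R2, N, k = 0.30, 500, 10  # illustrative values
    rho2 = wherry_rho2(R2, N, k)
    print(f"population validity^2 ~ {rho2:.4f}")
    print(f"cross-validity^2      ~ {browne_cross_validity(rho2, N, k):.4f}")
    ```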

  11. Landfill modelling in LCA - a contribution based on empirical data.

    PubMed

    Obersteiner, Gudrun; Binner, Erwin; Mostbauer, Peter; Salhofer, Stefan

    2007-01-01

    Landfills at various stages of development, depending on their age and location, can be found throughout Europe. The types of facilities range from uncontrolled dumpsites to highly engineered facilities with leachate and gas management. In addition, some landfills are designed to receive untreated waste, while others can receive incineration residues (MSWI) or residues after mechanical biological treatment (MBT). The dimension, type and duration of the emissions from landfills depend on the quality of the disposed waste, the technical design, and the location of the landfill. Environmental impacts are produced by the leachate (heavy metals, organic loading), by emissions into the air (CH₄, hydrocarbons, halogenated hydrocarbons) and by the energy or fuel requirements for the operation of the landfill (SO₂ and NOₓ from the production of electricity from fossil fuels). Including landfilling in a life-cycle assessment (LCA) approach entails several methodological questions (multi-input process, site-specific influence, time dependency). Additionally, no experience is available with regard to the mid-term behaviour (decades) of the relatively new types of landfill (MBT landfills, landfills for residues from MSWI). The present paper focuses on two main issues concerning the modelling of landfills in LCA: Firstly, it is an acknowledged fact that emissions from landfills may persist for a very long time, often thousands of years or longer. The choice of time frame in the LCA of landfilling may therefore clearly affect the results. Secondly, the reliability of results obtained through a life-cycle assessment depends on the availability and quality of Life Cycle Inventory (LCI) data. Therefore the choice of the general approach, using a multi-input inventory tool versus empirical results, may also influence the results. In this paper the different approaches concerning time horizon and LCI will be introduced and discussed. In the application of empirical results, the presence of

  12. Empirical study of ARFIMA model based on fractional differencing

    NASA Astrophysics Data System (ADS)

    Xiu, Jin; Jin, Yao

    2007-04-01

    In this paper, we studied the long-term memory of the Hong Kong Hang Seng index using MRS analysis, established an ARFIMA model for it, and detailed the procedure of fractional differencing. Furthermore, we compared the ARFIMA model built by this means with one that used first-order differencing as an alternative. The results showed that first-order differencing loses much of the useful information in the time series. The forecast formula of the ARFIMA model was corrected according to the method of fractional differencing and was employed in the empirical study. It was illustrated that the forecast performance of the ARFIMA model was not as good as expected, since the ARFIMA model was ineffective in forecasting the Hang Seng index. The certainty of this conclusion was argued from two different aspects.
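
    The fractional differencing step at the heart of ARFIMA follows directly from the binomial expansion of (1 - B)^d, where B is the backshift operator. A minimal illustration with an invented price series; in practice d would be estimated from the long-memory analysis described above:

    ```python
    import numpy as np

    def frac_diff(x, d, tol=1e-4, max_lags=500):
        """Fractionally difference series x with parameter d, using the
        binomial expansion of (1 - B)^d; weights truncated once negligible."""
        w = [1.0]
        for k in range(1, min(len(x), max_lags)):
            w_k = -w[-1] * (d - k + 1) / k
            if abs(w_k) < tol:
                break
            w.append(w_k)
        w = np.array(w)
        # out[t] = sum_j w[j] * x[t - j], computed once enough history exists
        return np.array([w @ x[t - len(w) + 1:t + 1][::-1]
                         for t in range(len(w) - 1, len(x))])

    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(size=2000)) + 100.0  # stand-in for an index level
    y = frac_diff(prices, d=0.35)  # d = 0.35 is purely illustrative
    print(len(y), y[:3])
    ```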

  13. Empirical Mode Decomposition Based Features for Diagnosis and Prognostics of Systems

    DTIC Science & Technology

    2008-04-01

    Empirical Mode Decomposition Based Features for Diagnosis and Prognostics of Systems, by Hiralal Khatri, Kenneth Ranney, Kwok Tom, and Romeo … Army Research Laboratory, Adelphi, MD 20783-1197; ARL-TR-4301, April 2008.

  14. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  15. Are prejudices against disabled persons determined by personality characteristics? Reviewing a theoretical approach on the basis of empirical research findings.

    PubMed

    Cloerkes, G

    1981-01-01

    Taking as a point of departure the results obtained from research on prejudice, many authors believe that the quality of attitudes toward disabled persons is influenced by the personality structure of the nondisabled. In order to verify this assumption, a secondary analysis of 67 empirical studies was undertaken. These studies referred to different personality variables such as authoritarianism, ethnocentrism, dogmatism, rigidity, intolerance of ambiguity, cognitive simplicity, anxiety, ego-weakness, self-concept, body-concept, aggressiveness, empathy, intelligence, etc. The results can be summarized as follows: Statistical criteria show that single personality traits have relatively little influence on the attitudes towards disabled persons. An adequate evaluation of the research findings is complicated by, at times, considerable methodological problems which arise when applying the proper test instruments to non-clinical populations. Marked correlations are to be found in particular in the case of authoritarianism, ethnocentrism, intolerance of ambiguity, anxiety, and ego-weakness. The intercorrelations, however, between most of the personality variables are rather high, which by cumulation of "extreme" factors may, in fact, sometimes result in particularly unfavorable attitudes toward the disabled. Thus, personality-related research findings do provide certain valuable explanations. Special attention should be devoted to the multiple connections between personality structure and social structure.

  16. The Demand for Cigarettes as Derived from the Demand for Weight Loss: A Theoretical and Empirical Investigation.

    PubMed

    Cawley, John; Dragone, Davide; Von Hinke Kessler Scholder, Stephanie

    2016-01-01

    This paper offers an economic model of smoking and body weight and provides new empirical evidence on the extent to which the demand for cigarettes is derived from the demand for weight loss. In the model, smoking causes weight loss in addition to having direct utility benefits and direct health consequences. It predicts that some individuals smoke for weight loss and that the practice is more common among those who consider themselves overweight and those who experience greater disutility from excess weight. We test these hypotheses using nationally representative data in which adolescents are directly asked whether they smoke to control their weight. We find that, among teenagers who smoke frequently, 46% of girls and 30% of boys are smoking in part to control their weight. As predicted by the model, this practice is significantly more common among those who describe themselves as too fat and among groups that tend to experience greater disutility from obesity. We conclude by discussing the implications of these findings for tax policy; specifically, the demand for cigarettes is less price elastic among those who smoke for weight loss, all else being equal. Public health efforts to reduce smoking initiation and encourage cessation may wish to design campaigns to alter the derived nature of cigarette demand, especially among adolescent girls.

  17. Electronic and optical properties of semiconductors: A study based on the empirical tight binding model

    SciTech Connect

    Lew, Yan Voon, L.C.

    1993-01-01

    This study is a theoretical investigation of the electronic and optical properties of intrinsic semiconductors using the orthogonal empirical tight binding model. An analysis of the bulk properties of semiconductors with the zincblende, diamond and rocksalt structures has been carried out. The author has extended the work of others to higher order in the interaction integrals and derived new parameter sets for certain semiconductors which better fit the experimental data over the Brillouin zone. The Hamiltonian of the heterostructures is built up layer by layer from the parameters of the bulk constituents. The second part of this work examines a number of applications of the theory. A new microscopic derivation of the intervalley deformation potentials is presented within the tight binding representation, and a number of conduction-band deformation potentials of bulk semiconductors are computed. The author has also studied the electronic states in heterostructures and has shown theoretically the possibility of barrier localization of above-barrier states in a multivalley heterostructure using a multiband calculation. Another result is the proposal of a new "type-II" lasing mechanism in short-period GaAs/AlAs superlattices. As for the author's work on the optical properties, a new formalism for computing interband optical matrix elements, based on the generalized Feynman-Hellmann theorem, has been obtained and used to compute the linear and second-order nonlinear optical properties of a number of bulk semiconductors and semiconductor heterostructures. In agreement with the one-band effective-mass calculations of other groups, the more elaborate calculations show that the intersubband oscillator strengths of quantum wells can be greatly enhanced over the bulk interband values.
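
    The flavor of an empirical tight-binding calculation can be conveyed with a toy model far smaller than the multi-orbital parameterizations used in studies like this one. A minimal sketch of a 1D two-site chain with invented parameters:

    ```python
    import cmath
    import numpy as np

    # Minimal empirical tight-binding illustration: a 1D diatomic chain with
    # on-site energies E1, E2 and nearest-neighbour hopping t (parameters
    # invented for illustration; real ETB fits use many more integrals).
    E1, E2, t, a = 0.0, 1.5, -1.0, 1.0

    def bands(k):
        """Eigenvalues of the 2x2 Bloch Hamiltonian at wavevector k."""
        off = t * (1.0 + cmath.exp(-1j * k * a))
        H = np.array([[E1, off], [off.conjugate(), E2]])
        return np.linalg.eigvalsh(H)   # Hermitian eigenvalues, ascending

    ks = np.linspace(-np.pi / a, np.pi / a, 201)
    E = np.array([bands(k) for k in ks])  # shape (201, 2): two bands
    print(f"gap at zone boundary: {E[:, 1].min() - E[:, 0].max():.3f} (model units)")
    ```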

  18. Computer-Assisted Language Intervention Using Fast ForWord®: Theoretical and Empirical Considerations for Clinical Decision-Making.

    PubMed

    Gillam, Ronald B

    1999-10-01

    A computer-assisted language intervention program called Fast ForWord® (Scientific Learning Corporation, 1998) has received a great deal of attention at professional meetings and in the popular media. Newspaper and magazine articles about this program contain statements like, "On average, after only 6 to 7 weeks of training, language-learning impaired children ages 4 to 12 showed improvement of more than one and a half years in speech processing and language ability." (Scientific Learning Corporation, 1997). Are the claims that are being made about this intervention approach just a matter of product promotion, or is this really a scientifically proven remedy for language-learning impairments? This article critiques the theoretical basis of Fast ForWord®, the documented treatment outcomes, and the clinical methods associated with the procedure. Fifteen cautionary statements are provided that clinicians may want to consider before they recommend Fast ForWord® intervention for the children they serve.

  19. Theoretical and empirical studies of impurity incorporation into beta-SiC thin films during epitaxial growth

    NASA Astrophysics Data System (ADS)

    Kim, H. J.; Davis, R. F.

    1986-11-01

    A theoretical determination of the vapor species present, and their respective partial pressures, is made using the SOLGASMIX-PV program for the n-type dopants N and P and the p-type dopants Al and B, under conditions used to grow monocrystalline beta-SiC thin films via CVD. The model shows that Al and P behave ideally, while B and N apparently interact with the C or Si in the SiC or fill normally unoccupied interstitial positions. The relationship between the carrier concentrations or the atomic concentrations and the partial pressure of the dopant source gases is linear and parallel. The more efficient n-type and p-type dopants, N and Al, have been used to produce what is suggested to be the first p-n junction diode in a beta-SiC film.

  20. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  1. Exploring the UMLS: a rough sets based theoretical framework.

    PubMed

    Srinivasan, P

    1999-01-01

    The Unified Medical Language System (UMLS) [1] has a unique and leading position in the evolution of thesauri and metathesauri. Features that set it apart are: its composition from more than fifty component health care vocabularies; the sophisticated UMLS ontology linking the Metathesaurus with structures such as the Semantic Network and the SPECIALIST lexicon; and the high level of social collaboration invested in its construction and growth. It is our thesis that in order to successfully harness such a complex vocabulary for text retrieval, we need sophisticated methods derived from a deeper understanding of the UMLS system. Thus we propose a theoretical framework, based on the theory of rough sets, that supports the systematic and exploratory investigation of the UMLS Metathesaurus for text retrieval. Our goal is to make it more feasible for individuals such as patients and health care professionals to access relevant information at the point of need.
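
    The rough-set machinery such a framework rests on reduces to lower and upper approximations of a target set under an indiscernibility (equivalence) relation. A minimal sketch with an invented toy partition, not actual UMLS content:

    ```python
    # Lower/upper approximations of a target set with respect to a partition
    # into indiscernibility classes (the core rough-set operation).

    def approximations(partition, target):
        lower, upper = set(), set()
        for block in partition:        # blocks = indiscernibility classes
            if block <= target:        # block lies entirely inside the target
                lower |= block
            if block & target:         # block merely overlaps the target
                upper |= block
        return lower, upper

    # Toy Metathesaurus-like concepts partitioned by some shared attribute
    partition = [{"myocardial infarction", "heart attack"},
                 {"angina pectoris"},
                 {"aspirin", "acetylsalicylic acid"}]
    query_relevant = {"myocardial infarction", "heart attack", "aspirin"}

    lower, upper = approximations(partition, query_relevant)
    print("certainly relevant:", lower)  # lower approximation
    print("possibly relevant:", upper)   # upper approximation
    ```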

  2. Theoretical analysis of tin incorporated group IV alloy based QWIP

    NASA Astrophysics Data System (ADS)

    Pareek, Prakash; Das, Mukul K.; Kumar, S.

    2017-07-01

    A detailed theoretical investigation of the frequency response, responsivity and detectivity of a tin-incorporated GeSn-based quantum well infrared photodetector (QWIP) is presented in this paper. The rate equation and the continuity equation in the well are solved simultaneously to obtain the photogenerated current. Quantum mechanical carrier transport processes, such as carrier capture into the QW and carrier escape from the well due to thermionic emission and tunneling, are considered in this calculation. The impact of the Sn composition in the GeSn well on the frequency response, bandwidth, responsivity and detectivity is studied. Results show that the Sn concentration and the applied bias play an important role in the performance of the device. A significant bandwidth is obtained at low reverse bias voltage, e.g., 150 GHz at a 0.14 V bias for a single Ge₀.₈₃Sn₀.₁₇ layer. A detectivity of the order of 10⁷ cm √Hz W⁻¹ is obtained for a particular choice of Sn composition and bias.

  3. Personality traits and achievement motives: theoretical and empirical relations between the NEO Personality Inventory-Revised and the Achievement Motives Scale.

    PubMed

    Diseth, Age; Martinsen, Øyvind

    2009-04-01

    Theoretical and empirical relations between personality traits and motive dispositions were investigated by comparing the scores of 315 undergraduate psychology students on the NEO Personality Inventory-Revised and the Achievement Motives Scale. Analyses showed that all NEO Personality Inventory-Revised factors except Agreeableness were significantly correlated with the motive for success and the motive to avoid failure. A structural equation model showed that the motive for success was predicted by Extraversion, Openness, Conscientiousness, and Neuroticism (negative relation), and the motive to avoid failure was predicted by Neuroticism and Openness (negative relation). Although both achievement motives were predicted by several personality factors, the motive for success was most strongly predicted by Openness, and the motive to avoid failure was most strongly predicted by Neuroticism. These findings extended previous research on the relations of personality traits and achievement motives and provided a basis for the discussion of motive dispositions in personality. The results also added to the construct validity of the Achievement Motives Scale.

  4. Attachment-Based Family Therapy: A Review of the Empirical Support.

    PubMed

    Diamond, Guy; Russon, Jody; Levy, Suzanne

    2016-09-01

    Attachment-based family therapy (ABFT) is an empirically supported treatment designed to capitalize on the innate, biological desire for meaningful and secure relationships. The therapy is grounded in attachment theory and provides an interpersonal, process-oriented, trauma-focused approach to treating adolescent depression, suicidality, and trauma. Although a process-oriented therapy, ABFT offers a clear structure and road map to help therapists quickly address attachment ruptures that lie at the core of family conflict. Several clinical trials and process studies have demonstrated empirical support for the model and its proposed mechanism of change. This article provides an overview of the clinical model and the existing empirical support for ABFT.

  5. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The deviation between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
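
    The lookup-table-plus-surface-fit pipeline described above can be sketched generically. The snippet below substitutes a fake analytic surface for the Monte Carlo table and fits a quadratic polynomial in the two optical coefficients; everything here is illustrative, not the paper's formula:

    ```python
    import numpy as np

    # Hypothetical stand-in for a Monte Carlo diffuse-reflectance lookup table:
    # R as a function of absorption coefficient mu_a and reduced scattering
    # coefficient mu_s' (values invented; the real table comes from MC runs).
    mu_a = np.linspace(0.01, 1.0, 20)   # 1/mm
    mu_sp = np.linspace(0.5, 3.0, 20)   # 1/mm
    A_, S_ = np.meshgrid(mu_a, mu_sp)
    R = S_ / (A_ + S_) * 0.3            # fake "MC" reflectance surface

    # Fit an empirical quadratic surface R(mu_a, mu_s') by least squares,
    # mirroring the surface-fitting step described in the abstract.
    X = np.column_stack([np.ones(A_.size), A_.ravel(), S_.ravel(),
                         A_.ravel()**2, S_.ravel()**2, (A_ * S_).ravel()])
    coef, *_ = np.linalg.lstsq(X, R.ravel(), rcond=None)

    fit = X @ coef
    rel_err = np.abs(fit - R.ravel()) / R.ravel()
    print(f"max relative error of the fitted surface: {rel_err.max():.2%}")
    ```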

  6. Empirical Analysis and Refinement of Expert System Knowledge Bases.

    DTIC Science & Technology

    1987-11-30

    Knowledge base refinement is the modification of an existing expert system knowledge base with the goals of localizing specific weaknesses in a … expert system techniques for knowledge acquisition, knowledge base refinement, maintenance, and verification … on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK system was the first expert …

  7. Strong Generative Capacity and the Empirical Base of Linguistic Theory

    PubMed Central

    Ott, Dennis

    2017-01-01

    This Perspective traces the evolution of certain central notions in the theory of Generative Grammar (GG). The founding documents of the field suggested a relation between the grammar, construed as recursively enumerating an infinite set of sentences, and the idealized native speaker that was essentially equivalent to the relation between a formal language (a set of well-formed formulas) and an automaton that recognizes strings as belonging to the language or not. But this early view was later abandoned, when the focus of the field shifted to the grammar's strong generative capacity as recursive generation of hierarchically structured objects as opposed to strings. The grammar is now no longer seen as specifying a set of well-formed expressions and in fact necessarily constructs expressions of any degree of intuitive “acceptability.” The field of GG, however, has not sufficiently acknowledged the significance of this shift in perspective, as evidenced by the fact that (informal and experimentally-controlled) observations about string acceptability continue to be treated as bona fide data and generalizations for the theory of GG. The focus on strong generative capacity, it is argued, requires a new discussion of what constitutes valid empirical evidence for GG beyond observations pertaining to weak generation. PMID:28983268

  8. Empirical and theoretical dosimetry in support of whole body resonant RF exposure (100 MHz) in human volunteers.

    PubMed

    Allen, Stewart J; Adair, Eleanor R; Mylacraine, Kevin S; Hurt, William; Ziriax, John

    2003-10-01

    This study reports the dosimetry performed to support an experiment that measured physiological responses of volunteer human subjects exposed at 100 MHz, the resonant frequency for a seated human adult. Exposures were performed in an anechoic chamber which was designed to provide uniform fields for frequencies of 100 MHz or greater. A half-wave dipole with a 90 degrees reflector was used to optimize the field at the subject location. The dosimetry plan required measurement of transmitter harmonics, stationary probe drift, field strengths as a function of distance, electric and magnetic field maps at 200, 225, and 250 cm from the dipole antenna, and specific absorption rate (SAR) measurements using a human phantom, as well as theoretical predictions of SAR with the finite difference time domain (FDTD) method. On each exposure test day, a measurement was taken at 225 cm on the beam centerline with an NBS E field probe to assure consistently precise exposures. An NBS 10 cm loop antenna was positioned 150 cm to the right, 100 cm above, and 60 cm behind the subject and was read at 5 min intervals during all RF exposures. These dosimetry measurements assured accurate and consistent exposures. FDTD calculations were used to determine the SAR distribution in a seated human subject. This study reports the necessary dosimetry for work on the physiological consequences of human volunteer exposures at 100 MHz.

  9. Genetic load, inbreeding depression, and hybrid vigor covary with population size: An empirical evaluation of theoretical predictions.

    PubMed

    Lohr, Jennifer N; Haag, Christoph R

    2015-12-01

    Reduced population size is thought to have strong consequences for evolutionary processes as it enhances the strength of genetic drift. In its interaction with selection, this is predicted to increase the genetic load, reduce inbreeding depression, and increase hybrid vigor, and in turn affect phenotypic evolution. Several of these predictions have been tested, but comprehensive studies controlling for confounding factors are scarce. Here, we show that populations of Daphnia magna, which vary strongly in genetic diversity, also differ in genetic load, inbreeding depression, and hybrid vigor in a way that strongly supports theoretical predictions. Inbreeding depression is positively correlated with genetic diversity (a proxy for Ne), and genetic load and hybrid vigor are negatively correlated with genetic diversity. These patterns remain significant after accounting for potential confounding factors and indicate that, in small populations, a large proportion of the segregation load is converted into fixed load. Overall, the results suggest that the nature of genetic variation for fitness-related traits differs strongly between large and small populations. This has large consequences for evolutionary processes in natural populations, such as selection on dispersal, breeding systems, ageing, and local adaptation.

  10. On the Theoretical Breadth of Design-Based Research in Education

    ERIC Educational Resources Information Center

    Bell, Philip

    2004-01-01

    Over the past decade, design experimentation has become an increasingly accepted mode of research appropriate for the theoretical and empirical study of learning amidst complex educational interventions as they are enacted in everyday settings. There is still a significant lack of clarity surrounding methodological and epistemological features of…

  11. Deep in Data. Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    This paper describes progress toward developing a usable, standardized, empirical data-based software accuracy test suite using home energy consumption and building description data. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This could allow for modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data.

  12. The emergence of a temporally extended self and factors that contribute to its development: from theoretical and empirical perspectives.

    PubMed

    2013-04-01

    The main aims of the current research were to determine when children develop a temporally extended self (TES) and what factors contribute to its development. However, in order to address these aims it was important to, first, assess whether the test of delayed self-recognition (DSR) is a valid measure for the development of the TES, and, second, to propose and evaluate a theoretical model that describes what factors influence the development of the TES. The validity of the DSR test was verified by comparing the performance of 57 children on the DSR test to their performance on a meta-representational task (modified false belief task) and to a task that was essentially the same as the DSR test but was specifically designed to rely on the capacity to entertain secondary representations (i.e., surprise body task). Longitudinal testing of the children showed that at the mental age (MA) of 2.5 years they failed the DSR test, despite training them to understand the intended functions of the medium used in the DSR test; whereas, with training, children at the MA of 3.0 and 3.5 years exhibited DSR. Children at the MA of 4 years exhibited DSR without any training. Finally, results suggest that children's meta-representational ability was the only factor that contributed to the prediction of successful performance on the DSR test, and thus to the emergence of the TES. Furthermore, prospective longitudinal data revealed that caregiver conversational style was the only factor that contributed to the prediction of the level of training required to pass the DSR test. That is, children of low-elaborative caregivers required significantly more training to pass the DSR test than children of high-elaborative caregivers, indicating that children who received more elaborative conversational input from their caregivers had a more advanced understanding of their TES.

  13. Annett's theory that individuals heterozygous for the right shift gene are intellectually advantaged: theoretical and empirical problems.

    PubMed

    McManus, I C; Shergill, S; Bryden, M P

    1993-11-01

    Annett & Manning (1989; 1990a,b) have proposed that left-handedness is maintained by a balanced polymorphism, whereby the rs+/- heterozygote manifests increased intellectual ability compared with the rs-/- and rs+/+ homozygotes. In this paper we demonstrate that Annett's method of dividing subjects into putative genotypes does not allow the rs+/- genotype to be compared with the rs-/- genotype within handedness groups. Our alternative method does allow heterozygous right-handers to be compared both with rs+/+ and rs-/- homozygotes. Using this method in undergraduates, we find no evidence that supposed heterozygotes are relatively more intellectually able than homozygotes on tests of verbal IQ, spatial IQ, diagrammatic IQ or vocabulary. Theoretical analysis of the balanced polymorphism hypothesis reveals additional limitations. Although estimation of the size of the heterozygote advantage suggests that it must be very large (21 or 45 IQ points) to explain the effects found by Annett & Manning, it nevertheless must be very small (3.4 IQ points) to be compatible with the known differences between right- and left-handers in social class and intelligence. Moreover, power analysis shows that the latter effect size is too small for Annett & Manning to have found effects in their studies. Additional power analyses show that studies looking for effects in groups of high intellectual ability, such as university students, are incapable of finding significant results, despite Annett claiming such effects. If the Annett & Manning paradigm does demonstrate differences in intellectual ability related to skill asymmetry, then those differences are unlikely to result from a balanced polymorphism, but instead probably reflect motivational or other differences between right-handers of high and low degrees of laterality.

  14. Empirical and theoretical dosimetry in support of whole body radio frequency (RF) exposure in seated human volunteers at 220 MHz.

    PubMed

    Allen, Stewart J; Adair, Eleanor R; Mylacraine, Kevin S; Hurt, William; Ziriax, John

    2005-09-01

    This study reports the dosimetry performed to support an experiment that measured physiological responses of seated volunteer human subjects exposed to 220 MHz fields. Exposures were performed in an anechoic chamber which was designed to provide uniform fields for frequencies of 100 MHz or greater. A vertical half-wave dipole with a 90-degree reflector was used to optimize the field at the subject's location. The vertically polarized E field was incident on the dorsal side of the phantoms and human volunteers. The dosimetry plan required measurement of stationary probe drift, field strengths as a function of distance, electric and magnetic field maps at 200, 225, and 250 cm from the dipole antenna, and specific absorption rate (SAR) measurements using a human phantom, as well as theoretical predictions of SAR with the finite difference time domain (FDTD) method. An NBS (National Bureau of Standards, now NIST, National Institute of Standards and Technology, Boulder, CO) 10 cm loop antenna was positioned 150 cm to the right, 100 cm above and 60 cm behind the subject (toward the transmitting antenna) and was read prior to each subject's exposure and at 5 min intervals during all RF exposures. Transmitter stability was determined by measuring plate voltage, plate current, screen voltage and grid voltage for the driver and final amplifiers before and at 5 min intervals throughout the RF exposures. These dosimetry measurements assured accurate and consistent exposures. FDTD calculations were used to determine the SAR distribution in a seated human subject. This study reports the dosimetry necessary to precisely control exposure levels for studies of the physiological consequences of human volunteer exposures to 220 MHz.

  15. Augmented Reality-Based Simulators as Discovery Learning Tools: An Empirical Study

    ERIC Educational Resources Information Center

    Ibáñez, María-Blanca; Di-Serio, Ángela; Villarán-Molina, Diego; Delgado-Kloos, Carlos

    2015-01-01

    This paper reports empirical evidence on having students use AR-SaBEr, a simulation tool based on augmented reality (AR), to discover the basic principles of electricity through a series of experiments. AR-SaBEr was enhanced with knowledge-based support and inquiry-based scaffolding mechanisms, which proved useful for discovery learning in…

  16. Empirical and Theoretical Evidence for the Role of MgSO4 Contact Ion-Pairs in Thermochemical Sulfate Reduction

    NASA Astrophysics Data System (ADS)

    Ellis, G. S.; Zhang, T.; Ma, Q.; Tang, Y.

    2006-12-01

    While the process of thermochemical sulfate reduction (TSR) has been recognized by geochemists for nearly fifty years, it has proven extremely difficult to simulate in the laboratory under conditions resembling those encountered in nature. Published estimates of the kinetic parameters that describe the rate of the TSR reaction vary widely and are inconsistent with geologic observations. Consequently, the prediction of the hydrogen sulfide (H2S) generation potential of a reservoir prior to drilling remains a major challenge for the oil industry. New experimental and theoretical evidence indicates that magnesium plays a significant role in controlling the rate of TSR in petroleum reservoirs. A novel reaction pathway for TSR is proposed that involves the reduction of sulfate as aqueous MgSO4 contact ion-pairs prior to the generally accepted H2S-catalyzed TSR mechanism. Ab initio quantum chemical calculations have been applied to this model in order to locate a potential transition state and to determine the activation energy for the contact ion-pair reaction (56 kcal/mol). Detailed experimental work shows that the rate of TSR increases significantly with increasing concentration of H2S, which may help to explain why previous estimates of TSR activation energies were so divergent. Preliminary experimental evidence indicates that H2S catalysis of TSR is a multi-step process, involving the formation of labile organic sulfur compounds that, in turn, generate sulfur radicals upon thermal decomposition. A new conceptual model for understanding the process of TSR in geologic environments has been developed that involves an H2S-threshold concentration required to sustain rapid sulfate reduction rates. Although this approach appears to be more consistent with field observations than previous mechanisms, validation of this model requires detailed integration with other geologic data in basin models. These findings may explain the common association of H2S-rich petroleum
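
    The reported 56 kcal/mol barrier implies a steep temperature dependence. As a rough illustration (not from the paper), the Arrhenius equation gives the fold change in rate between two reservoir temperatures; the unknown pre-exponential factor cancels in the ratio, and the temperatures below are hypothetical.

```python
import math

R_GAS = 1.987e-3   # gas constant, kcal/(mol*K)
EA = 56.0          # activation energy of the MgSO4 contact ion-pair pathway, kcal/mol

def arrhenius_rate_ratio(t1_celsius: float, t2_celsius: float) -> float:
    """k(T2)/k(T1) for a fixed activation energy; the pre-exponential factor cancels."""
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    return math.exp(-(EA / R_GAS) * (1.0 / t2 - 1.0 / t1))

# Hypothetical burial temperatures, chosen only for illustration.
print(f"{arrhenius_rate_ratio(120.0, 160.0):.0f}x faster at 160 C than at 120 C")
```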

  17. A Theoretical and Empirical Investigation of Professional Development's Impact on Self- and Collective Efficacy by School Accountability Status

    ERIC Educational Resources Information Center

    Moon, Gail S.

    2012-01-01

    This quantitative study used the Schools and Staffing Survey of 2007-2008, a school-based stratified probability-proportionate-to-size sample of all American schools, to explore the relationships of professional development to teachers' self- and collective efficacy by school accountability status as measured by adequate yearly progress (AYP). In…

  18. An Empirical Analysis of Knowledge Based Hypertext Navigation

    PubMed Central

    Snell, J.R.; Boyle, C.

    1990-01-01

    Our purpose is to investigate the effectiveness of knowledge-based navigation in a dermatology hypertext network. The chosen domain is a set of dermatology class notes implemented in HyperCard and SINS. The study measured time, number of moves, and success rates for subjects to find solutions to ten questions. The subjects were required to navigate within a dermatology hypertext network in order to find the solution to each question. Our results indicate that knowledge-based navigation can assist the user in finding information of interest in fewer node visits (moves) than with traditional button-based browsing or keyword searching. The time necessary to find an item of interest, however, was lower for the traditional methods. There was no difference in success rates between the two test groups.

  19. A new entropy based on a group-theoretical structure

    NASA Astrophysics Data System (ADS)

    Curado, Evaldo M. F.; Tempesta, Piergiulio; Tsallis, Constantino

    2016-03-01

    A multi-parametric version of the nonadditive entropy S_q is introduced. This new entropic form, denoted by S_{a,b,r}, possesses many interesting statistical properties, and it reduces to the entropy S_q for b = 0, a = r := 1 - q (hence to the Boltzmann-Gibbs entropy S_BG for b = 0, a = r → 0). The construction of the entropy S_{a,b,r} is based on a general group-theoretical approach recently proposed by one of us, Tempesta (2016). Indeed, essentially all the properties of this new entropy are obtained as a consequence of the existence of a rational group law, which expresses the structure of S_{a,b,r} with respect to the composition of statistically independent subsystems. Depending on the choice of the parameters, the entropy S_{a,b,r} can be used to cover a wide range of physical situations, in which the measure of the accessible phase space increases, say, exponentially with the number of particles N of the system, or even stabilizes, by increasing N, to a limiting value. This paves the way to the use of this entropy in contexts where the size of the phase space does not increase as fast as the number of its constituting particles (or subsystems) increases.
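
    For orientation, the one-parameter entropy S_q that S_{a,b,r} generalizes is the standard Tsallis form, which recovers the Boltzmann-Gibbs entropy in the q → 1 limit (the multi-parametric form itself is given in the paper and is not reproduced here):

```latex
S_q = k \,\frac{1 - \sum_{i=1}^{W} p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = S_{BG} = -k \sum_{i=1}^{W} p_i \ln p_i .
```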

  20. Organizing the public health-clinical health interface: theoretical bases.

    PubMed

    St-Pierre, Michèle; Reinharz, Daniel; Gauthier, Jacques-Bernard

    2006-01-01

    This article addresses the issue of the interface between public health and clinical health within the context of the search for networking approaches geared to a more integrated delivery of health services. The articulation of an operative interface is complicated by the fact that the definition of networking modalities involves complex intra- and interdisciplinary and intra- and interorganizational systems across which a new transversal dynamics of intervention practices and exchanges between service structures must be established. A better understanding of the situation is reached by shedding light on the rationale underlying the organizational methods that form the bases of the interface between these two sectors of activity. The Quebec experience demonstrates that neither the structural-functionalist approach, which emphasizes remodelling establishment structures and functions as determinants of integration, nor the structural-constructivist approach, which prioritizes distinct fields of practice in public health and clinical health, adequately serves the purpose of networking and integration. Consequently, a theoretical reframing is imperative. In this regard, structuration theory, which fosters the simultaneous study of methods of inter-structure coordination and inter-actor cooperation, paves the way for a better understanding of the situation and, in turn, for the emergence of new integration possibilities.

  1. Theoretical and empirical evidence for the relationship between stem diameter variations and the water status of mature temperate trees

    NASA Astrophysics Data System (ADS)

    Dietrich, Lars; Zweifel, Roman; Kahmen, Ansgar

    2017-04-01

    Assessing a tree's water status is essential for evaluating its response to drought. For mature trees in particular, it is extremely difficult to monitor water status throughout the growing season because of the difficulty of canopy access. Daily variations of stem diameter (SDV) have been proposed as a powerful alternative for measuring a tree's water status. SDV have been shown to incorporate both radial growth and the diurnal shrinkage and swelling of bark tissue, which are caused by daytime transpiration and nighttime refilling, respectively. During dry periods, bark tissue that is depleted in water cannot entirely refill at night, resulting in a progressive overall shrinkage of the tree's stem diameter, often called tree water deficit (TWD). Comprehensive comparisons of SDV-based values for TWD and reliable values of stem water potential are still missing for mature trees. As such, TWD has not yet been fully established as a simple and continuous proxy for a tree's water status. Using a canopy crane situated in northern Switzerland, we calculated TWD based on SDV for six Central European forest tree species during one moist (2014) and one exceptionally dry (2015) growing season and compared these values to the trees' branch water potential. We found a tight relationship between branch water potential and TWD in all six species. We further employed four different mathematical approaches to calculate TWD and tested which approach yielded the best relationship with water potential. Most approaches resulted in significant relationships (p < 0.001) for the different species. However, one TWD variable showed the highest explanatory power (R2) across the six species and both years (up to 86% explained variation). Intriguingly, this variable does not account for radial growth during periods of shrinkage in its calculation, indicating that plastic growth is impeded in such times. The relationship between TWD and stem water potential can best be explained by
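
    The TWD variable described above, which ignores radial growth during shrinkage, is commonly computed under a "zero-growth" convention: any diameter above the running maximum is attributed to growth, and the deficit is the gap below that maximum. A minimal sketch of that convention (the paper's exact four variants are not reproduced here):

```python
import numpy as np

def tree_water_deficit(sdv: np.ndarray) -> np.ndarray:
    """TWD under the zero-growth convention: the running maximum of the
    stem-diameter record minus the current reading (non-negative by
    construction); any value above the previous maximum counts as growth."""
    return np.maximum.accumulate(sdv) - sdv

# Synthetic example: diurnal shrinkage/swelling on top of slow radial growth.
t = np.linspace(0.0, 10.0, 1000)                  # days
sdv = 0.02 * t + 0.05 * np.sin(2.0 * np.pi * t)   # mm
twd = tree_water_deficit(sdv)
```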

  2. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. The core is turbulent, whereas the liquid film may be laminar or turbulent. The working fluid is dichlorotetrafluoroethane (CClF2-CClF2), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single phase flow in a rough pipe. Results indicate that for the range of Reynolds and Froude numbers considered, the liquid film is likely to be turbulent rather than laminar. The study also shows that two-dimensional effects are important, and the flow is never fully developed either in the film or the core. In addition, the new approach for the turbulent film is capable of predicting a local net flow rate that may be upward, downward, stationary, or stalled. An actual steam-water geothermal well is simulated. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114. Results indicate that the theory can be used to predict the pressure gradient in the two-phase region based on laboratory measurements.

  3. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. Emphasis is placed upon the latter case, since the range of experimental measurements of pressure, temperature, and void fraction collected in this study fall in the "slug-churn" to "annular" flow regimes. The core is turbulent, whereas the liquid film may be laminar or turbulent. Turbulent stresses are modeled by using Prandtl's mixing-length theory. The working fluid is dichlorotetrafluoroethane (CClF2-CClF2), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. The compressibility is accounted for through the acceleration pressure gradient of the core and not directly through the Mach number. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single phase flow in a rough pipe. Finally, an actual steam-water geothermal well is simulated; it is based on actual field data from New Zealand. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114.

  4. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    ERIC Educational Resources Information Center

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…

  6. Empirically Based School Interventions Targeted at Academic and Mental Health Functioning

    ERIC Educational Resources Information Center

    Hoagwood, Kimberly E.; Olin, S. Serene; Kerker, Bonnie D.; Kratochwill, Thomas R.; Crowe, Maura; Saka, Noa

    2007-01-01

    This review examines empirically based studies of school-based mental health interventions. The review identified 64 out of more than 2,000 articles published between 1990 and 2006 that met methodologically rigorous criteria for inclusion. Of these 64 articles, only 24 examined both mental health "and" educational outcomes. The majority of…

  7. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially
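
    The core resampling idea, generating a null distribution by permuting array labels rather than assuming normality, can be sketched as follows. This is a generic permutation test, not the authors' Resampling-based empirical Bayes Method; the moderated-variance (empirical Bayes) step is omitted.

```python
import numpy as np

def permutation_pvalues(x: np.ndarray, y: np.ndarray,
                        n_perm: int = 1000, seed: int = 0) -> np.ndarray:
    """Per-gene mean-difference statistics (rows = genes, columns = arrays)
    with p-values taken from a label-permutation null distribution."""
    rng = np.random.default_rng(seed)
    obs = x.mean(axis=1) - y.mean(axis=1)
    data = np.hstack([x, y])
    n_x = x.shape[1]
    null = np.empty((n_perm, data.shape[0]))
    for b in range(n_perm):
        perm = rng.permutation(data.shape[1])
        null[b] = data[:, perm[:n_x]].mean(axis=1) - data[:, perm[n_x:]].mean(axis=1)
    # Two-sided p-value: how often the permuted statistic beats the observed one.
    return (np.abs(null) >= np.abs(obs)).mean(axis=0)
```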

  8. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1996-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.

  9. Toward an Empirically-Based Parametric Explosion Spectral Model

    DTIC Science & Technology

    2011-09-01

    Figure 6. Analysis of Vp/Vs ratio from USGS database (Wood, 2007) at Pahute Mesa and Yucca Flat. The ratio as a function of depth ... from Leonard and Johnson (1987) and Ferguson (1988) are shown for Pahute Mesa and Yucca Flat, respectively. Based on the distribution, we estimate constant Vp/Vs ratios of 1.671 and 1.871 at Pahute Mesa and Yucca Flat, respectively. In order to obtain the shear modulus and shear

  11. Theoretical and Empirical Bases of Character Development in Adolescence: A View of the Issues.

    PubMed

    Seider, Scott; Jayawickreme, Eranda; Lerner, Richard M

    2017-03-11

    Traditional models of character development have conceptualized character as a set of psychological attributes that motivate or enable individuals to function as competent moral agents. In this special section, we present seven articles, including two commentaries, that seek to make innovative conceptual and methodological contributions to traditional understandings in the extant scholarship of character and character development in youth. In the introduction to this special section, we provide overviews of these contributions, and discuss the implications of these articles both to the current scholarship and to applications aimed at promoting character and positive youth development.

  12. An Empirical Investigation of a Theoretically Based Measure of Perceived Wellness

    ERIC Educational Resources Information Center

    Harari, Marc J.; Waehler, Charles A.; Rogers, James R.

    2005-01-01

    The Perceived Wellness Survey (PWS; T. Adams, 1995; T. Adams, J. Bezner, & M. Steinhardt, 1997) is a recently developed instrument intended to operationalize the comprehensive Perceived Wellness Model (T. Adams, J. Bezner, & M. Steinhardt, 1997), an innovative model that attempts to include the balance of multiple life activities in its evaluation…

  14. Reliability-Based Scatter Factors. Volume 1. Theoretical and Empirical Results

    DTIC Science & Technology

    1978-03-01

    ... of S is given by the distribution function F_S(s) (Eq. 6). In deriving Eq. 6, the fact has been used that a scaled ratio statistic is distributed as χ² with 2n degrees of freedom ... summarize such observations for selected values of R, α and n. The last column in Tables 1-3 lists the value of Freudenthal's scatter factor

  15. Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    2005-01-01

    A ground based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength and polarity. Determination of the location of the lightning strike uses algorithms based on long used techniques of triangulation. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids and with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers. Much of that area is covered by rain forests. Thus knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.

  16. Implementing community-based provider participation in research: an empirical study

    PubMed Central

    2012-01-01

    Background Since 2003, the United States National Institutes of Health (NIH) has sought to restructure the clinical research enterprise in the United States by promoting collaborative research partnerships between academically-based investigators and community-based physicians. By increasing community-based provider participation in research (CBPPR), the NIH seeks to advance the science of discovery by conducting research in clinical settings where most people get their care, and accelerate the translation of research results into everyday clinical practice. Although CBPPR is seen as a promising strategy for promoting the use of evidence-based clinical services in community practice settings, few empirical studies have examined the organizational factors that facilitate or hinder the implementation of CBPPR. The purpose of this study is to explore the organizational start-up and early implementation of CBPPR in community-based practice. Methods We used longitudinal, case study research methods and an organizational model of innovation implementation to theoretically guide our study. Our sample consisted of three community practice settings that recently joined the National Cancer Institute’s (NCI) Community Clinical Oncology Program (CCOP) in the United States. Data were gathered through site visits, telephone interviews, and archival documents from January 2008 to May 2011. Results The organizational model for innovation implementation was useful in identifying and investigating the organizational factors influencing start-up and early implementation of CBPPR in CCOP organizations. In general, the three CCOP organizations varied in the extent to which they achieved consistency in CBPPR over time and across physicians. All three CCOP organizations demonstrated mixed levels of organizational readiness for change. Hospital management support and resource availability were limited across CCOP organizations early on, although they improved in one CCOP organization

  17. Effect of balancing selection on spatial genetic structure within populations: theoretical investigations on the self-incompatibility locus and empirical studies in Arabidopsis halleri

    PubMed Central

    Leducq, J-B; Llaurens, V; Castric, V; Saumitou-Laprade, P; Hardy, O J; Vekemans, X

    2011-01-01

    The effect of selection on patterns of genetic structure within and between populations may be studied by contrasting observed patterns at the genes targeted by selection with those of unlinked neutral marker loci. Local directional selection on target genes will produce stronger population genetic structure than at neutral loci, whereas the reverse is expected for balancing selection. However, theoretical predictions on the intensity of this signal under precise models of balancing selection are still lacking. Using negative frequency-dependent selection acting on self-incompatibility systems in plants as a model of balancing selection, we investigated the effect of such selection on patterns of spatial genetic structure within a continuous population. Using numerical simulations, we tested the effect of the type of self-incompatibility system, the number of alleles at the self-incompatibility locus and the dominance interactions among them, the extent of gene dispersal, and the immigration rate on spatial genetic structure at the selected locus and at unlinked neutral loci. We confirm that frequency-dependent selection is expected to reduce the extent of spatial genetic structure as compared to neutral loci, particularly in situations with low number of alleles at the self-incompatibility locus, high frequency of codominant interactions among alleles, restricted gene dispersal and restricted immigration from outside populations. Hence the signature of selection on spatial genetic structure is expected to vary across species and populations, and we show that empirical data from the literature as well as data reported here on three natural populations of the herb Arabidopsis halleri confirm these theoretical results. PMID:20531450
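
    The essential ingredient of the simulations, negative frequency-dependent selection, can be sketched with a Wright-Fisher model in which an allele's fitness declines with its frequency. Population size, allele number and the fitness function below are illustrative only; the dominance interactions, spatial structure and dispersal of the actual study are omitted.

```python
import numpy as np

def nfds_generation(freqs: np.ndarray, n_pop: int, rng) -> np.ndarray:
    """One Wright-Fisher generation under negative frequency-dependent
    selection: an allele's fitness falls as its frequency rises."""
    w = 1.0 - freqs                          # illustrative fitness function
    p = freqs * w
    p /= p.sum()
    counts = rng.multinomial(2 * n_pop, p)   # resample 2N gene copies
    return counts / (2 * n_pop)

rng = np.random.default_rng(1)
freqs = np.array([0.7, 0.2, 0.1])            # three S-alleles, illustrative start
for _ in range(200):
    freqs = nfds_generation(freqs, n_pop=500, rng=rng)
# Frequencies are pulled toward 1/3 each: rare alleles are protected, so
# diversity is maintained and genetic structure is weakened relative to neutrality.
```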

  18. Towards an Empirically Based Parametric Explosion Spectral Model

    SciTech Connect

    Ford, S R; Walter, W R; Ruppert, S; Matzel, E; Hauk, T; Gok, R

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.

  19. Toward an Empirically-based Parametric Explosion Spectral Model

    NASA Astrophysics Data System (ADS)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.
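
    The three-parameter source model referred to above can be written as a flat long-period level Ω0 rolling off beyond a corner frequency fc with a high-frequency slope p. A minimal sketch of that generalized form follows; the paper's exact parameterization may differ, and p = 2 recovers the classic Brune spectrum.

```python
import numpy as np

def generalized_brune(f: np.ndarray, omega0: float, fc: float, p: float) -> np.ndarray:
    """Three-parameter source spectrum: flat at omega0 below the corner
    frequency fc, rolling off as f**(-p) above it (p = 2 gives the
    classic Brune model)."""
    return omega0 / (1.0 + (f / fc) ** p)

f = np.logspace(-1, 2, 300)                       # frequency axis, Hz
spectrum = generalized_brune(f, omega0=1.0, fc=2.0, p=2.0)
```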

  20. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1995-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from that environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques: PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. Where there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.

  2. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as experimental subjects or is derived from mathematical equations. However, the determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
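
    In pulse-oximetry practice, a calibration curve of this kind maps the optical density ratio (ODR) to SaO2, often with a low-order polynomial fit. The sketch below fits a straight line to invented (ODR, SaO2) pairs standing in for simulated data; the widely quoted SpO2 ≈ 110 - 25R rule of thumb is an empirical analogue, not the curve derived in the paper.

```python
import numpy as np

# Hypothetical (ODR, SaO2) pairs standing in for simulated data;
# the paper's values are not reproduced here.
odr = np.array([0.4, 0.7, 1.0, 1.3, 1.6])
sao2 = np.array([100.0, 92.0, 85.0, 77.0, 70.0])      # percent

# Linear calibration SaO2 = a + b * ODR fitted by least squares.
b, a = np.polyfit(odr, sao2, 1)

def predict_sao2(r: float) -> float:
    """Map an optical density ratio to an SaO2 estimate via the fitted line."""
    return a + b * r

print(round(predict_sao2(1.1), 1))
```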

  3. Empirical wind retrieval model based on SAR spectrum measurements

    NASA Astrophysics Data System (ADS)

    Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad

    ambiguity from polarimetric SAR. A criterion based on the sign of the complex correlation coefficient between the VV and VH signals is applied to select the wind direction. An additional quality control on the wind speed value retrieved with the spectral method is applied. Here, we use the direction obtained with the spectral method and the backscattered signal for a CMOD wind speed estimate. The algorithm described above may be refined by the use of numerous SAR data and wind measurements. In the present preliminary work, the first results of SAR images combined with in situ data processing are presented. Our results are compared to the results obtained using the previously developed models CMOD, C-2PO for VH polarization and statistical wind retrieval approaches [1]. Acknowledgments. This work is supported by the Russian Foundation for Basic Research (grant 13-05-00852-a). [1] M. Portabella, A. Stoffelen, J. A. Johannessen, Toward an optimal inversion method for synthetic aperture radar wind retrieval, Journal of Geophysical Research, V. 107, N C8, 2002

  4. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  5. Use of an Empirically Based Marriage Education Program by Religious Organizations: Results of a Dissemination Trial

    ERIC Educational Resources Information Center

    Markman, Howard J.; Whitton, Sarah W.; Kline, Galena H.; Stanley, Scott M.; Thompson, Huette; St. Peters, Michelle; Leber, Douglas B.; Olmos-Gallo, P. Antonio; Prado, Lydia; Williams, Tamara; Gilbert, Katy; Tonelli, Laurie; Bobulinski, Michelle; Cordova, Allen

    2004-01-01

    We present an evaluation of the extent to which an empirically based couples' intervention program was successfully disseminated in the community. Clergy and lay leaders from 27 religious organizations who were trained to deliver the Prevention and Relationship Enhancement Program (PREP) were contacted approximately yearly for 5 years following…

  6. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via a MST model) and non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  7. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  8. Deriving Empirically-Based Design Guidelines for Advanced Learning Technologies that Foster Disciplinary Comprehension

    ERIC Educational Resources Information Center

    Poitras, Eric; Trevors, Gregory

    2012-01-01

    Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…

  9. Development of an Empirically Based Questionnaire to Investigate Young Students' Ideas about Nature of Science

    ERIC Educational Resources Information Center

    Chen, Sufen; Chang, Wen-Hua; Lieu, Sang-Chong; Kao, Huey-Lien; Huang, Mao-Tsai; Lin, Shu-Fen

    2013-01-01

    This study developed an empirically based questionnaire to monitor young learners' conceptions of nature of science (NOS). The questionnaire, entitled Students' Ideas about Nature of Science (SINOS), measured views on theory-ladenness, use of creativity and imagination, tentativeness of scientific knowledge, durability of scientific knowledge,…

  10. Feasibility of an Empirically Based Program for Parents of Preschoolers with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…

  11. Task-Based Language Teaching: An Empirical Study of Task Transfer

    ERIC Educational Resources Information Center

    Benson, Susan D.

    2016-01-01

    Since the 1980s, task-based language teaching (TBLT) has enjoyed considerable interest from researchers of second language acquisition (SLA), resulting in a growing body of empirical evidence to support how and to what extent this approach can promote language learning. Although transferability and generalizability are critical assumptions for…

  12. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…

  13. Implementing Evidence-Based Practice: A Review of the Empirical Research Literature

    ERIC Educational Resources Information Center

    Gray, Mel; Joy, Elyssa; Plath, Debbie; Webb, Stephen A.

    2013-01-01

    The article reports on the findings of a review of empirical studies examining the implementation of evidence-based practice (EBP) in the human services. Eleven studies were located that defined EBP as a research-informed, clinical decision-making process and identified barriers and facilitators to EBP implementation. A thematic analysis of the…

  16. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    EPA Science Inventory

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  18. An Empirically-Based Statewide System for Identifying Quality Pre-Kindergarten Programs

    ERIC Educational Resources Information Center

    Williams, Jeffrey M.; Landry, Susan H.; Anthony, Jason L.; Swank, Paul R.; Crawford, April D.

    2012-01-01

    This study presents an empirically-based statewide system that links information about pre-kindergarten programs with children's school readiness scores to certify pre-kindergarten classrooms as promoting school readiness. Over 8,000 children from 1,255 pre-kindergarten classrooms were followed longitudinally for one year. Pre-kindergarten quality…

  19. ECG baseline wander correction based on mean-median filter and empirical mode decomposition.

    PubMed

    Xin, Yi; Chen, Yu; Hao, Wei Tuo

    2014-01-01

    A novel approach to ECG baseline wander correction based on a mean-median filter and empirical mode decomposition is presented in this paper. The low frequency parts of the original signals were removed by the mean-median filter in a nonlinear way to obtain the baseline wander estimation; then its series of IMFs were sifted by t-test after empirical mode decomposition. The proposed method, tested on ECG signals from the MIT-BIH Arrhythmia Database and the European ST-T Database, is more effective than other baseline wander removal methods.
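
    A minimal sketch of the mean-median stage follows: a sliding median tracks the slow baseline wander while ignoring QRS spikes, a moving mean smooths the estimate, and the result is subtracted from the signal. The window lengths are assumptions, and the empirical mode decomposition refinement with t-test sifting is omitted.

```python
import numpy as np
from scipy.signal import medfilt
from scipy.ndimage import uniform_filter1d

def remove_baseline(ecg: np.ndarray, fs: int) -> np.ndarray:
    """Estimate baseline wander with a sliding median (robust to QRS spikes),
    smooth the estimate with a moving mean, and subtract it.
    Window lengths (~0.6 s median, ~0.2 s mean) are illustrative choices."""
    x = np.asarray(ecg, dtype=float)
    med_len = int(0.6 * fs) | 1                   # medfilt requires an odd kernel
    baseline = medfilt(x, kernel_size=med_len)
    baseline = uniform_filter1d(baseline, size=max(3, int(0.2 * fs)))
    return x - baseline
```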

  20. I-SOLV: a new surface-based empirical model for computing solvation free energies.

    PubMed

    Wang, Renxiao; Lin, Fu; Xu, Yong; Cheng, Tiejun

    2007-07-01

    We have developed a new empirical model, I-SOLV, for computing solvation free energies of neutral organic molecules. It computes the solvation free energy of a solute molecule by summing up the contributions from its component atoms. The contribution from a certain atom is determined by the solvent-accessible surface area as well as the surface tension of this atom. A total of 49 atom types are implemented in our model for classifying C, N, O, S, P, F, Cl, Br and I in common organic molecules. Their surface tensions are parameterized by using a data set of 532 neutral organic molecules with experimentally measured solvation free energies. A head-to-head comparison of our model with several other solvation models was performed on a test set of 82 molecules. Our model outperformed other solvation models, including widely used PB/SA and GB/SA models, with a mean unsigned error as low as 0.39 kcal/mol. Our study has demonstrated again that well-developed empirical solvation models are not necessarily less accurate than more sophisticated theoretical models. Empirical models may serve as appealing alternatives due to their simplicity and accuracy.
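
    The additive structure of the model is simple: the solvation free energy is the sum over atoms of each atom's surface tension times its solvent-accessible surface area. The sketch below uses hypothetical atom types and tension values; the 49 fitted I-SOLV parameters are not reproduced here.

```python
# Hypothetical surface tensions (kcal/mol/A^2) for a few atom types; the
# 49 fitted I-SOLV parameters are not reproduced here.
SURFACE_TENSION = {"C.aromatic": 0.012, "O.hydroxyl": -0.060, "N.amide": -0.048}

def solvation_free_energy(atoms) -> float:
    """Additive surface-area model: sum over atoms of the atom type's
    surface tension times its solvent-accessible surface area (SASA).
    `atoms` is an iterable of (atom_type, sasa_A2) pairs."""
    return sum(SURFACE_TENSION[atom_type] * sasa for atom_type, sasa in atoms)

# Toy molecule with three exposed atoms (areas in A^2, invented).
print(solvation_free_energy([("C.aromatic", 35.0), ("O.hydroxyl", 12.0),
                             ("N.amide", 8.0)]))
```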

  1. Plasmid stability analysis based on a new theoretical model employing stochastic simulations

    PubMed Central

    Werbowy, Olesia; Werbowy, Sławomir

    2017-01-01

    Here, we present a simple theoretical model to study plasmid stability, based on one input parameter, which is the copy number of plasmids present in a host cell. The Monte Carlo approach was used to analyze random fluctuations affecting plasmid replication and segregation, leading to gradual reduction in the plasmid population within the host cell. This model was employed to investigate maintenance of pEC156 derivatives, a high-copy-number ColE1-type Escherichia coli plasmid that carries an EcoVIII restriction-modification system. Plasmid stability was examined in selected Escherichia coli strains (MG1655, wild-type; MG1655 pcnB; and hyper-recombinogenic JC8679 sbcA). We have compared the experimental data concerning plasmid maintenance with the simulations and found that the theoretical stability patterns exhibited excellent agreement with those tested empirically. In our simulations, we have investigated the influence of replication failures (α parameter), uneven partition as a consequence of multimer resolution failures (δ parameter), and the post-segregational killing factor (β parameter). All of these factors act at the same time and affect plasmid inheritance at different levels. In the case of pEC156 derivatives, we concluded that multimerization is a major determinant of plasmid stability. Our data indicate that even small changes in the fidelity of segregation can have serious effects on plasmid stability. Use of the proposed mathematical model can provide a valuable description of plasmid maintenance, as well as enable prediction of the probability of plasmid loss. PMID:28846713
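
    A minimal Monte Carlo sketch of such a copy-number model follows: per generation, each plasmid replicates unless replication fails (α), copies are partitioned binomially at division with occasional all-or-nothing splits standing in for unresolved multimers (δ), and a plasmid-free daughter survives post-segregational killing only with probability 1 - β. Parameter values and the copy-number cap are illustrative, not fitted to pEC156.

```python
import numpy as np

def plasmid_loss_fraction(copy_number: int = 20, generations: int = 100,
                          n_cells: int = 10000, alpha: float = 0.01,
                          delta: float = 0.05, beta: float = 0.9,
                          seed: int = 0) -> float:
    """Monte Carlo sketch: fraction of tracked lineages that end up
    plasmid-free. alpha: per-plasmid replication failure; delta: chance of
    an all-or-nothing partition (unresolved multimers); beta: probability
    that a plasmid-free daughter is removed by post-segregational killing."""
    rng = np.random.default_rng(seed)
    copies = np.full(n_cells, copy_number)
    for _ in range(generations):
        # Each plasmid yields a second copy with probability 1 - alpha.
        replicated = copies + rng.binomial(copies, 1.0 - alpha)
        to_daughter = rng.binomial(replicated, 0.5)       # random partition
        uneven = rng.random(n_cells) < delta
        to_daughter[uneven] = 0                           # sister gets them all
        # If the tracked plasmid-free daughter is killed, follow the sister.
        killed = (to_daughter == 0) & (rng.random(n_cells) < beta)
        copies = np.where(killed, replicated - to_daughter, to_daughter)
        copies = np.minimum(copies, 4 * copy_number)      # crude copy-number cap
    return float(np.mean(copies == 0))
```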

  2. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations leads to a claim of general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.

  4. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator etc. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
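
    A minimal sketch of the Perona-Malik scheme studied in this line of work follows: an explicit time-stepping loop in which the diffusivity g(d) = exp(-(d/κ)²) vanishes near strong gradients, so smoothing acts inside regions while edges are preserved. Parameter values are illustrative, and periodic boundaries via np.roll are a simplification.

```python
import numpy as np

def perona_malik(img: np.ndarray, n_iter: int = 20, kappa: float = 15.0,
                 dt: float = 0.2) -> np.ndarray:
    """Explicit-scheme Perona-Malik diffusion. The diffusivity
    g(d) = exp(-(d/kappa)**2) shrinks where gradients are strong, so the
    filter smooths within regions while leaving edges largely intact."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic borders via np.roll,
        # a simplification adequate for a sketch).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (np.exp(-(dn / kappa) ** 2) * dn
                   + np.exp(-(ds / kappa) ** 2) * ds
                   + np.exp(-(de / kappa) ** 2) * de
                   + np.exp(-(dw / kappa) ** 2) * dw)
    return u
```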

  5. Theoretical Investigations of Plasma-Based Accelerators and Other Advanced Accelerator Concepts

    SciTech Connect

    Shuets, G.

    2004-05-21

    Theoretical investigations of plasma-based accelerators and other advanced accelerator concepts. The focus of the work was on the development of plasma-based and structure-based accelerating concepts, including laser-plasma, plasma-channel, and microwave-driven plasma accelerators.

  6. TheoReTS - An information system for theoretical spectra based on variational predictions from molecular potential energy and dipole moment surfaces

    NASA Astrophysics Data System (ADS)

    Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.

    2016-09-01

    Knowledge of intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres, brown dwarfs and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six-atomic molecules, including phosphine, methane, ethylene, silane, methyl fluoride, and their isotopic species 13CH4 , 12CH3D , 12CH2D2 , 12CD4 , 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface following the Model-View-Controller architecture. The full-featured web application is written in PHP using the Yii framework and C++ software modules. In the case of very large high-temperature line lists, data compression is implemented for fast interactive spectra simulations of quasi-continual absorption due to the high line density. Applications for the TheoReTS may
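
    As a sketch of one simulation ingredient named above, an area-normalized Voigt line shape computed from the Faddeeva function (scipy.special.wofz); the line position and half-width parameters are generic assumptions, not TheoReTS data:

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt(nu, nu0, alpha_g, gamma_l):
        """Voigt profile centered at nu0: a Gaussian of HWHM alpha_g
        convolved with a Lorentzian of HWHM gamma_l, evaluated as
        Re[w(z)] / (sigma * sqrt(2*pi))."""
        sigma = alpha_g / np.sqrt(2.0 * np.log(2.0))   # Gaussian std from HWHM
        z = ((nu - nu0) + 1j * gamma_l) / (sigma * np.sqrt(2.0))
        return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    # e.g. a line at 3000 cm^-1 with equal Gaussian and Lorentzian half-widths:
    nu = np.linspace(2999.0, 3001.0, 2001)
    profile = voigt(nu, 3000.0, 0.05, 0.05)
    print(np.trapz(profile, nu))  # ~1, since the profile is area-normalized
    ```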

  7. An Empirical Kaiser Criterion.

    PubMed

    Braeken, Johan; van Assen, Marcel A L M

    2016-03-31

    In exploratory factor analysis (EFA), most popular methods for dimensionality assessment, such as the screeplot, the Kaiser criterion, or (the current gold standard) parallel analysis, are based on eigenvalues of the correlation matrix. To further understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are desired to be relatively short, but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
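
    The Empirical Kaiser Criterion itself uses closed-form reference eigenvalues; as background, a minimal Monte Carlo sketch of the parallel-analysis baseline it is benchmarked against (retain factors while the observed eigenvalue exceeds the corresponding quantile of eigenvalues from random data of the same dimensions):

    ```python
    import numpy as np

    def parallel_analysis(data, n_sims=200, quantile=0.95, seed=0):
        """Horn's parallel analysis: number of factors whose sample
        eigenvalues exceed the chosen quantile of eigenvalues obtained
        from random normal data of the same n x p shape."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
        ref = np.empty((n_sims, p))
        for s in range(n_sims):
            x = rng.standard_normal((n, p))
            ref[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
        thresh = np.quantile(ref, quantile, axis=0)
        # first position where the observed eigenvalue drops below the threshold
        return int(np.argmin(obs > thresh)) if (obs <= thresh).any() else p
    ```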

  8. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD): its decomposition process uses an adding-window principle, which effectively resolves the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the concept of the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.
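
    A minimal sketch of the sum-modified-Laplacian focus measure named above, together with a pixel-wise choose-max fusion rule built on it; the window and step details of the paper's actual scheme are omitted, so this is illustrative only:

    ```python
    import numpy as np

    def sml(img, step=1):
        """Sum-modified-Laplacian: |2u - u(x-s) - u(x+s)| + |2u - u(y-s) - u(y+s)|."""
        u = img.astype(float)
        return (np.abs(2 * u - np.roll(u, step, 0) - np.roll(u, -step, 0))
                + np.abs(2 * u - np.roll(u, step, 1) - np.roll(u, -step, 1)))

    def fuse_choose_max(coeff_a, coeff_b):
        """Keep, pixel-wise, the coefficient from the source with the higher SML."""
        return np.where(sml(coeff_a) >= sml(coeff_b), coeff_a, coeff_b)
    ```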

  9. Effectiveness of a Theoretically-Based Judgment and Decision Making Intervention for Adolescents

    PubMed Central

    Knight, Danica K.; Dansereau, Donald F.; Becan, Jennifer E.; Rowan, Grace A.; Flynn, Patrick M.

    2014-01-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37% female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one’s own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors. PMID:24760288

  10. Effectiveness of a theoretically-based judgment and decision making intervention for adolescents.

    PubMed

    Knight, Danica K; Dansereau, Donald F; Becan, Jennifer E; Rowan, Grace A; Flynn, Patrick M

    2015-05-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37 % female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one's own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors.

  11. Theoretic base of Edge Local Mode triggering by vertical displacements

    NASA Astrophysics Data System (ADS)

    Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.

    2015-05-01

    Vertical instability is studied with an R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. The equilibrium parallel current density, j//, at the plasma boundary is a drive of the vertical instability similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides a theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  12. Theoretic base of Edge Local Mode triggering by vertical displacements

    SciTech Connect

    Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.

    2015-05-15

    Vertical instability is studied with an R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. The equilibrium parallel current density, j//, at the plasma boundary is a drive of the vertical instability similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides a theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  13. Subsystem-based theoretical spectroscopy of biomolecules and biomolecular assemblies.

    PubMed

    Neugebauer, Johannes

    2009-12-21

    The absorption properties of chromophores in biomolecular systems are subject to several fine-tuning mechanisms. Specific interactions with the surrounding protein environment often lead to significant changes in the excitation energies, but bulk dielectric effects can also play an important role. Moreover, strong excitonic interactions can occur in systems with several chromophores at close distances. For interpretation purposes, it is often desirable to distinguish different types of environmental effects, such as geometrical, electrostatic, polarization, and response (or differential polarization) effects. Methods that can be applied for theoretical analyses of such effects are reviewed herein, ranging from continuum and point-charge models to explicit quantum chemical subsystem methods for environmental effects. Connections to physical model theories are also outlined. Prototypical applications to optical spectra and excited states of fluorescent proteins, biomolecular photoreceptors, and photosynthetic protein complexes are discussed.

  14. ['Walkability' and physical activity - results of empirical studies based on the 'Neighbourhood Environment Walkability Scale (NEWS)'].

    PubMed

    Rottmann, M; Mielck, A

    2014-02-01

    'Walkability' is mainly assessed with the NEWS questionnaire (Neighbourhood Environment Walkability Scale); in Germany, however, this questionnaire is largely unknown. We try to fill this gap by providing a systematic overview of empirical studies based on the NEWS. A systematic review was conducted of original papers including empirical analyses based on the NEWS. The results are summarised and presented in tables. Altogether, 31 publications could be identified. Most of them focus on associations with the variable 'physical activity', and they often report significant associations with at least some of the scales included in the NEWS. Due to methodological differences between the studies, it is difficult to compare the results. The concept of 'walkability' should also be established in the German public health discussion. A number of methodological challenges remain to be solved, such as the identification of those scales and items in the NEWS that show the strongest associations with individual health behaviours.

  15. Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).

    PubMed

    Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou

    2014-01-01

    To better analyze images with Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). Firstly, normal probability plots confirm that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Secondly, an energy estimation equation for the ith 2D-IMF (i = 2, 3, 4, …) is proposed, by analogy with that of the ith IMF (i = 2, 3, 4, …) obtained by empirical mode decomposition (EMD). Thirdly, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From a practical perspective, the method is applied to denoising magnetic resonance images (MRI) of the brain, and the results show better denoising performance compared with other methods.
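
    A 1D analogue of this IMF-thresholding idea can be sketched with the PyEMD package (pip install EMD-signal). The paper works in 2D with its own energy equation; the IMF noise-energy model (E_k ≈ (E_1/β)ρ^(−k), β ≈ 0.719, ρ ≈ 2.01) and the universal threshold used below are common choices from the EMD-denoising literature, assumed here purely for illustration:

    ```python
    import numpy as np
    from PyEMD import EMD

    def emd_denoise_1d(signal, beta=0.719, rho=2.01):
        """Decompose, threshold each IMF against its predicted noise level,
        and reconstruct; the final IMF (the trend residue) is kept untouched."""
        imfs = EMD().emd(signal)
        n = len(signal)
        e1 = (np.median(np.abs(imfs[0])) / 0.6745) ** 2  # robust variance of IMF1
        out = imfs[-1].astype(float).copy()              # keep the trend
        for k, imf in enumerate(imfs[:-1], start=1):
            ek = e1 if k == 1 else (e1 / beta) * rho ** (-k)
            t = np.sqrt(2.0 * ek * np.log(n))            # universal threshold
            out += np.where(np.abs(imf) > t, imf, 0.0)   # hard thresholding
        return out
    ```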

  16. An Empirical Study for Impacts of Measurement Errors on EHR based Association Studies

    PubMed Central

    Duan, Rui; Cao, Ming; Wu, Yonghui; Huang, Jing; Denny, Joshua C; Xu, Hua; Chen, Yong

    2016-01-01

    Over the last decade, Electronic Health Records (EHR) systems have been increasingly implemented at US hospitals. Despite their great potential, the complex and uneven nature of clinical documentation and data quality brings additional challenges for analyzing EHR data. A critical challenge is the information bias due to measurement errors in outcomes and covariates. We conducted empirical studies to quantify the impacts of this information bias on association studies. Specifically, we designed our simulation studies based on the characteristics of the Electronic Medical Records and Genomics (eMERGE) Network. Through simulation studies, we quantified the loss of power due to misclassifications in case ascertainment and measurement errors in covariate status extraction, with respect to different levels of misclassification rates, disease prevalence, and covariate frequencies. These empirical findings can help investigators better understand the potential power loss due to misclassification and measurement errors under a variety of conditions in EHR-based association studies. PMID:28269935
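
    A minimal sketch of the kind of simulation described: Monte Carlo power of a 2x2 association test when case status is misclassified with a given sensitivity and specificity. All names and parameter values are illustrative assumptions, not the eMERGE settings:

    ```python
    import numpy as np
    from scipy import stats

    def power_with_misclassification(n=2000, or_true=1.5, prev=0.2, maf=0.3,
                                     sens=0.9, spec=0.95, n_rep=500, seed=1):
        """Fraction of simulated datasets in which the association between a
        binary covariate and the *observed* (misclassified) case status is
        detected at alpha = 0.05."""
        rng = np.random.default_rng(seed)
        beta0 = np.log(prev / (1 - prev))
        hits = 0
        for _ in range(n_rep):
            x = rng.binomial(1, maf, n)                      # binary covariate
            p = 1 / (1 + np.exp(-(beta0 + np.log(or_true) * x)))
            y = rng.binomial(1, p)                           # true case status
            flip = np.where(y == 1, rng.random(n) > sens, rng.random(n) > spec)
            y_obs = np.where(flip, 1 - y, y)                 # misclassified status
            table = [[np.sum((y_obs == a) & (x == b)) for b in (0, 1)]
                     for a in (0, 1)]
            if stats.chi2_contingency(table)[1] < 0.05:
                hits += 1
        return hits / n_rep

    print(power_with_misclassification())
    ```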

  17. A theoretical drought classification method for the multivariate drought index based on distribution properties of standardized drought indices

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Xia, Youlong; Ouyang, Wei; Shen, Xinyi

    2016-06-01

    Drought indices have been commonly used to characterize different properties of drought, and the need to combine multiple drought indices for accurate drought monitoring has been well recognized. Based on linear combinations of multiple drought indices, a variety of multivariate drought indices have recently been developed for comprehensive drought monitoring to integrate drought information from various sources. For operational drought management, it is generally required to determine thresholds of drought severity for drought classification to trigger a mitigation response during a drought event to aid stakeholders and policy makers in decision making. Though the classification of drought categories based on univariate drought indices has been well studied, classification methods for multivariate drought indices have been less explored, mainly due to the lack of information about their distribution properties. In this study, a theoretical drought classification method is proposed for a multivariate drought index based on a linear combination of multiple indices. Based on the distribution property of the standardized drought index, a theoretical distribution of the linear combined index (LDI) is derived, which can be used for classifying drought with the percentile approach. Application of the proposed method to drought classification of an LDI based on the standardized precipitation index (SPI), standardized soil moisture index (SSI), and standardized runoff index (SRI) is illustrated with climate division data from California, United States. Results from comparison with empirical methods show a satisfactory performance of the proposed method for drought classification.
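
    A minimal sketch of the linear-combination-and-percentile idea: since each standardized index is approximately N(0,1), a weighted sum can be re-standardized by its variance w'Rw and mapped to percentile classes. The equal weights, the use of an empirical correlation matrix, and the USDM-style cut points below are illustrative assumptions; the paper derives the LDI distribution theoretically:

    ```python
    import numpy as np
    from scipy import stats

    def ldi_classify(spi, ssi, sri, w=(1/3, 1/3, 1/3)):
        """Standardize a weighted sum of drought indices and bin it into
        percentile classes (0 = D4 exceptional ... 5 = no drought)."""
        x = np.vstack([spi, ssi, sri])
        w = np.asarray(w)
        var = w @ np.corrcoef(x) @ w        # variance of the weighted sum
        ldi = (w @ x) / np.sqrt(var)        # standardized LDI
        pct = stats.norm.cdf(ldi) * 100     # percentile of each value
        bins = [2, 5, 10, 20, 30]           # D4, D3, D2, D1, D0 cut points
        return np.digitize(pct, bins)
    ```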

  18. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection.

    PubMed

    Tchesnokova, Veronika; Avagyan, Hovhannes; Rechkina, Elena; Chan, Diana; Muradova, Mariya; Haile, Helen Ghirmai; Radey, Matthew; Weissman, Scott; Riddell, Kim; Scholes, Delia; Johnson, James R; Sokurenko, Evgeni V

    2017-01-01

    Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using 20%, 10%, and 30% allowed-resistance thresholds. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients' urine within 25-35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care.
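
    The decision rule at the core of this approach is simple to state in code. A sketch with a hypothetical clonotype-specific antibiogram; the drug names and resistance rates are invented for illustration:

    ```python
    # Hypothetical antibiogram for one detected E. coli clonotype:
    # drug -> fraction of isolates of this clonotype that are resistant.
    antibiogram = {"trimethoprim/sulfamethoxazole": 0.08,
                   "ciprofloxacin": 0.34,
                   "nitrofurantoin": 0.02,
                   "cefazolin": 0.12}

    def acceptable(antibiogram, threshold=0.10):
        """Drugs whose clonotype-specific resistance rate is below the
        allowed-resistance threshold (the study evaluates 10/20/30%)."""
        return sorted(d for d, r in antibiogram.items() if r < threshold)

    print(acceptable(antibiogram, 0.10))
    # -> ['nitrofurantoin', 'trimethoprim/sulfamethoxazole']
    ```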

  19. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection

    PubMed Central

    Tchesnokova, Veronika; Avagyan, Hovhannes; Rechkina, Elena; Chan, Diana; Muradova, Mariya; Haile, Helen Ghirmai; Radey, Matthew; Weissman, Scott; Riddell, Kim; Scholes, Delia; Johnson, James R.

    2017-01-01

    Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using 20%, 10%, and 30% allowed-resistance thresholds. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients’ urine within 25–35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care. PMID:28350870

  20. Measuring microscopic evolution processes of complex networks based on empirical data

    NASA Astrophysics Data System (ADS)

    Chi, Liping

    2015-04-01

    Aiming at understanding the microscopic mechanisms of complex systems in the real world, we measure the evolution properties of two empirical data sets. In the Autonomous Systems Internet data, the network size keeps growing although the system suffers a high rate of node deletion (r = 0.4) and link deletion (q = 0.81). However, the average degree keeps almost unchanged during the whole time range. At each time step the external links attached to a new node are about c = 1.1 and the internal links added between existing nodes are approximately m = 8. The Scientific Collaboration data are a cumulative record of all the authors from 1893 up to the considered year. There is no deletion of nodes or links, r = q = 0. The external and internal links at each time step are c = 1.04 and m = 0, correspondingly. The exponents of the degree distribution p(k) ~ k^(-γ) of these two empirical datasets, γ_data, are in good agreement with those obtained theoretically, γ_theory. The results indicate that these evolution quantities may provide insight into capturing the microscopic dynamical processes that govern the network topology.
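
    A minimal sketch of how an empirical degree-distribution exponent γ_data can be estimated for comparison with theory, using the discrete maximum-likelihood approximation of Clauset et al. (2009); the k_min handling is simplified and the synthetic degree sequence is an illustrative assumption:

    ```python
    import numpy as np

    def powerlaw_gamma(degrees, k_min=1):
        """Discrete MLE approximation for gamma in p(k) ~ k^-gamma:
        gamma = 1 + n / sum(ln(k_i / (k_min - 0.5))), over k_i >= k_min."""
        k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
        return 1.0 + k.size / np.sum(np.log(k / (k_min - 0.5)))

    # e.g. a synthetic heavy-tailed degree sequence with exponent ~ 2.5:
    rng = np.random.default_rng(0)
    degrees = np.round(rng.pareto(1.5, 10_000) + 1).astype(int)
    print(powerlaw_gamma(degrees))
    ```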

  1. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  2. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two-Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  3. Exploring multi/full polarised SAR imagery for understanding surface soil moisture and roughness by using semi-empirical and theoretical models and field experiments

    NASA Astrophysics Data System (ADS)

    Dong, Lu; Marzahn, Philip; Ludwig, Ralf

    2010-05-01

    -range digital photogrammetry for surface roughness retrieval. A semi-empirical model is tested and a theoretical model, the AIEM, is utilised for further understanding. Results demonstrate that the semi-empirical soil moisture retrieval algorithm, which was developed in studies under humid climate conditions, must be carefully adapted to the drier Mediterranean environment. Modifying the approach by incorporating regional field data led to a considerable improvement in the algorithm's performance. In addition, it is found that the current representation of soil surface roughness in the AIEM is insufficient to account for the specific heterogeneities at the field scale. The findings of this study indicate the necessity for future research, which must be extended to a more integrated combination of current sensors, e.g., ENVISAT/ASAR, ALOS/PALSAR and Radarsat-2 imagery, and advanced development of soil moisture retrieval models for multi/full-polarised radar imagery.

  4. An empirically based model for knowledge management in health care organizations.

    PubMed

    Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita

    2016-01-01

    Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) an in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire, as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and its potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of

  5. The Theoretical Astrophysical Observatory: Cloud-based Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  6. THE THEORETICAL ASTROPHYSICAL OBSERVATORY: CLOUD-BASED MOCK GALAXY CATALOGS

    SciTech Connect

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-15

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  7. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    NASA Astrophysics Data System (ADS)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Protein fluorescence has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to errors in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods; among them, Monte Carlo based methods yield the highest accuracy. In this work, we have generated a lookup table from Monte Carlo simulations of fluorescence emission by protein and fitted the generated lookup table with an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  8. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
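
    As a sketch of the linear cascade modeling named above: at zero spatial frequency, each stage with mean gain g and gain variance vg maps the quantum mean and variance as q → g·q and var → g²·var + vg·q (the Burgess variance theorem), and DQE(0) is the output-to-input SNR² ratio. The stage values below are textbook-style illustrations, not the authors' parameterization:

    ```python
    def cascade_dqe(n_in, stages, read_noise=0.0):
        """Zero-frequency DQE of a linear cascade of amplifying stages.

        stages: list of (g, vg) pairs; binomial selection has vg = g*(1-g),
        Poisson-like gain has vg = g. Input is Poisson (variance = mean)."""
        q, var = n_in, n_in
        for g, vg in stages:
            q, var = g * q, g * g * var + vg * q
        var += read_noise ** 2          # additive instrumentation noise
        return (q * q / var) / n_in     # SNR_out^2 / SNR_in^2

    # x-ray absorption (binomial 0.8), conversion to light (gain ~1000),
    # optical coupling (binomial 0.05), optical detection (binomial 0.8):
    stages = [(0.8, 0.8 * 0.2), (1000.0, 1000.0),
              (0.05, 0.05 * 0.95), (0.8, 0.8 * 0.2)]
    print(cascade_dqe(1000.0, stages, read_noise=100.0))  # ~0.77
    ```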

  9. HIRS-AMTS satellite sounding system test - Theoretical and empirical vertical resolving power. [High resolution Infrared Radiation Sounder - Advanced Moisture and Temperature Sounder]

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    The present investigation is concerned with the vertical resolving power of satellite-borne temperature sounding instruments. Information is presented on the capabilities of the High Resolution Infrared Radiation Sounder (HIRS) and a proposed sounding instrument called the Advanced Moisture and Temperature Sounder (AMTS). Two quite different methods for assessing the vertical resolving power of satellite sounders are discussed. The first is the theoretical method of Conrath (1972), which was patterned after the work of Backus and Gilbert (1968). The Backus-Gilbert-Conrath (BGC) approach includes a formalism for deriving a retrieval algorithm for optimizing the vertical resolving power. However, a retrieval algorithm constructed in the BGC optimal fashion is not necessarily optimal as far as actual temperature retrievals are concerned. Thus, an independent criterion for vertical resolving power is discussed. The criterion is based on actual retrievals of signal structure in the temperature field.

  10. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    NASA Astrophysics Data System (ADS)

    Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.

    2016-08-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to be run on computing clusters. This method significantly reduces the demands on computing resources compared to traditional approaches based on ab initio techniques, while providing adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars.
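
    The essence of the empirical pseudopotential method can be sketched in one dimension: diagonalize the plane-wave Hamiltonian H_GG' = (k+G)²/2 δ_GG' + V(G−G'), where the few Fourier coefficients V(G) are the empirically fitted parameters. Everything below (units with ħ = m = 1, a toy one-harmonic potential) is an illustrative assumption, not the GaSe parameterization:

    ```python
    import numpy as np

    def bands_1d(v_fourier, g_max=8, a=1.0, n_k=64):
        """Lowest four bands of a 1D empirical pseudopotential model.

        v_fourier maps integer multiples of the reciprocal vector b = 2*pi/a
        to Fourier coefficients of the (empirically fitted) potential."""
        b = 2 * np.pi / a
        gs = b * np.arange(-g_max, g_max + 1)
        ks = np.linspace(-b / 2, b / 2, n_k)       # first Brillouin zone
        bands = []
        for k in ks:
            h = np.diag(0.5 * (k + gs) ** 2)       # kinetic term
            for i, gi in enumerate(gs):
                for j, gj in enumerate(gs):
                    h[i, j] += v_fourier.get(round((gi - gj) / b), 0.0)
            bands.append(np.linalg.eigvalsh(h)[:4])
        return ks, np.array(bands)

    # nearly-free electrons plus a single empirical Fourier component V(+-1):
    ks, e = bands_1d({1: -0.2, -1: -0.2})
    ```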

  11. An Empirical Study of Plan-Based Representations of Pascal and Fortran Code.

    DTIC Science & Technology

    1987-06-01

    Report No. CCL-0687-0. Office of Naval Research Contract No. N00014-86-K-0876, Work Unit No. NR 4424203-01. Approved for public release; distribution unlimited. Researchers have argued recently that programmers utilize a plan-based representation when composing or comprehending program code. In a series of studies we

  12. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  13. Development of Empirically Based Time-to-death Curves for Combat Casualty Deaths in Iraq and Afghanistan

    DTIC Science & Technology

    2015-01-01

    Naval Health Research Center report (DOI: 10.1177/1548512914531353). Development of empirically based time-to-death curves for combat casualty deaths in Iraq and Afghanistan ... casualties with life-threatening injuries. The curves developed from that research were based on a small dataset (n = 160, with 26 deaths and 134

  14. Theoretical Foundations of "Competitive Team-Based Learning"

    ERIC Educational Resources Information Center

    Hosseini, Seyed Mohammad Hassan

    2010-01-01

    This paper serves as a platform to precisely substantiate the success of "Competitive Team-Based Learning" (CTBL) as an effective and rational educational approach. To that end, it brings to the fore part of the (didactic) theories and hypotheses which in one way or another delineate and confirm the mechanisms under which successful…

  15. A Theoretical Approach to School-based HIV Prevention.

    ERIC Educational Resources Information Center

    DeMuth, Diane; Symons, Cynthia Wolford

    1989-01-01

    Presents examples of appropriate intervention strategies for professionals working with school-based human immunodeficiency virus (HIV) prevention among adolescents. A multidisciplinary approach is advisable because influencing adolescent sexual behavior is a complex matter. Consistent, continuous messages through multiple channels and by multiple…

  16. Why Problem-Based Learning Works: Theoretical Foundations

    ERIC Educational Resources Information Center

    Marra, Rose M.; Jonassen, David H.; Palmer, Betsy; Luft, Steve

    2014-01-01

    Problem-based learning (PBL) is an instructional method where student learning occurs in the context of solving an authentic problem. PBL was initially developed out of an instructional need to help medical school students learn their basic sciences knowledge in a way that would be more lasting while helping to develop clinical skills…

  18. Fault Diagnosis of Rotating Machinery Based on an Adaptive Ensemble Empirical Mode Decomposition

    PubMed Central

    Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe

    2013-01-01

    The vibration based signal processing technique is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery. But it has the shortcoming of mode mixing in decomposing signals. To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed accordingly. EEMD is able to reduce the mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most of the studies on EEMD, the parameters were selected artificially and subjectively. To solve the problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. The simulation, the experimental and the application results demonstrate that the adaptive EEMD provides the improved results compared with the original EEMD in diagnosing rotating machinery. PMID:24351666
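
    The two quantities the paper adapts (the ensemble size and the added-noise amplitude) are exposed as parameters in the open-source PyEMD package (pip install EMD-signal), which can be used to sketch a plain, non-adaptive EEMD run; the test signal below is invented for illustration:

    ```python
    import numpy as np
    from PyEMD import EEMD

    t = np.linspace(0.0, 1.0, 1000)
    s = (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
         + 0.1 * np.random.default_rng(0).standard_normal(t.size))

    # trials = number of noise realizations averaged in the ensemble;
    # noise_width = std of the added white noise relative to the signal's std.
    eemd = EEMD(trials=100, noise_width=0.05)
    imfs = eemd.eemd(s, t)
    print(imfs.shape)  # (number_of_imfs, len(s))
    ```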

  19. Fault diagnosis of rotating machinery based on an adaptive ensemble empirical mode decomposition.

    PubMed

    Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe

    2013-12-09

    The vibration based signal processing technique is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery. But it has the shortcoming of mode mixing in decomposing signals. To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed accordingly. EEMD is able to reduce the mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most of the studies on EEMD, the parameters were selected artificially and subjectively. To solve the problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. The simulation, the experimental and the application results demonstrate that the adaptive EEMD provides the improved results compared with the original EEMD in diagnosing rotating machinery.

  20. Theoretical analysis of C-F bond cleavage mediated by cob[I]alamin-based structures.

    PubMed

    Cortés-Arriagada, D; Toro-Labbe, A; Mora, J R; Rincón, L; Mereau, R; Torres, F J

    2017-08-17

    In the present work, C-F bond cleavage mediated by the super-reduced form of cobalamin (i.e., Co(I)Cbl) was theoretically studied at the ONIOM(BP86/6-311++G(d,p):PM6) + SMD level of theory. Dispersion effects were introduced by employing Grimme's empirical dispersion at the ONIOM(BP86-D/6-311++G(d,p):PM6) + SMD level. In the first stage of the study, cobalamin was characterized in terms of the coordination number of the central cobalt atom. The ONIOM(BP86/6-311++G(d,p):PM6) results showed that the base-off form of the system is slightly more stable than its base-on counterpart (ΔE = E(base-off) − E(base-on) ≈ −2 kcal/mol). The inclusion of dispersive forces in the description of the system stabilizes the base-on form, which becomes as stable as its base-off counterpart. Moreover, in the latter case, the energy barrier separating both structures was found to be negligible, with a computed value of 1.02 kcal/mol. In the second stage of the work, the reaction Co(I)Cbl + CH3F → MeCbl + F(-) was studied considering the base-off and the base-on forms of Co(I)Cbl. The reaction that occurs in the presence of the base-on form of Co(I)Cbl was found to be kinetically more favorable (ΔE‡ = 13.7 kcal/mol) than that occurring in the presence of the base-off form (ΔE‡ = 41.2 kcal/mol). Further reaction-force analyses of the processes showed that the energy barrier to C-F bond cleavage arises largely due to structural rearrangements when the reaction occurs on the base-on form of the Co(I)Cbl complex, but is mainly due to electronic rearrangements when the reaction takes place on the base-off form of the complex. The latter behavior emerges from differences in the synchronicity of the bond strengthening/weakening processes along the reaction path; the base-on mode of Co(I)Cbl is able to decrease the synchronicity of the chemical events. This work gives new molecular-level insights into the role of Cbl-based systems in the cleavage of C-F bonds.

  1. Improving the theoretical underpinnings of process-based hydrologic models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Schaefli, Bettina; Schymanski, Stanislaus J.; Samaniego, Luis; Luce, Charles H.; Jackson, Bethanna M.; Freer, Jim E.; Arnold, Jeffrey R.; Moore, R. Dan; Istanbulluoglu, Erkan; Ceola, Serena

    2016-03-01

    In this Commentary, we argue that it is possible to improve the physical realism of hydrologic models by making better use of existing hydrologic theory. We address the following questions: (1) what are some key elements of current hydrologic theory; (2) how can those elements best be incorporated where they may be missing in current models; and (3) how can we evaluate competing hydrologic theories across scales and locations? We propose that hydrologic science would benefit from a model-based community synthesis effort to reframe, integrate, and evaluate different explanations of hydrologic behavior, and provide a controlled avenue to find where understanding falls short.

  2. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    SciTech Connect

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Next, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.

  3. Knowledge-based control and case-based diagnosis based upon empirical knowledge and fuzzy logic for the SBR plant.

    PubMed

    Bae, H; Seo, H Y; Kim, S; Kim, Y

    2006-01-01

    Because biological wastewater treatment plants (WWTPs) involve long time delays and various disturbances, skilled operators generally control the plant manually, based on empirical knowledge, and usually diagnose the plant using similar cases experienced in the past. For effective management of the plant, system automation has to be accomplished based upon operating recipes. This paper introduces automatic control and diagnosis based upon the operator's knowledge. Fuzzy logic was employed to design this knowledge-based controller because fuzzy logic can convert linguistic information into rules. The controller can manage the influent and external carbon in consideration of the loading rate. The input of the controller is not the loading rate but the dissolved oxygen (DO) lag-time, which has a strong relation to the loading rate. This approach can replace an expensive sensor, which measures the loading rate and ammonia concentration in the reactor, with a cheaper DO sensor. The proposed controller can assure optimal operation and prevent the over-feeding problem. Case-based diagnosis was achieved by analysis of profile patterns collected in the past. A new test profile was diagnosed by comparing it with template patterns containing normal and abnormal cases. The proposed control and diagnostic system will guarantee the effective and stable operation of WWTPs.

  4. Theoretically predicted Fox-7 based new high energy density molecules

    NASA Astrophysics Data System (ADS)

    Ghanta, Susanta

    2016-08-01

    Computational investigations of CHNO-based high energy density molecules (HEDMs) designed on the FOX-7 (1,1-dinitro-2,2-diamino ethylene) skeleton are presented. We report the structures, stability and detonation properties of these new molecules, with a systematic analysis of the crystal density, the activation energy for nitro-to-nitrite isomerisation, and the C-NO2 bond dissociation energy. Atoms-in-molecules (AIM) calculations have been performed to interpret the intra-molecular weak H-bonding interactions and the stability of the C-NO2 bonds. Structure optimization, frequency and bond dissociation energy calculations were performed at the B3LYP level of theory using the G03 quantum chemistry package. Some of the designed molecules are found to be more promising HEDMs than FOX-7 itself and are proposed as candidates for synthesis.

  5. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations for the tight binding model parameters. Empirical equations for the nearest-neighbor hopping parameters, relating the term (2n − m) to the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower- and higher-diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of the radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots.
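
    A minimal sketch of the geometric half of such an assignment: enumerate semiconducting (n, m) pairs whose diameter matches the radial breathing mode frequency through one widely used empirical relation, d_t [nm] ≈ 248/ω_RBM [cm⁻¹] (the constant varies between studies). The paper then disambiguates the candidates with the E11/E22 transition-energy equations, which are omitted here; the function and tolerance are illustrative assumptions:

    ```python
    import numpy as np

    def candidate_chiralities(omega_rbm, d_tol=0.05, a_cc=0.142):
        """Semiconducting (n, m) whose diameter (nm) lies within d_tol of
        the diameter implied by the RBM frequency (cm^-1)."""
        d_target = 248.0 / omega_rbm
        out = []
        for n in range(4, 25):
            for m in range(0, n + 1):
                if (n - m) % 3 == 0:       # metallic tubes: skip
                    continue
                d = a_cc * np.sqrt(3 * (n * n + n * m + m * m)) / np.pi
                if abs(d - d_target) < d_tol:
                    out.append((n, m, round(d, 3)))
        return out

    print(candidate_chiralities(285.0))  # candidates near d ~ 0.87 nm
    ```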

  6. Ab initio based empirical potential applied to tungsten at high pressure

    NASA Astrophysics Data System (ADS)

    Ehemann, Robert C.; Nicklas, Jeremy W.; Park, Hyoungki; Wilkins, John W.

    2017-05-01

    Density-functional theory forces, stresses, and energies comprise a database from which the optimal parameters of a spline-based empirical potential combining Stillinger-Weber and modified embedded-atom forms are determined. The accuracy of the potential is demonstrated by calculations of ideal shear, stacking fault, vacancy migration, elastic constants, and phonons all between 0 and 100 GPa. Consistency with existing models and experiments is demonstrated by predictions of screw dislocation core structure and deformation twinning in a tungsten nanorod. Last, the potential is used to study the stabilization of fcc tungsten at high pressure.

  7. Evaluating Process Quality Based on Change Request Data - An Empirical Study of the Eclipse Project

    NASA Astrophysics Data System (ADS)

    Schackmann, Holger; Schaefer, Henning; Lichter, Horst

    The information routinely collected in change request management systems contains valuable information for monitoring process quality; however, these data are currently utilized in a very limited way. This paper presents an empirical study of the process quality in the product portfolio of the Eclipse project. It is based on a systematic approach for the evaluation of process quality characteristics using change request data. Results of the study offer insights into the development process of Eclipse. Moreover, the study allows assessing the applicability and limitations of the proposed approach for the evaluation of process quality.

  8. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target often moves along its trajectory while its solar panel substrate simultaneously changes direction toward the sun to obtain energy. Aiming at this imaging problem, a signal separating and imaging approach based on empirical mode decomposition (EMD) theory is proposed; the approach can separate the signals of the two parts of the satellite target, the main body and the solar panel substrate, and image the target. Simulation experiments demonstrate the validity of the proposed method.

  9. A theoretically based determination of bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.
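
    For reference, the flux computation that Bowen-ratio fetch requirements ultimately serve (a standard textbook sketch, not the paper's fetch model): the Bowen ratio follows from the temperature and vapor-pressure differences between the two measurement heights and partitions the available energy. The numeric inputs are illustrative assumptions:

    ```python
    def bowen_fluxes(rn, g, dT, de, gamma=0.066):
        """Bowen-ratio energy balance: beta = gamma * dT/de, then
        LE = (Rn - G)/(1 + beta) and H = beta * LE.

        Units: rn, g in W m^-2; dT in K; de in kPa;
        gamma ~ 0.066 kPa K^-1 is the psychrometric constant."""
        beta = gamma * dT / de
        le = (rn - g) / (1.0 + beta)
        return le, beta * le

    print(bowen_fluxes(rn=450.0, g=50.0, dT=0.8, de=0.12))  # (LE, H) in W m^-2
    ```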

  10. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations based on (a) accommodating awareness modeling and updating and (b) collaboratively searching for and tracking space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risk. Fourth, if all explicitly specified requirements are satisfied and sensing resources remain available, we assign the additional resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
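
    The numbered steps above amount to an explicit-first, information-greedy allocation loop. The following is a minimal sketch of that skeleton under strong simplifying assumptions (scalar per-cell awareness, a crude information-gain proxy); all names and values are hypothetical, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_cells, n_sensors = 20, 6

        # Steps 1-2: partition the region into cells and initialize each
        # cell's awareness (the paper also tracks object covariances).
        awareness = rng.uniform(0.1, 1.0, n_cells)

        # Step 3: explicit, user-specified assignments come first.
        explicit = {0: 5, 1: 12}                  # sensor index -> cell index
        free_sensors = [s for s in range(n_sensors) if s not in explicit]

        # Step 4: assign leftover sensors greedily by expected information
        # gain, modeled here as the remaining "unawareness" of a cell.
        assignment = dict(explicit)
        for s in free_sensors:
            cell = int(np.argmax(1.0 - awareness))
            assignment[s] = cell
            awareness[cell] = min(1.0, awareness[cell] + 0.3)  # sensing helps

        print(assignment)                         # sensor -> cell schedule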

  11. Systematic Review of Empirically Evaluated School-Based Gambling Education Programs.

    PubMed

    Keen, Brittany; Blaszczynski, Alex; Anjoul, Fadi

    2017-03-01

    Adolescent problem gambling prevalence rates are reportedly five times higher than in the adult population. Several school-based gambling education programs have been developed in an attempt to reduce problem gambling among adolescents; however, few have been empirically evaluated. The aim of this review was to report the outcomes of studies empirically evaluating gambling education programs across international jurisdictions. A systematic review was conducted following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement, searching five academic databases: PubMed, Scopus, Medline, PsycINFO, and ERIC. A total of 20 papers and 19 studies were included after screening and exclusion criteria were applied. All studies reported intervention effects on cognitive outcomes such as knowledge, perceptions, and beliefs. Only nine of the studies attempted to measure intervention effects on behavioural outcomes, and only five of those reported significant changes in gambling behaviour. Of these five, methodological inadequacies were common, including brief follow-up periods, lack of control comparison in post hoc analyses, and inconsistencies and misclassifications in the measurement of gambling behaviour, including problem gambling. Based on this review, recommendations are offered for the future development and evaluation of school-based gambling education programs relating to both methodological and content design and delivery considerations.

  12. A theoretical model of drumlin formation based on observations at Múlajökull, Iceland

    NASA Astrophysics Data System (ADS)

    Iverson, Neal R.; McCracken, Reba; Zoet, Lucas; Benediktsson, Ívar; Schomacker, Anders; Johnson, Mark; Finlayson, Andrew; Phillips, Emrys; Everest, Jeremy

    2016-04-01

    Theoretical models of drumlin formation have generally been developed in isolation from observations in modern drumlin forming environments - a major limitation on the empiricism necessary to confidently formulate models and test them. Observations at a rare modern drumlin field exposed by the recession of the Icelandic surge-type glacier, Múlajökull, allow an empirically-grounded and physically-based model of drumlin formation to be formulated and tested. Till fabrics based on anisotropy of magnetic susceptibility and clast orientations, along with stratigraphic observations and results of ground penetrating radar, indicate that drumlin relief results from basal till deposition on drumlins and erosion between them. These data also indicate that surges cause till deposition both on and between drumlins and provide no evidence of the longitudinally compressive or extensional strain in till that would be expected if flux divergence in a deforming bed were significant. Over 2000 measurements of till density, together with consolidation tests on the till, indicate that effective stresses on the bed were higher between drumlins than within them. This observation agrees with evidence that subglacial water drainage during normal flow of the glacier is through channels in low areas between drumlins and that crevasse swarms, which reduce total normal stresses on the bed, are coincident with drumlins. In the new model slip of ice over a bed with a sinusoidal perturbation, crevasse swarms, and flow of subglacial water toward R-channels that bound the bed undulation during periods of normal flow result in effective stresses that increase toward channels and decrease from the stoss to the lee sides of the undulation. This effective-stress pattern causes till entrainment and erosion by regelation infiltration (Rempel, 2008, JGR, 113) that peaks at the heads of incipient drumlins and near R-channels, while bed shear is inhibited by effective stresses too high to allow

  13. Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2016-07-01

    The seislet transform has been demonstrated to have a better compression performance for seismic data compared with other well-known sparsity promoting transforms, thus it can be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on a precise local slope estimation, the seislet transform usually suffers from low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. In order to remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive; one only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of my proposed approach over traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can also be extended to all those methods that require slope information.

  14. Why Culture Matters: An Empirically-Based Pre-Deployment Training Program

    DTIC Science & Technology

    2005-09-01

    ...and psychomotor), which can be viewed as categories that describe the goals of a learner-centered training process (AFM 36-2234 1993; Cook 2000; AFM...) ...to train to the higher levels of learning, it enables learner-centered training and not just subjective or preference-based, teacher-oriented topical... external • Thought Patterns • Analytic - Relational • Theoretical learning and knowledge - Experiential or kinesthetic learning and knowledge

  15. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  16. Semi-empirical versus process-based sea-level projections for the twenty-first century

    NASA Astrophysics Data System (ADS)

    Orlić, Mirko; Pasarić, Zoran

    2013-08-01

    Two dynamical methods are presently used to project sea-level changes during the next century. The process-based method relies on coupled atmosphere-ocean models to estimate the effects of thermal expansion and on sea-level models combined with certain empirical relationships to determine the influence of land-ice mass changes. The semi-empirical method uses various physically motivated relationships between temperature and sea level, with parameters determined from the data, to project total sea level. However, semi-empirical projections far exceed process-based projections. Here, we test the robustness of semi-empirical projections to the underlying assumptions about the inertial and equilibrium responses of sea level to temperature forcing and the impacts of groundwater depletion and dam retention during the twentieth century. Our results show that these projections are sensitive to the dynamics considered and the terrestrial-water corrections applied. For B1, which is a moderate climate-change scenario, the lowest semi-empirical projection of sea-level rise over the twenty-first century equals 62 ± 14 cm. The average value is substantially smaller than previously published semi-empirical projections and is therefore closer to the corresponding process-based values. The standard deviation is larger than the uncertainties of process-based estimates.

  17. Polarizable Empirical Force Field for Hexopyranose Monosaccharides Based on the Classical Drude Oscillator

    PubMed Central

    2015-01-01

    A polarizable empirical force field based on the classical Drude oscillator is presented for the hexopyranose form of selected monosaccharides. Parameter optimization targeted quantum mechanical (QM) dipole moments, solute–water interaction energies, vibrational frequencies, and conformational energies. Validation of the model was based on experimental data on crystals, densities of aqueous-sugar solutions, diffusion constants of glucose, and rotational preferences of the exocyclic hydroxymethyl of d-glucose and d-galactose in aqueous solution as well as additional QM data. Notably, the final model involves a single electrostatic model for all sixteen diastereomers of the monosaccharides, indicating the transferability of the polarizable model. The presented parameters are anticipated to lay the foundation for a comprehensive polarizable force field for saccharides that will be compatible with the polarizable Drude parameters for lipids and proteins, allowing for simulations of glycolipids and glycoproteins. PMID:24564643

  18. New Denoising Method Based on Empirical Mode Decomposition and Improved Thresholding Function

    NASA Astrophysics Data System (ADS)

    Mohguen, Wahiba; Bekka, Raïs El'hadi

    2017-01-01

    This paper presents a new denoising method, called EMD-ITF, based on Empirical Mode Decomposition (EMD) and an Improved Thresholding Function (ITF). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). All the noisy IMFs are then thresholded with the improved thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated and real data, and the results were compared to EMD-based signal denoising using soft thresholding. The results showed the superior performance of the new EMD-ITF denoising over the traditional approach. Performance was evaluated in terms of SNR in dB and mean square error (MSE).
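
    As a concrete illustration of the decompose-threshold-reconstruct pattern, here is a minimal sketch assuming the third-party PyEMD package; the abstract does not give the exact ITF, so the smooth soft/hard compromise below is an illustrative stand-in.

        import numpy as np
        from PyEMD import EMD   # pip package "EMD-signal" (assumed available)

        def smooth_threshold(x, t, alpha=2.0):
            # Illustrative ITF-like rule: suppresses |x| < t, smoothly
            # approaches the identity for large coefficients.
            return np.sign(x) * np.maximum(
                np.abs(x) - t * np.exp(-alpha * (np.abs(x) - t)), 0.0)

        rng = np.random.default_rng(1)
        n = 1024
        clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n))
        noisy = clean + 0.3 * rng.standard_normal(n)

        imfs = EMD().emd(noisy)          # last row is roughly the trend
        denoised = imfs[-1].copy()       # keep the trend untouched
        for imf in imfs[:-1]:
            sigma = np.median(np.abs(imf)) / 0.6745     # robust noise scale
            thr = sigma * np.sqrt(2.0 * np.log(n))      # universal threshold
            denoised += smooth_threshold(imf, thr)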

  19. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts therefore is critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on the mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal.

  20. Tissue Artifact Removal from Respiratory Signals Based on Empirical Mode Decomposition

    PubMed Central

    Liu, Shaopeng; Gao, Robert X.; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-01-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts therefore is critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on the mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions (IMFs) for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal. PMID:23325303
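
    Since both records hinge on picking which IMFs carry respiration rather than tissue motion, a small sketch of mutual-information-plus-power IMF selection may help; the abstract does not spell out the exact criteria, so the discretized MI estimate and the cutoff fractions below are illustrative assumptions.

        import numpy as np

        def mutual_info(x, y, bins=32):
            # Histogram-based mutual information between two 1-D signals.
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float((pxy[nz] *
                          np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

        def reconstruct_respiration(imfs, signal, mi_frac=0.2, power_frac=0.01):
            # imfs: array (n_imfs, n_samples) from any EMD implementation.
            mi = np.array([mutual_info(imf, signal) for imf in imfs])
            power = np.array([(imf ** 2).mean() for imf in imfs])
            keep = (mi >= mi_frac * mi.max()) & (power >= power_frac * power.max())
            return imfs[keep].sum(axis=0)    # artifact-suppressed estimate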

  1. Network-based empirical Bayes methods for linear models with applications to genomic data.

    PubMed

    Li, Caiyan; Wei, Zhi; Li, Hongzhe

    2010-03-01

    Empirical Bayes methods are widely used in the analysis of microarray gene expression data in order to identify the differentially expressed genes or genes that are associated with other general phenotypes. Available methods often assume that genes are independent. However, genes are expected to function interactively and to form molecular modules to affect the phenotypes. In order to account for regulatory dependency among genes, we propose in this paper a network-based empirical Bayes method for analyzing genomic data in the framework of linear models, where the dependency of genes is modeled by a discrete Markov random field defined on a predefined biological network. This method provides a statistical framework for integrating the known biological network information into the analysis of genomic data. We present an iterated conditional mode algorithm for parameter estimation and for estimating the posterior probabilities using Gibbs sampling. We demonstrate the application of the proposed methods using simulations and analysis of a human brain aging microarray gene expression data set.
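
    A minimal sketch of the iterated-conditional-modes idea on a gene network follows; the two-component Gaussian model for gene-level statistics, the Potts-style prior, and all parameter values are illustrative assumptions, not the authors' estimation procedure.

        import numpy as np
        from scipy.stats import norm

        def icm(y, edges, mu=2.0, beta=0.5, n_iter=20):
            # y: per-gene statistics; edges: list of (i, j) network pairs.
            # z[i] = 1 marks gene i as differentially expressed.
            z = (y > 1.0).astype(int)                # crude initialization
            nbrs = {i: [] for i in range(len(y))}
            for i, j in edges:
                nbrs[i].append(j)
                nbrs[j].append(i)
            for _ in range(n_iter):
                for i in range(len(y)):
                    score = np.empty(2)
                    for s in (0, 1):
                        loglik = norm.logpdf(y[i], loc=mu * s)    # data term
                        agree = sum(z[j] == s for j in nbrs[i])   # MRF term
                        score[s] = loglik + beta * agree
                    z[i] = int(np.argmax(score))
            return z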

  2. Polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator.

    PubMed

    He, Xibing; Lopes, Pedro E M; Mackerell, Alexander D

    2013-10-01

    A polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator is presented. The model is optimized with an emphasis on the transferability of the developed parameters among molecules of different sizes in this series and on the condensed-phase properties validated against experimental data. The importance of the explicit treatment of electronic polarizability in empirical force fields is demonstrated in the cases of this series of molecules with vicinal hydroxyl groups that can form cooperative intra- and intermolecular hydrogen bonds. Compared to the CHARMM additive force field, improved treatment of the electrostatic interactions avoids overestimation of the gas-phase dipole moments resulting in significant improvement in the treatment of the conformational energies and leads to the correct balance of intra- and intermolecular hydrogen bonding of glycerol as evidenced by calculated heat of vaporization being in excellent agreement with experiment. Computed condensed phase data, including crystal lattice parameters and volumes and densities of aqueous solutions are in better agreement with experimental data as compared to the corresponding additive model. Such improvements are anticipated to significantly improve the treatment of polymers in general, including biological macromolecules.

  3. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
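
    The key step the abstract describes is that MEMD aligns same-index IMFs across input images, after which fusion can proceed scale by scale. Below is a sketch of one common pixel-level fusion rule (local-energy maximum selection) applied to an already-computed multivariate decomposition; the array layout and window size are assumptions, and the paper's own fusion measure may differ.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_aligned_imfs(imfs, win=7):
            # imfs: array (n_scales, n_images, H, W); same-index IMFs are
            # frequency-aligned across images, as MEMD is said to ensure.
            fused = []
            for scale in imfs:
                energy = np.stack([uniform_filter(c ** 2, size=win)
                                   for c in scale])
                winner = energy.argmax(axis=0)        # best image per pixel
                fused.append(np.take_along_axis(scale, winner[None], 0)[0])
            return np.sum(fused, axis=0)              # recombine all scales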

  4. An empirical model for the plasma environment along Titan's orbit based on Cassini plasma observations

    NASA Astrophysics Data System (ADS)

    Smith, H. Todd; Rymer, Abigail M.

    2014-07-01

    Prior to Cassini's arrival at Saturn, the nitrogen-rich dense atmosphere of Titan was considered a significant, if not dominant, source of heavy ions in Saturn's magnetosphere. While nitrogen was detected in Saturn's magnetosphere based on Cassini observations, Enceladus rather than Titan appears to be the primary source. However, it is difficult to imagine that Titan's dense atmosphere is not a source of nitrogen. In this paper, we apply Rymer et al.'s (2009) Titan plasma environment categorization model to the plasma environment along Titan's orbit when Titan is not present. We next categorize the Titan encounters that have occurred since Rymer et al. (2009). We also produce an empirical model for the probabilistic occurrence of each plasma environment as a function of Saturn local time (SLT). Finally, we summarize the electron energy spectra in order to allow one to calculate more accurate electron-impact interaction rates for each plasma environment category. The combination of this full categorization versus SLT and the empirical model for the electron spectrum is critical for understanding the magnetospheric plasma and will allow for more accurate modeling of the Titan plasma torus.

  5. Polarizable Empirical Force Field for Acyclic Poly-Alcohols Based on the Classical Drude Oscillator

    PubMed Central

    He, Xibing; Lopes, Pedro E. M.; MacKerell, Alexander D.

    2014-01-01

    A polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator is presented. The model is optimized with an emphasis on the transferability of the developed parameters among molecules of different sizes in this series and on the condensed-phase properties validated against experimental data. The importance of the explicit treatment of electronic polarizability in empirical force fields is demonstrated in the cases of this series of molecules with vicinal hydroxyl groups that can form cooperative intra- and intermolecular hydrogen bonds. Compared to the CHARMM additive force field, improved treatment of the electrostatic interactions avoids overestimation of the gas-phase dipole moments, results in significant improvement in the treatment of the conformational energies, and leads to the correct balance of intra- and intermolecular hydrogen bonding of glycerol as evidenced by calculated heat of vaporization being in excellent agreement with experiment. Computed condensed phase data, including crystal lattice parameters and volumes and densities of aqueous solutions are in better agreement with experimental data as compared to the corresponding additive model. Such improvements are anticipated to significantly improve the treatment of polymers in general, including biological macromolecules. PMID:23703219

  6. Empirically based Suggested Insights into the Concept of False-Self Defense: Contributions From a Study on Normalization of Children With Disabilities.

    PubMed

    Eichengreen, Adva; Hoofien, Dan; Bachar, Eytan

    2016-02-01

    The concept of the false self has been used widely in psychoanalytic theory and practice but seldom in empirical research. In this empirically based study, elevated features of false-self defense were hypothetically associated with risk factors attendant on processes of rehabilitation and integration of children with disabilities, processes that encourage adaptation of the child to the able-bodied environment. Self-report questionnaires and in-depth interviews were conducted with 88 deaf and hard-of-hearing students and a comparison group of 88 hearing counterparts. Results demonstrate that despite the important contribution of rehabilitation and integration to the well-being of these children, these efforts may put the child at risk of increased use of the false-self defense. The empirical findings suggest two general theoretical conclusions: (1) The Winnicottian concept of the environment, usually confined to the parent-child relationship, can be understood more broadly as including cultural, social, and rehabilitational variables that both influence the parent-child relationship and operate independently of it. (2) The monolithic conceptualization of the false self may be more accurately unpacked to reveal two distinct subtypes: the compliant and the split false self. © 2016 by the American Psychoanalytic Association.

  7. An Empirical Evaluation of Puzzle-Based Learning as an Interest Approach for Teaching Introductory Computer Science

    ERIC Educational Resources Information Center

    Merrick, K. E.

    2010-01-01

    This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course…

  8. Empirical tests of natural selection-based evolutionary accounts of ADHD: a systematic review.

    PubMed

    Thagaard, Marthe S; Faraone, Stephen V; Sonuga-Barke, Edmund J; Østergaard, Søren D

    2016-10-01

    ADHD is a prevalent and highly heritable mental disorder associated with significant impairment, morbidity and increased rates of mortality. This combination of high prevalence and high morbidity/mortality seen in ADHD and other mental disorders presents a challenge to natural selection-based models of human evolution. Several hypotheses have been proposed in an attempt to resolve this apparent paradox. The aim of this study was to review the evidence for these hypotheses. We conducted a systematic review of the literature on empirical investigations of natural selection-based evolutionary accounts for ADHD in adherence with the PRISMA guideline. The PubMed, Embase, and PsycINFO databases were screened for relevant publications, by combining search terms covering evolution/selection with search terms covering ADHD. The search identified 790 records. Of these, 15 full-text articles were assessed for eligibility, and three were included in the review. Two of these reported on the evolution of the seven-repeat allele of the ADHD-associated dopamine receptor D4 gene, and one reported on the results of a simulation study of the effect of suggested ADHD-traits on group survival. The authors of the three studies interpreted their findings as favouring the notion that ADHD-traits may have been associated with increased fitness during human evolution. However, we argue that none of the three studies really tap into the core symptoms of ADHD, and that their conclusions therefore lack validity for the disorder. This review indicates that the natural selection-based accounts of ADHD have not been subjected to empirical test and therefore remain hypothetical.

  9. Stomatal regulation based on competition for water, stochastic rainfall, and xylem hydraulic vulnerability - a new theoretical model

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Duursma, R.; Farrior, C.; Medlyn, B. E.

    2016-12-01

    Stomata control the exchange of soil water for atmospheric CO2, one of the most important resource trade-offs for plants. This trade-off has been studied extensively, but not in the context of competition. Based on the theory of evolutionarily stable strategies, we search for the uninvadable (ESS) response of stomatal conductance to soil water content under stochastic rainfall, under which a dominant plant population can never be invaded by rare mutants competing for water, owing to its higher fitness. In this study, we define fitness as the difference between the long-term average photosynthetic carbon gain and a carbon cost of stomatal opening. This cost has traditionally been considered an unknown constant. Here we extend this framework by taking it to be the energy required for refilling embolized xylem. With regard to the refilling process, we explore two questions: (1) to what extent can embolized xylem vessels be repaired via refilling, and (2) is refilling immediate, or does it lag the formation of xylem embolism? We compare various assumptions across a total of five scenarios and find that the ESS exists only if the xylem damage can be repaired completely. Then, with this ESS, we estimate annual vegetation photosynthesis and water consumption and compare them with empirical results. In conclusion, this study provides a different insight from existing empirical and mechanistic models as well as from theoretical models based on optimization theory. In addition, because the model result is a simple quantitative relation between stomatal conductance and soil water content, it can easily be incorporated into other vegetation function models.

  10. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the original noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The noisy IMFs are then transformed into the CCT domain. By choosing different thresholds based on the noise level of each IMF profile, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
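
    A compact sketch of the same two-stage pattern follows, assuming PyEMD's CEEMDAN as a readily available relative of CEEMD and, since no common Python package exposes the complex curvelet transform, a discrete wavelet transform from PyWavelets as a plainly labeled stand-in for the CCT stage; per-IMF thresholds scale with each IMF's estimated noise level, as in the abstract.

        import numpy as np
        import pywt
        from PyEMD import CEEMDAN   # pip package "EMD-signal" (assumed)

        def enhance(trace):
            imfs = CEEMDAN().ceemdan(trace)
            out = np.zeros_like(trace)
            for imf in imfs:
                sigma = np.median(np.abs(imf)) / 0.6745  # per-IMF noise level
                thr = sigma * np.sqrt(2.0 * np.log(imf.size))
                coeffs = pywt.wavedec(imf, "db4", level=4)
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                        for c in coeffs[1:]]
                out += pywt.waverec(coeffs, "db4")[: imf.size]
            return out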

  11. A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhao, Zhidong; Yang, Lei; Chen, Diandian; Luo, Yi

    2013-01-01

    In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs) and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier tool. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness. PMID:23698274
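
    The pipeline lends itself to a short sketch: EEMD features via Welch spectra, PCA reduction, and K-NN classification. PyEMD, SciPy, and scikit-learn supply the pieces; beat segmentation, the number of IMFs, and all hyper-parameters below are illustrative assumptions rather than the paper's settings.

        import numpy as np
        from PyEMD import EEMD
        from scipy.signal import welch
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def beat_features(beat, fs=360, n_imfs=4):
            # Assumes each normalized beat yields at least n_imfs IMFs.
            imfs = EEMD().eemd(beat)[:n_imfs]
            return np.concatenate([welch(imf, fs=fs, nperseg=128)[1]
                                   for imf in imfs])

        def train_identifier(beats, labels, n_components=20):
            # beats: list of equal-length 1-D arrays; labels: subject IDs.
            X = np.stack([beat_features(b) for b in beats])
            pca = PCA(n_components=n_components).fit(X)
            knn = KNeighborsClassifier(n_neighbors=3)
            knn.fit(pca.transform(X), labels)
            return pca, knn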

  12. Developing empirically based, culturally grounded drug prevention interventions for indigenous youth populations.

    PubMed

    Okamoto, Scott K; Helm, Susana; Pel, Suzanne; McClain, Latoya L; Hill, Amber P; Hayashida, Janai K P

    2014-01-01

    This article describes the relevance of a culturally grounded approach toward drug prevention development for indigenous youth populations. This approach builds drug prevention from the "ground up" (i.e., from the values, beliefs, and worldviews of the youth that are the intended consumers of the program) and is contrasted with efforts that focus on adapting existing drug prevention interventions to fit the norms of different youth ethnocultural groups. The development of an empirically based drug prevention program focused on rural Native Hawaiian youth is described as a case example of culturally grounded drug prevention development for indigenous youth; the impact of this effort on the validity of the intervention and on community engagement and investment in the development of the program are discussed. Finally, implications of this approach for behavioral health services and the development of an indigenous prevention science are discussed.

  13. Riemann–Liouville Fractional Integral Based Empirical Mode Decomposition for ECG Denoising.

    PubMed

    Jain, Shweta; Bajaj, Varun; Kumar, Anil

    2017-09-18

    Electrocardiogram (ECG) denoising is an important step in the diagnosis of heart-related diseases, as noise can influence the diagnosis. In this paper, a new method for ECG denoising is proposed that combines the empirical mode decomposition algorithm with Riemann–Liouville (RL) fractional integral filtering. In the proposed method, the noisy ECG signal is decomposed into its intrinsic mode functions (IMFs), from which noisy IMFs are identified by the proposed noisy-IMF identification methodology. RL fractional integral filtering is applied to the noisy IMFs to obtain denoised IMFs; the ECG signal is then reconstructed from the denoised IMFs and the remaining signal-dominant IMFs to obtain a noise-free ECG signal. The proposed methodology is tested on the MIT-BIH arrhythmia database. Its performance, in terms of signal-to-noise ratio (SNR) and mean square error (MSE), is compared with other related fractional-integral and EMD-based ECG denoising methods. The results show that the proposed method provides efficient noise removal.
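
    The RL fractional integral itself is easy to sketch on a uniform grid: it is a causal convolution with the kernel t^(alpha-1)/Gamma(alpha), which attenuates high frequencies and therefore acts as a gentle low-pass filter on a noisy IMF. The piecewise-constant quadrature below is a standard discretization; the order alpha and its use here are illustrative.

        import numpy as np
        from scipy.special import gamma

        def rl_fractional_integral(f, dt, alpha=0.5):
            # Discrete Riemann-Liouville integral of order alpha:
            # I^a f(t) = 1/Gamma(a) * integral_0^t (t-s)^(a-1) f(s) ds,
            # with f held constant on each sample interval.
            m = np.arange(f.size)
            w = (m + 1.0) ** alpha - m ** alpha       # quadrature weights
            return dt ** alpha / gamma(alpha + 1.0) * np.convolve(f, w)[: f.size]

    Applied to each noisy IMF in place of a conventional low-pass filter, this smooths the fast noise-like oscillations while largely preserving slower morphology, which is the role the abstract assigns to the RL filtering stage.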

  14. Confidence Interval Estimation for Sensitivity to the Early Diseased Stage Based on Empirical Likelihood.

    PubMed

    Dong, Tuochuan; Tian, Lili

    2015-01-01

    Many disease processes can be divided into three stages: the non-diseased stage, the early diseased stage, and the fully diseased stage. To assess the accuracy of diagnostic tests for such diseases, various summary indexes have been proposed, such as the volume under the surface (VUS), the partial volume under the surface (PVUS), and the sensitivity to the early diseased stage given the specificity and the sensitivity to the fully diseased stage (P2). This paper focuses on confidence interval estimation for P2 based on empirical likelihood. Simulation studies are carried out to assess the performance of the new methods compared to existing parametric and nonparametric ones. A real dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) is analyzed.

  15. On the pathophysiology of migraine--links for "empirically based treatment" with neurofeedback.

    PubMed

    Kropp, Peter; Siniatchkin, Michael; Gerber, Wolf-Dieter

    2002-09-01

    Psychophysiological data support the concept that migraine is the result of cortical hypersensitivity, hyperactivity, and a lack of habituation. There is evidence that this is a brain-stem-related information processing dysfunction. This cortical activity reflects a periodicity between two migraine attacks, and it may be due to endogenous or exogenous factors. In the few days preceding the next attack, slow cortical potentials are highest and the habituation delay experimentally recorded during contingent negative variation is at a maximum. These striking features of slow cortical potentials are predictors of the next attack. The pronounced negativity can be fed back to the patient. The data support the hypothesis that a change in amplitudes of slow cortical potentials is caused by altered habituation during the recording session. This kind of neurofeedback can be characterized as "empirically based" because it improves habituation and it proves to be clinically efficient.

  16. The children of divorce parenting intervention: outcome evaluation of an empirically based program.

    PubMed

    Wolchik, S A; West, S G; Westover, S; Sandler, I N; Martin, A; Lustig, J; Tein, J Y; Fisher, J

    1993-06-01

    Examined efficacy of an empirically based intervention using 70 divorced mothers who participated in a 12-session program or a wait-list condition. The program targeted five putative mediators: quality of the mother-child relationship, discipline, negative divorce events, contact with fathers, and support from nonparental adults. Posttest comparisons showed higher quality mother-child relationships and discipline, fewer negative divorce events, and better mental health outcomes for program participants than controls. More positive program effects occurred for mothers' than children's reports of variables and for families with poorest initial levels of functioning. Analyses indicated that improvement in the mother-child relationship partially mediated the effects of the program on mental health.

  17. Multi-faults decoupling on turbo-expander using differential-based ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Hongguang; Li, Ming; Li, Cheng; Li, Fucai; Meng, Guang

    2017-09-01

    This paper addresses multi-fault decoupling in a turbo-expander rotor system using Differential-based Ensemble Empirical Mode Decomposition (DEEMD). DEEMD is an improved version of DEMD that resolves the problem of mode mixing. The nonlinear behaviors of the turbo-expander with crack, rub-impact, and pedestal looseness faults, each considering the temperature gradient, are investigated so that a baseline for multi-fault decoupling can be established. DEEMD is then applied to vibration signals of the rotor system with coupled faults obtained by numerical simulation; the results indicate that DEEMD can successfully decouple the coupled faults and is more efficient than EEMD. DEEMD is also applied to the vibration signal of a misalignment fault coupled with a rub-impact fault acquired during the adjustment of the experimental system. The results show that DEEMD can decompose practical multi-fault signals, verifying its industrial potential.

  18. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis; it describes the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right-censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Co-ordinating Co-operation in Complex Information Flows: A Theoretical Analysis and Empirical Description of Competence-determined Leadership. No. 61.

    ERIC Educational Resources Information Center

    Rasmussen, Ole Elstrup

    "Scanator" (a modern, ecological psychophysics encompassing a cohesive set of theories and methods for the study of mental functions) provides the basis for a study of "competence," the capacity for making sense in complex situations. The paper develops a functional model that forms a theoretical expression of the phenomenon of…

  20. Genetic-program-based data mining for hybrid decision-theoretic algorithms and theories

    NASA Astrophysics Data System (ADS)

    Smith, James F., III

    2005-03-01

    A genetic program (GP) based data mining (DM) procedure has been developed that automatically creates decision theoretic algorithms. A GP is an algorithm that uses the theory of evolution to automatically evolve other computer programs or mathematical expressions. The output of the GP is a computer program or mathematical expression that is optimal in the sense that it maximizes a fitness function. The decision theoretic algorithms created by the DM algorithm are typically designed for making real-time decisions about the behavior of systems. The database that is mined by the DM typically consists of many scenarios characterized by sensor output and labeled by experts as to the status of the scenario. The DM procedure will call a GP as a data mining function. The GP incorporates the database and expert's rules into its fitness function to evolve an optimal decision theoretic algorithm. A decision theoretic algorithm created through this process will be discussed as well as validation efforts showing the utility of the decision theoretic algorithm created by the DM process. GP based data mining to determine equations related to scientific theories and automatic simplification methods based on computer algebra will also be discussed.
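
    A toy genetic program in the spirit described above can be sketched in pure Python: expression trees are evolved against a fitness function defined over a labeled "database" of scenarios. The operator set, the mutation-only evolution (no crossover, for brevity), and the hidden expert rule being rediscovered are all illustrative.

        import operator
        import random

        random.seed(0)
        OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

        def rand_tree(depth=3):
            if depth == 0 or random.random() < 0.3:
                return random.choice(["x", "y", random.uniform(-1.0, 1.0)])
            return (random.choice(OPS), rand_tree(depth - 1), rand_tree(depth - 1))

        def evaluate(tree, x, y):
            if tree == "x":
                return x
            if tree == "y":
                return y
            if isinstance(tree, float):
                return tree
            (fn, _), a, b = tree
            return fn(evaluate(a, x, y), evaluate(b, x, y))

        def mutate(tree, p=0.2):
            if random.random() < p or not isinstance(tree, tuple):
                return rand_tree(2)
            op, a, b = tree
            return (op, mutate(a, p), mutate(b, p))

        # "Database": scenarios labeled by a hidden expert rule (x*y + 0.5).
        data = [((i / 5.0, j / 5.0), i / 5.0 * j / 5.0 + 0.5)
                for i in range(-5, 6) for j in range(-5, 6)]

        def fitness(tree):   # negative squared error against the database
            return -sum((evaluate(tree, x, y) - t) ** 2 for (x, y), t in data)

        pop = [rand_tree() for _ in range(200)]
        for _ in range(30):  # evolve: keep the best, mutate to refill
            pop.sort(key=fitness, reverse=True)
            pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
        print(round(-fitness(pop[0]), 4))          # best remaining error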

  1. Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space

    PubMed Central

    Peters, Judith C.; Reithler, Joel; Goebel, Rainer

    2012-01-01

    Recent advances in computer vision and experimental neuroscience have provided insights into the mechanisms underlying invariant object recognition. However, due to the different research aims of the two fields, models have tended to evolve independently. A tighter integration between computational and empirical work may contribute to the cross-fertilized development of (neurobiologically plausible) computational models and computationally defined empirical theories, which can be incrementally merged into a comprehensive brain model. After reviewing theoretical and empirical work on invariant object perception, this article proposes a novel framework in which neural network activity and measured neuroimaging data are interfaced in a common representational space. This enables direct quantitative comparisons between predicted and observed activity patterns within and across multiple stages of object processing, which may help to clarify how high-order invariant representations are created from low-level features. Given the advent of columnar-level imaging with high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence. PMID:22408617

  2. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
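
    The distinction the abstract draws, an empirical exceedance curve versus a robust fitted one, is easy to illustrate. The sketch below builds both from synthetic simulated inundation depths; the lognormal maximum-likelihood fit is a simple stand-in for the paper's Bayesian fitting, and the event rate and sample are invented for illustration.

        import numpy as np
        from scipy.stats import lognorm

        rng = np.random.default_rng(0)
        depths = rng.lognormal(mean=0.2, sigma=0.8, size=2000)  # m, per event
        annual_rate = 0.01     # assumed mean rate of tsunamigenic events /yr

        im = np.sort(depths)   # intensity-measure axis
        # Empirical hazard curve: annual rate of exceeding each depth.
        empirical = annual_rate * (1.0 - np.arange(1, im.size + 1) / (im.size + 1.0))

        # "Robust" counterpart: rate times the fitted survival function.
        shape, loc, scale = lognorm.fit(depths, floc=0.0)
        robust = annual_rate * lognorm.sf(im, shape, loc=loc, scale=scale)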

  3. Embedding empirical mode decomposition within an FPGA-based design: challenges and progress

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Wright, Joseph P.

    2011-04-01

    This paper presents further advancements made in an ongoing project following a series of presentations made at the same SPIE conference in the past. Compared with traditional microprocessor-based systems, rapidly advancing field-programmable gate array (FPGA) technology offers a more powerful, efficient and flexible hardware platform. An FPGA-based design is developed to classify three types of nonlinearities (linear, hardening and softening) of a single-degree-of-freedom (SDOF) system subjected to free vibration. This significantly advances the team's previous work on using FPGAs for wireless structural health monitoring. The classification is achieved by embedding two important algorithms - empirical mode decomposition (EMD) and backbone curve analysis. A series of systematic efforts is made to embed EMD, which involves cubic spline fitting, in an FPGA-based hardware design. Throughout the process, we take advantage of concurrent operation and strive for a trade-off between computational efficiency and resource utilization. We have started to pursue our work in the context of FPGA-based computation. In particular, handling fixed-point precision is framed under data-path optimization. Our approach for data-path optimization is necessarily manual and thus may not guarantee an optimal design. Nonetheless, our study could provide a baseline case for future work using analytical data-path optimization for this and numerous other powerful algorithms for wireless structural health monitoring.
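
    Of the two embedded algorithms, the cubic-spline step inside EMD is the computational kernel the hardware must reproduce. A minimal software reference for one sifting pass is sketched below (boundary handling omitted; assumes the signal has enough interior extrema); this is the floating-point behavior a fixed-point data path would be checked against.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelextrema

        def sift_once(t, x):
            hi = argrelextrema(x, np.greater)[0]     # local maxima indices
            lo = argrelextrema(x, np.less)[0]        # local minima indices
            upper = CubicSpline(t[hi], x[hi])(t)     # upper envelope
            lower = CubicSpline(t[lo], x[lo])(t)     # lower envelope
            return x - 0.5 * (upper + lower)         # candidate IMF detail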

  4. Web-based application for Data INterpolation Empirical Orthogonal Functions (DINEOF) analysis

    NASA Astrophysics Data System (ADS)

    Tomazic, Igor; Alvera-Azcarate, Aida; Barth, Alexander; Beckers, Jean-Marie

    2014-05-01

    DINEOF (Data INterpolating Empirical Orthogonal Functions) is a powerful tool based on EOF decomposition developed at the University of Liege/GHER for the reconstruction of missing data in satellite datasets, as well as for the reduction of noise and detection of outliers. DINEOF is openly available as a series of Fortran routines to be compiled by the user, and as binaries (that can be run directly without any compilation) for both Windows and Linux platforms. In order to facilitate the use of DINEOF and increase the number of interested users, we developed a web-based interface for DINEOF with the parameters necessary to run a high-quality DINEOF analysis. This includes choosing a variable within a selected dataset, defining a domain, a time range, and filtering criteria based on variables available in the dataset (e.g. quality flag, satellite zenith angle ...), and setting the necessary DINEOF parameters. Results, including reconstructed data and calculated EOF modes, will be disseminated in NetCDF format using an OpenDAP and WMS server, allowing easy visualisation and analysis. Initially, we will include several satellite datasets of sea surface temperature and chlorophyll concentration obtained from the MyOcean data centre and already remapped to a regular grid (L3C). Later, based on user requests, we plan to extend the number of datasets available for reconstruction.
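
    At its core, a DINEOF-style reconstruction is an iterative truncated-SVD gap filler, which can be sketched in a few lines; the fixed rank and iteration count below are illustrative, whereas the real tool chooses the number of EOF modes by cross-validation.

        import numpy as np

        def dineof_fill(field, rank=5, n_iter=50):
            # field: 2-D array (space x time) with NaNs at missing points.
            mask = np.isnan(field)
            filled = np.where(mask, np.nanmean(field), field)
            for _ in range(n_iter):
                u, s, vt = np.linalg.svd(filled, full_matrices=False)
                approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
                filled[mask] = approx[mask]   # update only the gaps
            return filled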

  5. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-06-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.

  6. Empirical source strength correlations for RANS-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate

  7. The Role of Social Network Technologies in Online Health Promotion: A Narrative Review of Theoretical and Empirical Factors Influencing Intervention Effectiveness.

    PubMed

    Balatsoukas, Panos; Kennedy, Catriona M; Buchan, Iain; Powell, John; Ainsworth, John

    2015-06-11

    Social network technologies have become part of health education and wider health promotion—either by design or happenstance. Social support, peer pressure, and information sharing in online communities may affect health behaviors. If there are positive and sustained effects, then social network technologies could increase the effectiveness and efficiency of many public health campaigns. Social media alone, however, may be insufficient to promote health. Furthermore, there may be unintended and potentially harmful consequences of inaccurate or misleading health information. Given these uncertainties, there is a need to understand and synthesize the evidence base for the use of online social networking as part of health promoting interventions to inform future research and practice. Our aim was to review the research on the integration of expert-led health promotion interventions with online social networking in order to determine the extent to which the complementary benefits of each are understood and used. We asked, in particular, (1) How is effectiveness being measured and what are the specific problems in effecting health behavior change?, and (2) To what extent is the designated role of social networking grounded in theory? The narrative synthesis approach to literature review was used to analyze the existing evidence. We searched the indexed scientific literature using keywords associated with health promotion and social networking. The papers included were only those making substantial study of both social networking and health promotion—either reporting the results of the intervention or detailing evidence-based plans. General papers about social networking and health were not included. The search identified 162 potentially relevant documents after review of titles and abstracts. Of these, 42 satisfied the inclusion criteria after full-text review. Six studies described randomized controlled trials (RCTs) evaluating the effectiveness of online social

  8. The Role of Social Network Technologies in Online Health Promotion: A Narrative Review of Theoretical and Empirical Factors Influencing Intervention Effectiveness

    PubMed Central

    Kennedy, Catriona M; Buchan, Iain; Powell, John; Ainsworth, John

    2015-01-01

    Background: Social network technologies have become part of health education and wider health promotion—either by design or happenstance. Social support, peer pressure, and information sharing in online communities may affect health behaviors. If there are positive and sustained effects, then social network technologies could increase the effectiveness and efficiency of many public health campaigns. Social media alone, however, may be insufficient to promote health. Furthermore, there may be unintended and potentially harmful consequences of inaccurate or misleading health information. Given these uncertainties, there is a need to understand and synthesize the evidence base for the use of online social networking as part of health promoting interventions to inform future research and practice. Objective: Our aim was to review the research on the integration of expert-led health promotion interventions with online social networking in order to determine the extent to which the complementary benefits of each are understood and used. We asked, in particular, (1) How is effectiveness being measured and what are the specific problems in effecting health behavior change?, and (2) To what extent is the designated role of social networking grounded in theory? Methods: The narrative synthesis approach to literature review was used to analyze the existing evidence. We searched the indexed scientific literature using keywords associated with health promotion and social networking. The papers included were only those making substantial study of both social networking and health promotion—either reporting the results of the intervention or detailing evidence-based plans. General papers about social networking and health were not included. Results: The search identified 162 potentially relevant documents after review of titles and abstracts. Of these, 42 satisfied the inclusion criteria after full-text review. Six studies described randomized controlled trials (RCTs) evaluating

  9. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model with respect to participants, processes, and products, the present empirical study was conducted to determine the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interviews, and the analysis of both CET-4 teaching and testing…

  10. Written institutional ethics policies on euthanasia: an empirical-based organizational-ethical framework.

    PubMed

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Schotsmans, Paul; Gastmans, Chris

    2014-05-01

    As euthanasia has become a widely debated issue in many Western countries, hospitals and nursing homes especially are increasingly confronted with this ethically sensitive societal issue. The focus of this paper is how healthcare institutions can deal with euthanasia requests on an organizational level by means of a written institutional ethics policy. The general aim is to critically analyze whether these policies can be considered organizational-ethical instruments that support healthcare institutions in taking institutional responsibility for dealing with euthanasia requests. By means of an interpretative analysis, we conducted a process of reinterpretation of the results of former Belgian empirical studies on written institutional ethics policies on euthanasia, in dialogue with the existing international literature. The study findings revealed that legal regulations and ethical and care-oriented aspects strongly affected the development, the content, and the impact of written institutional ethics policies on euthanasia. Hence, these three cornerstones (law, care, and ethics) constituted the basis for the empirical-based organizational-ethical framework for written institutional ethics policies on euthanasia that is presented in this paper. However, having a euthanasia policy does not automatically lead to more legal transparency, or to a more professional and ethical care practice. The study findings suggest that the development and implementation of an ethics policy on euthanasia as an organizational-ethical instrument should be considered a dynamic process. Administrators and ethics committees must take responsibility for actively creating an ethical climate that supports care providers who have to deal with ethical dilemmas in their practice.

  11. An Empirical Model of Solar Indices and Hemispheric Power based on DMSP/SSUSI Data

    NASA Astrophysics Data System (ADS)

    Shaikh, D.; Jones, J.

    2014-09-01

    Aurorae are produced by the collision of energetic charged particles, typically electrons, with the Earth's neutral atmosphere, particularly in the high-latitude regions. These particles originate predominantly from the solar wind, traverse the Earth's magnetosphere, and precipitate into the Earth's atmosphere, resulting in the emission of radiation in various frequency ranges. During this process, energetic electrons deposit their kinetic energy (tens of keV) in the upper atmosphere. The rate of electron kinetic energy deposited over the northern or southern region is called the electron hemispheric power (Hpe), measured in gigawatts (GW). Since the origin and dynamics of these precipitating charged particles are intimately connected to the kinetic and magnetic activity taking place in our Sun, they can be used as a proxy to determine many physical processes that drive space weather at Earth. In this paper, we examine correlations that may exist between Hpe and various geomagnetic parameters such as Kp, Ap, solar flux, and sunspot number. For this purpose, we evaluate a year (2012) of data from the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) of the Defense Meteorological Satellite Program (DMSP) Flight 18 satellite. We find strong correlations between Hpe and Kp, Ap, the sunspot number (SSN), and the solar flux density. The practical applications of our empirical model are manifold: (i) we can determine/forecast the Kp index directly from the electron flux density and use it to drive a variety of space weather models that rely heavily on Kp input; (ii) the Kp and Ap forecasts from our empirical correlation model could complement traditional ground-based magnetometer data.
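
    The correlation-and-regression workflow the abstract describes can be sketched in a few lines. The sketch below is illustrative only: the series are made-up stand-ins for the daily-averaged Hpe and Kp values that would come from SSUSI retrievals and geomagnetic index archives.

```python
import numpy as np

# Hypothetical daily-averaged series (real values would come from
# DMSP/SSUSI retrievals and geomagnetic index archives).
hpe = np.array([12.0, 35.0, 8.0, 60.0, 22.0, 90.0, 15.0])   # GW
kp  = np.array([1.7, 3.0, 1.3, 4.3, 2.3, 5.7, 2.0])         # Kp index

# Strength of the linear association between Hpe and Kp.
r = np.corrcoef(hpe, kp)[0, 1]

# Least-squares fit: an empirical model Kp ~ a*Hpe + b, usable to
# forecast Kp directly from the observed electron hemispheric power.
a, b = np.polyfit(hpe, kp, 1)
print(f"r = {r:.3f}, Kp ~ {a:.3f}*Hpe + {b:.3f}")
```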

  12. Target detection for low cost uncooled MWIR cameras based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio

    2014-03-01

    In this work, a novel method for detecting low-intensity fast-moving objects with low-cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low-density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability, along with the lack of specific electronics and mechanical devices for canceling the effect of distortion, makes background image identification very difficult. As a result, the identification of targets is performed in low signal-to-noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detection, a preliminary denoising step enabling better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and also against traditional drift estimation methods such as Kalman filtering and running average. The results reported by the simulations show that the proposed scheme has superior performance.

  13. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers should be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
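
    As a rough illustration of how measured head-movement ranges translate into lens requirements, the following sketch computes a minimum viewing angle and depth of field from assumed movement amplitudes; the numbers are hypothetical, not the paper's measurements.

```python
import math

def required_viewing_angle(head_range_cm: float, distance_cm: float) -> float:
    """Full horizontal viewing angle (degrees) needed to keep the face
    in frame when the head moves +/- head_range_cm/2 at distance_cm."""
    return 2.0 * math.degrees(math.atan((head_range_cm / 2.0) / distance_cm))

def required_dof(z_range_cm: float) -> float:
    """Depth of field needed to cover forward/backward head movement."""
    return z_range_cm

# Hypothetical numbers: 30 cm lateral head movement at 70 cm viewing distance,
# 20 cm of forward/backward movement.
print(f"viewing angle >= {required_viewing_angle(30, 70):.1f} deg")
print(f"DOF >= {required_dof(20):.0f} cm")
```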

  14. Problem decomposition and domain-based parallelism via group theoretic principles

    SciTech Connect

    Makai, M.; Orechwa, Y.

    1997-10-01

    A systematic approach, based on group theoretic principles, is presented for the decomposition of the solution algorithm of boundary value problems specified over symmetric domains, which is amenable to implementation for parallel computation. The principles are applied to the linear transport equation in general, and the decomposition is demonstrated for a square node in particular.

  15. Theoretical Bases for Teacher- and Peer-Delivered Sexual Health Promotion

    ERIC Educational Resources Information Center

    Wight, Daniel

    2008-01-01

    Purpose: This paper seeks to explore the theoretical bases for teacher-delivered and peer-delivered sexual health promotion and education. Design/methodology/approach: The first section briefly outlines the main theories informing sexual health interventions for young people, and the second discusses their implications for modes of delivery.…

  16. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical supports and inspirations for BE instructors to develop BE curricula for business contexts. It discusses how the theory of need analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  17. A queueing-theoretic analysis of the threshold-based exhaustive data-backup scheduling policy

    NASA Astrophysics Data System (ADS)

    Claeys, Dieter; Dorsman, Jan-Pieter; Saxena, Apoorv; Walraevens, Joris; Bruneel, Herwig

    2017-07-01

    We analyse the threshold-based exhaustive data backup scheduling mechanism by means of a queueing-theoretic approach. Data packets that have not yet been backed up are modelled by customers waiting for service (back-up). We obtain the probability generating function of the system content (backlog size) at random slot boundaries in steady state.
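
    A minimal slot-based simulation conveys the policy being analysed: the server waits until the backlog reaches the threshold N, then backs up exhaustively until the queue empties. This is a toy Monte Carlo check, not the paper's analytical PGF derivation; the arrival probability and threshold are arbitrary.

```python
import random

def simulate(threshold_n=10, arrival_p=0.3, service_per_slot=1,
             slots=100_000, seed=1):
    """Toy slot-based simulation of a threshold-based exhaustive
    backup policy; returns the mean backlog (system content)."""
    random.seed(seed)
    backlog, serving, total = 0, False, 0
    for _ in range(slots):
        backlog += 1 if random.random() < arrival_p else 0  # new packet?
        if not serving and backlog >= threshold_n:
            serving = True                                  # threshold reached
        if serving:
            backlog = max(0, backlog - service_per_slot)    # back up packets
            if backlog == 0:
                serving = False                             # exhaustive: stop when empty
        total += backlog
    return total / slots

print("mean backlog =", simulate())
```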

  18. Effects of a Theoretically Based Large-Scale Reading Intervention in a Multicultural Urban School District

    ERIC Educational Resources Information Center

    Sadoski, Mark; Willson, Victor L.

    2006-01-01

    In 1997, Lindamood-Bell Learning Processes partnered with Pueblo School District 60 (PSD60), a heavily minority urban district with many Title I schools, to implement a theoretically based initiative designed to improve Colorado Student Assessment Program reading scores. In this study, the authors examined achievement in Grades 3-5 during the…

  1. A Theoretically Based, Easy-to-Use Tool for Promoting Goal-Setting Behaviors in Youths

    ERIC Educational Resources Information Center

    James, Anthony G.

    2017-01-01

    Extension youth development professionals benefit from having theoretically based, easy-to-use tools for promoting goal-setting behaviors in youths. The Youth Goal-Setting Map provides practitioners with a mechanism for helping youth develop attributes that place them on a pathway to thriving. This article provides the Youth Goal-Setting Map tool,…

  2. Empirical Evaluation Indicators in Thai Higher Education: Theory-Based Multidimensional Learners' Assessment

    ERIC Educational Resources Information Center

    Sritanyarat, Dawisa; Russ-Eft, Darlene

    2016-01-01

    This study proposed empirical indicators that can be validated and adopted in higher education institutions to evaluate the quality of teaching and learning, and to serve as evaluation criteria for human resource management and development in Thai higher education institutions. The main purpose of this study was to develop empirical indicators of a…

  3. Theoretical and Experimental Study on Secondary Piezoelectric Effect Based on PZT-5

    NASA Astrophysics Data System (ADS)

    Zhang, Z. H.; Sun, B. Y.; Shi, L. P.

    2006-10-01

    The purpose of this paper is to confirm the existence of secondary and multiple piezoelectric effects theoretically and experimentally. Based on the Heckmann model, which shows the relationship among mechanical, electric, and heat energy, and on a physical model of mechanical, electric, heat, and magnetic energy, a theoretical analysis of the multiple piezoelectric effect is made through four kinds of piezoelectric equations. Experimental research on the secondary direct piezoelectric effect is conducted using PZT-5 stacks. The results of the experiment indicate that the charge generated by the secondary direct piezoelectric effect, as well as the displacement caused by the first converse piezoelectric effect, maintains good linearity with the applied voltage.

  4. Impact of Inadequate Empirical Therapy on the Mortality of Patients with Bloodstream Infections: a Propensity Score-Based Analysis

    PubMed Central

    Retamar, Pilar; Portillo, María M.; López-Prieto, María Dolores; Rodríguez-López, Fernando; de Cueto, Marina; García, María V.; Gómez, María J.; del Arco, Alfonso; Muñoz, Angel; Sánchez-Porto, Antonio; Torres-Tortosa, Manuel; Martín-Aspas, Andrés; Arroyo, Ascensión; García-Figueras, Carolina; Acosta, Federico; Corzo, Juan E.; León-Ruiz, Laura; Escobar-Lara, Trinidad

    2012-01-01

    The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort including all BSI episodes in adult patients was performed in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented. PMID:22005999
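
    A minimal sketch of the propensity score-based matching step, using synthetic data in place of the cohort: scikit-learn's logistic regression estimates each patient's probability of receiving inadequate therapy, each treated patient is matched to the nearest control on that score, and a crude odds ratio is computed in the matched sample. All variable names and data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical patient-level data: X = confounders (age, Pitt score, ...),
# t = inadequate empirical therapy (1/0), y = 30-day death (1/0).
X = rng.normal(size=(801, 4))
t = rng.integers(0, 2, size=801)
y = rng.integers(0, 2, size=801)

# Step 1: propensity score = P(inadequate therapy | confounders).
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbour matching (with replacement) on the score.
treated = np.where(t == 1)[0]
controls = np.where(t == 0)[0]
matched = np.array([controls[np.argmin(np.abs(ps[controls] - ps[i]))]
                    for i in treated])

# Step 3: crude odds ratio in the matched sample.
def odds(idx):
    p = y[idx].mean()
    return p / (1 - p)

print("matched OR =", odds(treated) / odds(matched))
```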

  5. Contextual effects in school-based violence prevention programs: a conceptual framework and empirical review.

    PubMed

    Ozer, Emily J

    2006-05-01

    This paper reviews the theoretical and practical importance of studying contextual factors in school-based violence prevention programs and provides a framework for evaluating factors at the classroom, school, and community/district levels. Sixty-two published papers describing 38 different programs were reviewed; of these, 16 were identified that reported data on contextual effects or discussed possible contextual effects on the intervention. The small number of studies precludes definitive conclusions regarding contextual effects in school-based violence prevention programs, but suggests (a) some evidence for contextual effects on program outcomes, and (b) interdependence of context and implementation factors in influencing outcomes. Editors' Strategic Implications: This review suggests that contextual effects are important to school violence prevention, as context can influence outcomes directly and through interactions with implementation factors. Consequently, characteristics of the classroom, school, and community contexts should be considered by practitioners when implementing prevention programs and measured by researchers studying the processes and outcomes of these programs.

  6. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits, including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, and air pollution mitigation. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications, and thus carry inherent assumptions that may only rarely apply to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as a critical, yet poorly studied, component of urban green space performance prediction and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small-scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the 500 m2 green roof of the building, with a roof membrane, drainage layer, and 10 cm media depth, planted with a variety of Sedum species. Soil moisture sensors and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass-balance approach during periods without precipitation or drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have…
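
    A minimal sketch of the AET/PET comparison, assuming the Priestley-Taylor equation as the physically based PET model (the study evaluates several); the lysimeter AET follows from the mass balance that 1 kg of water lost per m2 equals 1 mm of ET. All input values below are hypothetical.

```python
def priestley_taylor_pet(rn_mj, g_mj, delta_kpa, gamma_kpa=0.066, alpha=1.26):
    """Daily PET (mm) from the Priestley-Taylor equation,
    PET = alpha * (Delta / (Delta + gamma)) * (Rn - G) / lambda,
    with lambda = 2.45 MJ/kg (latent heat of vaporization)."""
    lam = 2.45
    return alpha * (delta_kpa / (delta_kpa + gamma_kpa)) * (rn_mj - g_mj) / lam

def lysimeter_aet(mass_loss_kg, area_m2):
    """AET (mm) from the weighing-lysimeter mass balance during
    rain-free, drainage-free periods: 1 kg/m2 of water = 1 mm."""
    return mass_loss_kg / area_m2

pet = priestley_taylor_pet(rn_mj=14.0, g_mj=1.0, delta_kpa=0.145)  # hypothetical day
aet = lysimeter_aet(mass_loss_kg=2.3, area_m2=0.6 * 1.2)           # 0.6 m x 1.2 m box
print(f"PET = {pet:.2f} mm/day, AET = {aet:.2f} mm/day")
```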

  7. Web-Based versus Classroom-Based Instruction: An Empirical Comparison of Student Performance

    ERIC Educational Resources Information Center

    Thrasher, Evelyn H.; Coleman, Phillip D.; Atkinson, J. Kirk

    2012-01-01

    Higher education expenditures are being increasingly targeted toward distance learning, with a large portion focused specifically on web-based instruction (WBI). WBI and classroom-based instruction (CBI) tend to offer students diverse options for their education. Thus, it is imperative that colleges and universities have ample, accurate…

  8. Is Project Based Learning More Effective than Direct Instruction in School Science Classrooms? An Analysis of the Empirical Research Evidence

    NASA Astrophysics Data System (ADS)

    Dann, Clifford

    An increasingly loud call by parents, school administrators, teachers, and even business leaders for "authentic learning", emphasizing both group work and problem solving, has led to growing enthusiasm for inquiry-based learning over the past decade. Although "inquiry" can be defined in many ways, a curriculum called "project-based learning" has recently emerged as the inquiry practice of choice, with roots in the educational constructivism that emerged in the mid-twentieth century. Often, project-based learning is framed as an alternative instructional strategy to direct instruction for maximizing student content knowledge. This study investigates the empirical evidence for such a comparison while also evaluating the overall quality of the available studies in light of accepted standards for educational research. Specifically, this thesis investigates what the body of quantitative research says about the efficacy of project-based learning vs. direct instruction when considering student acquisition of content knowledge in science classrooms. Further, existing limitations of the research pertaining to project-based learning and secondary school education are explored. The thesis concludes with a discussion of where and how we should focus our empirical efforts in the future. The research revealed that the available empirical research contains flaws in both design and instrumentation. In particular, randomization is poor among all the studies considered. The empirical evidence indicates that project-based learning curricula improved student content knowledge but that, while the results were statistically significant, increases in raw test scores were marginal.

  9. Computing theoretical rates of part C eligibility based on developmental delays.

    PubMed

    Rosenberg, Steven A; Ellison, Misoo C; Fast, Bruce; Robinson, Cordelia C; Lazar, Radu

    2013-02-01

    Part C early intervention is a nationwide program that serves infants and toddlers who have developmental delays. This article presents a methodology for computing a theoretical estimate of the proportion of children who are likely to be eligible for Part C services based on delays in any of the 5 developmental domains (cognitive, motor, communication, social-emotional and adaptive) that are assessed to determine eligibility. Rates of developmental delays were estimated from a multivariate normal cumulative distribution function. This approach calculates theoretical rates of occurrence for conditions that are defined in terms of standard deviations from the mean on several variables that are approximately normally distributed. Evidence is presented to suggest that the procedures described produce accurate estimates of rates of child developmental delays. The methodology used in this study provides a useful tool for computing theoretical rates of occurrence of developmental delays that make children candidates for early intervention.
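
    A sketch of the multivariate normal calculation, assuming a 5-domain assessment, an illustrative cutoff of -1.5 SD, and a uniform inter-domain correlation of 0.5 (the article's analysis would use the assessment instrument's empirical correlation matrix):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Five developmental domains; hypothetical cutoff and correlation.
k, cutoff, rho = 5, -1.5, 0.5
cov = np.full((k, k), rho) + np.eye(k) * (1 - rho)

# P(no delay) = P(all domain scores above the cutoff). By the symmetry
# of the zero-mean multivariate normal, this equals the joint CDF
# evaluated at -cutoff in every domain.
p_no_delay = multivariate_normal(mean=np.zeros(k), cov=cov).cdf(np.full(k, -cutoff))

# Theoretical eligibility rate: delay in at least one domain.
print(f"P(eligible) = {1 - p_no_delay:.3f}")
```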

  10. Theoretical analysis of cell separation based on cell surface marker density.

    PubMed

    Chalmers, J J; Zborowski, M; Moore, L; Mandal, S; Fang, B B; Sun, L

    1998-07-05

    A theoretical analysis was performed to determine the number of fractions into which a multidisperse, immunomagnetically labeled cell population can be separated based on surface marker (antigen) density. A number of assumptions were made in this analysis: that there is a proportionality between the number of surface markers on the cell surface and the number of immunomagnetic labels bound; that this surface marker density is independent of the cell diameter; and that only magnetic and drag forces act on the cell. Due to the normal distribution of cell diameters, a "randomizing" effect enters into the analysis, and an analogy can be made to the "theoretical plate" analysis of distillation, adsorption, and chromatography. Using the experimentally determined normal distribution of cell diameters for human lymphocytes and a breast cancer cell line, together with fluorescence-activated cell screening data of specific surface marker distributions, examples of theoretical plate calculations were made and discussed.

  11. Predicting Protein Secondary Structure Using Consensus Data Mining (CDM) Based on Empirical Statistics and Evolutionary Information.

    PubMed

    Kandoi, Gaurav; Leelananda, Sumudu P; Jernigan, Robert L; Sen, Taner Z

    2017-01-01

    Predicting the secondary structure of a protein from its sequence still remains a challenging problem. Prediction accuracies remain around 80%, even across very diverse methods. Using evolutionary information, and machine learning algorithms in particular, has had the most impact. In this chapter, we first define secondary structures, then review the Consensus Data Mining (CDM) technique, which is based on the robust GOR algorithm and the Fragment Database Mining (FDM) approach. GOR V is an empirical method utilizing a sliding-window approach to model the secondary structural elements of a protein by making use of generalized evolutionary information. FDM uses data mining from experimental structure fragments, and is able to successfully predict the secondary structure of a protein by combining experimentally determined structural fragments based on sequence similarities of the fragments. The CDM method combines predictions from GOR V and FDM in a hierarchical manner to produce consensus predictions for secondary structure: if sequence fragments are not available, it uses GOR V to make the secondary structure prediction. The online server of CDM is available at http://gor.bb.iastate.edu/cdm/.
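
    The hierarchical consensus rule can be expressed compactly. The sketch below uses stub predictors standing in for the real FDM and GOR V engines; only the fallback logic (fragment-based prediction where fragments are available, GOR elsewhere) reflects the described method.

```python
def cdm_predict(sequence, fdm_predict, gor_predict):
    """Hierarchical consensus in the spirit of CDM: prefer the
    fragment-based (FDM) prediction wherever fragments cover the
    sequence, and fall back to the statistics-based GOR prediction
    elsewhere. fdm_predict returns None at uncovered positions."""
    fdm = fdm_predict(sequence)   # e.g. ['H', 'H', None, 'E', ...]
    gor = gor_predict(sequence)   # e.g. ['H', 'C', 'C', 'E', ...]
    return [f if f is not None else g for f, g in zip(fdm, gor)]

# Toy stand-ins for the real predictors.
fdm_stub = lambda s: ['H' if c in 'AL' else None for c in s]
gor_stub = lambda s: ['C'] * len(s)
print(''.join(cdm_predict("ALGAL", fdm_stub, gor_stub)))  # -> HHCHH
```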

  12. An empirically based steady state friction law and implications for fault stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.

    2016-04-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
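
    The exact form of the proposed MFL is given in the paper; as background, the classical steady-state RSFL that it modifies, and the spring-slider critical stiffness it feeds into, can be sketched as follows (parameter values are illustrative):

```python
import numpy as np

def mu_ss(v, mu0=0.6, a=0.010, b=0.014, v0=1e-6):
    """Classical steady-state rate-and-state friction,
    mu_ss(V) = mu0 + (a - b) * ln(V / V0); (a - b) < 0 means
    velocity weakening, the regime relevant for instability."""
    return mu0 + (a - b) * np.log(v / v0)

def critical_stiffness(sigma_n, a=0.010, b=0.014, dc=1e-5):
    """Spring-slider stability threshold k_c = (b - a) * sigma_n / Dc:
    a fault patch is potentially unstable when its loading stiffness
    falls below k_c."""
    return (b - a) * sigma_n / dc

v = np.logspace(-7, 0.5, 8)   # slip rates from 0.1 um/s to ~3 m/s
print(mu_ss(v))
print("k_c =", critical_stiffness(sigma_n=100e6), "Pa/m")
```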

  13. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise, such as high/low-frequency noise, powerline interference, and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not perfect, method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.
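
    A minimal denoising sketch in the spirit of the reviewed methods, assuming the third-party PyEMD package (installed as EMD-signal) is available: decompose the noisy trace into IMFs and reconstruct without the first, noisiest, modes. Which and how many IMFs to discard is exactly the design question the reviewed papers address.

```python
import numpy as np
from PyEMD import EMD  # third-party package: pip install EMD-signal

fs = 360
t = np.arange(0, 5, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)             # toy "ECG" component
noisy = ecg + 0.2 * np.random.randn(t.size)   # broadband noise

# Decompose into intrinsic mode functions (IMFs). High-frequency noise
# concentrates in the first IMFs, so a simple denoiser drops them and
# reconstructs from the remaining modes (including the residue).
imfs = EMD().emd(noisy, t)
denoised = imfs[2:].sum(axis=0)   # discard the first two (noisiest) IMFs
print(imfs.shape, denoised.shape)
```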

  14. Empirical mode decomposition-based motion artifact correction method for functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Gu, Yue; Han, Junxia; Liang, Zhenhu; Yan, Jiaqing; Li, Zheng; Li, Xiaoli

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a promising technique for monitoring brain activity. However, it is sensitive to motion artifacts. Many methods have been developed for motion correction, such as spline interpolation, wavelet filtering, and kurtosis-based wavelet filtering. We propose a motion correction method based on empirical mode decomposition (EMD), which is applied to segments of data identified as having motion artifacts. The EMD method is adaptive, data-driven, and well suited for nonstationary data. To test the performance of the proposed EMD method and to compare it with other motion correction methods, we used simulated hemodynamic responses added to real resting-state fNIRS data. The EMD method reduced mean squared error in 79% of channels and increased signal-to-noise ratio in 78% of channels. Moreover, it produced the highest Pearson's correlation coefficient between the recovered signal and the original signal, significantly better than the comparison methods (p<0.01, paired t-test). These results indicate that the proposed EMD method is a first choice method for motion artifact correction in fNIRS.

  15. An empirically based steady state friction law and implications for fault stability

    PubMed Central

    Nielsen, S.; Violay, M.; Di Toro, G.

    2016-01-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest. PMID:27667875

  16. Feasibility of an empirically based program for parents of preschoolers with autism spectrum disorder.

    PubMed

    Dababnah, Sarah; Parish, Susan L

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, The Incredible Years, tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the intervention. Quantitative assessments of the program revealed fidelity was generally maintained, with the exception of program-specific videos. Qualitative data from individual post-intervention interviews reported parents benefited most from child emotion regulation strategies, play-based child behavior skills, parent stress management, social support, and visual resources. More work is needed to further refine the program to address parent self-care, partner relationships, and the diverse behavioral and communication challenges of children across the autism spectrum. Furthermore, parent access and retention could potentially be increased by providing in-home childcare vouchers and a range of times and locations in which to offer the program. The findings suggest The Incredible Years is a feasible intervention for parents seeking additional support for child- and family-related challenges and offers guidance to those communities currently using The Incredible Years or other related parenting programs with families of children with autism spectrum disorder.

  17. Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.

    PubMed

    Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay

    2017-02-01

    Heart sound (HS) signals always interfere with the recording of lung sound (LS) signals. This obscures the features of the LS signals and creates confusion about any pathological states of the lungs. In this work, a new method is proposed for the reduction of heart sound interference, based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, the mixed signal is first split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new Fast Fourier Transform (FFT) based prediction algorithm, and the time-domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments were conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results, and it is found to be superior to the baseline method across the different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, a signal to deviation ratio (SDR) of 9.8262, and a normalized maximum amplitude error (NMAE) of 26.94 for a 0 dB SNR value. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. An empirically based steady state friction law and implications for fault stability.

    PubMed

    Spagnuolo, E; Nielsen, S; Violay, M; Di Toro, G

    2016-04-16

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.

  19. Polarizable Empirical Force Field for Nitrogen-containing Heteroaromatic Compounds Based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E. M.; Lamoureux, Guillaume; MacKerell, Alexander D.

    2009-01-01

    The polarizable empirical CHARMM force field based on the classical Drude oscillator has been extended to the nitrogen-containing heteroaromatic compounds pyridine, pyrimidine, pyrrole, imidazole, indole, and purine. Initial parameters for the 6-membered rings were based on benzene, with non-bond parameter optimization focused on the nitrogen atoms and adjacent carbons and attached hydrogens. In the case of 5-membered rings, parameters were first developed for imidazole and transferred to pyrrole. Optimization of all parameters was performed against an extensive set of quantum mechanical and experimental data. Ab initio data were used for the determination of the initial electrostatic parameters, for the vibrational analysis, and for the optimization of the relative magnitudes of the Lennard-Jones parameters, through computations of the interactions of dimers of model compounds, model compound-water interactions, and interactions of rare gases with model compounds. The absolute values of the Lennard-Jones parameters were determined by targeting experimental heats of vaporization, molecular volumes, heats of sublimation, crystal lattice parameters, and free energies of hydration. Final scaling of the polarizabilities from the gas phase values by 0.85 was determined by reproduction of the dielectric constants of pyridine and pyrrole. The developed parameter set was extensively validated against additional experimental data such as diffusion constants, heat capacities, and isothermal compressibilities, including data as a function of temperature. PMID:19090564

  20. Biomarker-based strategy for early discontinuation of empirical antifungal treatment in critically ill patients: a randomized controlled trial.

    PubMed

    Rouzé, Anahita; Loridant, Séverine; Poissy, Julien; Dervaux, Benoit; Sendid, Boualem; Cornu, Marjorie; Nseir, Saad

    2017-09-22

    The aim of this study was to determine the impact of a biomarker-based strategy on early discontinuation of empirical antifungal treatment. Prospective randomized controlled single-center unblinded study, performed in a mixed ICU. A total of 110 patients were randomly assigned to a strategy in which empirical antifungal treatment duration was determined by (1,3)-β-D-glucan, mannan, and anti-mannan serum assays, performed on day 0 and day 4; or to a routine care strategy, based on international guidelines, which recommend 14 days of treatment. In the biomarker group, early stop recommendation was determined using an algorithm based on the results of biomarkers. The primary outcome was the percentage of survivors discontinuing empirical antifungal treatment early, defined as a discontinuation strictly before day 7. A total of 109 patients were analyzed (one patient withdraw consent). Empirical antifungal treatment was discontinued early in 29 out of 54 patients in the biomarker strategy group, compared with one patient out of 55 in the routine strategy group [54% vs 2%, p < 0.001, OR (95% CI) 62.6 (8.1-486)]. Total duration of antifungal treatment was significantly shorter in the biomarker strategy compared with routine strategy [median (IQR) 6 (4-13) vs 13 (12-14) days, p < 0.0001). No significant difference was found in the percentage of patients with subsequent proven invasive Candida infection, mechanical ventilation-free days, length of ICU stay, cost, and ICU mortality between the two study groups. The use of a biomarker-based strategy increased the percentage of early discontinuation of empirical antifungal treatment among critically ill patients with suspected invasive Candida infection. These results confirm previous findings suggesting that early discontinuation of empirical antifungal treatment had no negative impact on outcome. However, further studies are needed to confirm the safety of this strategy. This trial was registered at Clinical…
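
    The trial's actual algorithm is specified in its protocol; the sketch below only illustrates the general shape of such a biomarker-gated stop rule. The cutoffs and field names are hypothetical, not the trial's values.

```python
def early_stop_recommended(day0, day4, bdg_cutoff=80, mannan_cutoff=125):
    """Illustrative early-stop rule: recommend discontinuing empirical
    antifungals when (1,3)-beta-D-glucan and mannan stay below cutoff
    and anti-mannan serology is negative on both day 0 and day 4.
    Cutoffs (pg/mL) are hypothetical, not the trial's algorithm."""
    for sample in (day0, day4):
        if (sample["bdg"] >= bdg_cutoff
                or sample["mannan"] >= mannan_cutoff
                or sample["anti_mannan_positive"]):
            return False
    return True

day0 = {"bdg": 35, "mannan": 40, "anti_mannan_positive": False}
day4 = {"bdg": 50, "mannan": 60, "anti_mannan_positive": False}
print(early_stop_recommended(day0, day4))  # True -> stop before day 7
```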

  1. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling ("SBM") was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or "QCP"), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology for identifying clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model estimates are seen to disagree while the…
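
    SBM itself is proprietary, but a generic kernel-based, similarity-weighted estimator of the same flavor can be sketched as follows: the current observation vector is reconstructed from a memory matrix of normal-state exemplars, and a growing residual flags departure from learned behavior. Names and data are illustrative, not the vendor's implementation.

```python
import numpy as np

def similarity_estimate(x, memory, h=1.0):
    """Kernel-weighted estimate of the current physiologic state from a
    memory matrix of reference observations (rows = exemplar vectors).
    A large residual ||x - x_hat|| flags departure from learned behavior."""
    d2 = ((memory - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h ** 2))   # Gaussian similarity kernel
    w /= w.sum()
    return w @ memory                # similarity-weighted reconstruction

memory = np.random.default_rng(0).normal(size=(200, 4))  # normal-state exemplars
x_now = np.array([0.1, -0.2, 0.0, 0.3])
residual = np.linalg.norm(x_now - similarity_estimate(x_now, memory))
print("residual =", residual)   # trend this over time to detect instability
```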

  2. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    NASA Astrophysics Data System (ADS)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

  3. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    PubMed

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. Also, a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, using the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced, and the use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
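
    A compact sketch of the classical spectral method that the EMD block is paired with: an FFT along the beat index concentrates alternans at 0.5 cycles/beat, and a K-score compares that bin against a nearby noise band. The synthetic beat matrix and the noise-band choice below are illustrative.

```python
import numpy as np

def twa_k_score(st_t_matrix):
    """Classical spectral method: rows of st_t_matrix are aligned ST-T
    complexes (beats x samples, even number of beats). The FFT along
    the beat index concentrates alternans at 0.5 cycles/beat;
    K = (alternans power - noise-band mean) / noise-band SD."""
    beats = st_t_matrix.shape[0]
    m = st_t_matrix - st_t_matrix.mean(axis=0)
    spec = np.abs(np.fft.rfft(m, axis=0)) ** 2   # power along beat index
    agg = spec.mean(axis=1)                      # aggregate over ST-T samples
    alternans = agg[-1]                          # last bin = 0.5 cycles/beat
    noise = agg[int(0.44 * beats):-1]            # band just below 0.5
    return (alternans - noise.mean()) / noise.std()

rng = np.random.default_rng(0)
st_t = 0.05 * rng.standard_normal((128, 60))
st_t[::2] += 0.02                                # every-other-beat fluctuation
print("K-score =", twa_k_score(st_t))
```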

  4. Evaluating the compatibility of physics-based deterministic synthetic ground motion with empirical GMPE

    NASA Astrophysics Data System (ADS)

    Baumann, C.; Dalguer, L. A.

    2012-12-01

    Recent developments in deterministic physics-based numerical simulations of earthquakes have contributed to substantial advances in our understanding of different aspects of the earthquake mechanism and near-source ground motion. These models have great potential for identifying and predicting the variability of near-source ground motions dominated by source and/or geological effects. These advances have led to increased interest in using suites of physics-based models for reliable prediction of the ground motion of future earthquakes for seismic hazard assessment and risk mitigation, particularly in areas where there are few recorded ground motions. But before using synthetic ground motions, it is important to evaluate their reliability, particularly the upper frequency limit. Current engineering practice usually uses ground motion quantities estimated from empirical Ground Motion Prediction Equations (GMPEs), such as peak ground acceleration (PGA), peak ground velocity (PGV), peak ground displacement (PGD), and spectral ordinates, as input to assess building response for the seismic safety of future and existing structures. Therefore, it is natural to verify the compatibility of synthetic ground motions with current empirical GMPEs. In this study we attempt to do so for a suite of deterministic ground motions generated by earthquake dynamic rupture models, focusing mainly on determining the upper frequency limit at which the synthetic ground motions are compatible with GMPEs. For that purpose we have generated a suite of dynamic rupture models in a layered 1D velocity structure. The simulations include 360 dynamic rupture models with moment magnitudes in the range 5.5-7, for three styles of faulting (reverse, normal, and strike-slip), for both buried faults and surface-rupturing faults. Normal stress and frictional strength are modeled as both depth-dependent and depth-independent. The initial stress distribution follows…

  5. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF, and eta under variation of one or more of the geometrical and material parameters and 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).

  6. How much does participatory flood management contribute to stakeholders' social capacity building? Empirical findings based on a triangulation of three evaluation approaches

    NASA Astrophysics Data System (ADS)

    Buchecker, M.; Menzel, S.; Home, R.

    2013-06-01

    Recent literature suggests that dialogic forms of risk communication are more effective for building stakeholders' hazard-related social capacities. In spite of the high theoretical expectations, there is a lack of unequivocal empirical evidence on the relevance of these effects, mainly due to the methodological limitations of the existing evaluation approaches. In our paper we aim at eliciting the contribution of participatory river revitalisation projects to stakeholders' social capacity building by triangulating the findings of three evaluation studies that were based on different approaches: a field-experimental approach, a qualitative long-term ex-post approach, and a cross-sectional household survey approach. The results revealed that social learning and avoiding the loss of trust were more relevant benefits of participatory flood management than acceptance building. The results suggest that stakeholder involvement should be more explicitly designed as a tool for long-term social learning.

  7. An Empirical Introduction to the Concept of Chemical Element Based on Van Hiele's Theory of Level Transitions

    ERIC Educational Resources Information Center

    Vogelezang, Michiel; Van Berkel, Berry; Verdonk, Adri

    2015-01-01

    Between 1970 and 1990, the Dutch working group "Empirical Introduction to Chemistry" developed a secondary school chemistry education curriculum based on the educational vision of the mathematicians van Hiele and van Hiele-Geldof. This approach viewed learning as a process in which students must go through discontinuous level transitions…

  8. Empirically Based Phenotypic Profiles of Children with Pervasive Developmental Disorders: Interpretation in the Light of the DSM-5

    ERIC Educational Resources Information Center

    Greaves-Lord, Kirstin; Eussen, Mart L. J. M.; Verhulst, Frank C.; Minderaa, Ruud B.; Mandy, William; Hudziak, James J.; Steenhuis, Mark Peter; de Nijs, Pieter F.; Hartman, Catharina A.

    2013-01-01

    This study aimed to contribute to the Diagnostic and Statistical Manual (DSM) debates on the conceptualization of autism by investigating (1) whether empirically based distinct phenotypic profiles could be distinguished within a sample of mainly cognitively able children with pervasive developmental disorder (PDD), and (2) how profiles related to…

  9. Empirically Guided Coordination of Multiple Evidence-Based Treatments: An Illustration of Relevance Mapping in Children's Mental Health Services

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Bernstein, Adam; Daleiden, Eric L.

    2011-01-01

    Objective: Despite substantial progress in the development and identification of psychosocial evidence-based treatments (EBTs) in mental health, there is minimal empirical guidance for selecting an optimal "set" of EBTs maximally applicable and generalizable to a chosen service sample. Relevance mapping is a proposed methodology that…

  10. Using an empirical and rule-based modeling approach to map cause of disturbance in U.S

    Treesearch

    Todd A. Schroeder; Gretchen G. Moisen; Karen Schleeweis; Chris Toney; Warren B. Cohen; Zhiqiang Yang; Elizabeth A. Freeman

    2015-01-01

    Recently completing over a decade of research, the NASA/NACP funded North American Forest Dynamics (NAFD) project has led to several important advancements in the way U.S. forest disturbance dynamics are mapped at regional and continental scales. One major contribution has been the development of an empirical and rule-based modeling approach which addresses two of the...

  11. The Potential for Empirically Based Estimates of Expected Progress for Students with Learning Disabilities: Legal and Conceptual Issues.

    ERIC Educational Resources Information Center

    Stone, C. Addison; Doane, J. Abram

    2001-01-01

    The purpose of this article is to spark discussion regarding the value and feasibility of empirically based procedures for goal setting and evaluation of educational services. Recent legal decisions and policy debates point to the need for clearer criteria in decisions regarding appropriate educational services. Possible roles for school…

  12. PowerPoint-Based Lectures in Business Education: An Empirical Investigation of Student-Perceived Novelty and Effectiveness

    ERIC Educational Resources Information Center

    Burke, Lisa A.; James, Karen E.

    2008-01-01

    The use of PowerPoint (PPT)-based lectures in business classes is prevalent, yet it remains empirically understudied in business education research. The authors investigate whether students in the contemporary business classroom view PPT as a novel stimulus and whether these perceptions of novelty are related to students' self-assessment of…

  13. Empirical Differences in Omission Tendency and Reading Ability in PISA: An Application of Tree-Based Item Response Models

    ERIC Educational Resources Information Center

    Okumura, Taichi

    2014-01-01

    This study examined the empirical differences between the tendency to omit items and reading ability by applying tree-based item response (IRTree) models to the Japanese data of the Programme for International Student Assessment (PISA) held in 2009. For this purpose, existing IRTree models were expanded to contain predictors and to handle…

  17. Pseudopotential-based electron quantum transport: Theoretical formulation and application to nanometer-scale silicon nanowire transistors

    SciTech Connect

    Fang, Jingtian Vandenberghe, William G.; Fu, Bo; Fischetti, Massimo V.

    2016-01-21

    We present a formalism to treat quantum electronic transport at the nanometer scale based on empirical pseudopotentials. This formalism offers explicit atomistic wavefunctions and an accurate band structure, enabling a detailed study of the characteristics of devices with a nanometer-scale channel and body. Assuming externally applied potentials that change slowly along the electron-transport direction, we invoke the envelope-wavefunction approximation to apply the open boundary conditions and to develop the transport equations. We construct the full-band open boundary conditions (self-energies of device contacts) from the complex band structure of the contacts. We solve the transport equations and present the expressions required to calculate the device characteristics, such as device current and charge density. We apply this formalism to study ballistic transport in a gate-all-around (GAA) silicon nanowire field-effect transistor with a body size of 0.39 nm, a gate length of 6.52 nm, and an effective oxide thickness of 0.43 nm. Simulation results show that this device exhibits a subthreshold slope (SS) of ∼66 mV/decade and a drain-induced barrier lowering of ∼2.5 mV/V. Our theoretical calculations predict that low-dimensionality channels in a 3D GAA architecture are able to meet the performance requirements of future devices in terms of SS and electrostatic control.

  18. Proposed Empirical Entropy and Gibbs Energy Based on Observations of Scale Invariance in Open Nonequilibrium Systems.

    PubMed

    Tuck, Adrian F

    2017-09-07

    There is no widely agreed definition of entropy, and consequently Gibbs energy, in open systems far from equilibrium. One recent approach has sought to formulate an entropy and Gibbs energy based on observed scale invariances in geophysical variables, particularly in atmospheric quantities, including the molecules constituting stratospheric chemistry. The Hamiltonian flux dynamics of energy in macroscopic open nonequilibrium systems maps to energy in equilibrium statistical thermodynamics, and the corresponding equivalences of scale invariant variables with other relevant statistical mechanical variables such as entropy, Gibbs energy, and 1/(k_B T) are not just formally analogous but are also mappings. Three proof-of-concept representative examples from available adequate stratospheric chemistry observations (temperature, wind speed, and ozone) are calculated, with the aim of applying these mappings and equivalences. Potential applications of the approach to scale invariant observations from the literature, involving scales from molecular through laboratory to astronomical, are considered. Theoretical support for the approach from the literature is discussed.

  19. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector- or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e., 3D intrinsic mode functions, 3D-IMFs). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
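
    The RMTL objective above pairs a trace-norm term (task relatedness) with an l1,2-norm term (task specificity); IALM-style solvers reduce each iteration to two proximal steps. The sketch below is a generic illustration of that penalty structure, not the authors' code; the function names and the test matrix are invented for the example.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: proximal operator of tau * trace norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def row_soft_threshold(W, tau):
    """Row-wise shrinkage: proximal operator of tau * l_{1,2} norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

# Toy weight matrix (tasks x features): shrink toward a low-rank part and a
# row-sparse part, the two structures the RMTL penalties encourage.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 10))
low_rank_part = svt(W, tau=1.0)
row_sparse_part = row_soft_threshold(W, tau=0.5)
```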

  20. Dispelling myths about dissociative identity disorder treatment: an empirically based approach.

    PubMed

    Brand, Bethany L; Loewenstein, Richard J; Spiegel, David

    2014-01-01

    Some claim that treatment for dissociative identity disorder (DID) is harmful. Others maintain that the available data support the view that psychotherapy is helpful. We review the empirical support for both arguments. Current evidence supports the conclusion that phasic treatment consistent with expert consensus guidelines is associated with improvements in a wide range of DID patients' symptoms and functioning, decreased rates of hospitalization, and reduced costs of treatment. Research indicates that poor outcome is associated with treatment that does not specifically involve direct engagement with DID self-states to repair identity fragmentation and to decrease dissociative amnesia. The evidence demonstrates that carefully staged trauma-focused psychotherapy for DID results in improvement, whereas dissociative symptoms persist when not specifically targeted in treatment. The claims that DID treatment is harmful are based on anecdotal cases, opinion pieces, reports of damage that are not substantiated in the scientific literature, misrepresentations of the data, and misunderstandings about DID treatment and the phenomenology of DID. Given the severe symptomatology and disability associated with DID, iatrogenic harm is far more likely to come from depriving DID patients of treatment that is consistent with expert consensus, treatment guidelines, and current research.

  1. Ship classification using nonlinear features of radiated sound: an approach based on empirical mode decomposition.

    PubMed

    Bao, Fei; Li, Chen; Wang, Xinlong; Wang, Qingfu; Du, Shuanping

    2010-07-01

    Classification for ship-radiated underwater sound is one of the most important and challenging subjects in underwater acoustical signal processing. An approach to ship classification is proposed in this work based on analysis of ship-radiated acoustical noise in subspaces of intrinsic mode functions attained via the ensemble empirical mode decomposition. It is shown that detection and acquisition of stable and reliable nonlinear features become practically feasible by nonlinear analysis of the time series of individual decomposed components, each of which is simple enough and well represents an oscillatory mode of ship dynamics. Surrogate and nonlinear predictability analyses are conducted to probe and measure the nonlinearity and regularity. The results of both methods, which verify each other, substantiate that ship-radiated noises contain components with deterministic nonlinear features well suited to efficient classification of ships. The approach may open an alternative avenue toward object classification and identification, and it offers a new view of signals as complex as ship-radiated sound.

  2. Percentile-based Empirical Distribution Function Estimates for Performance Evaluation of Healthcare Providers

    PubMed Central

    Paddock, Susan M.; Louis, Thomas A.

    2010-01-01

    Summary Hierarchical models are widely-used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a square-error loss minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates. PMID:21918583

  3. Percentile-based Empirical Distribution Function Estimates for Performance Evaluation of Healthcare Providers.

    PubMed

    Paddock, Susan M; Louis, Thomas A

    2011-08-01

    Hierarchical models are widely-used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a square-error loss minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates.
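
    As a rough illustration of the order-statistic idea in these two records, the sketch below (synthetic draws, not the ESRD data) sorts each posterior draw of provider effects and summarizes each rank across draws, giving an EDF point estimate together with uncertainty bounds for features such as the top-10% threshold.

```python
import numpy as np

def percentile_edf(draws, lo=2.5, hi=97.5):
    """draws: (n_draws, n_providers) posterior samples of provider effects."""
    order_stats = np.sort(draws, axis=1)            # j-th column = j-th rank
    estimate = order_stats.mean(axis=0)             # EDF point estimate by rank
    lower = np.percentile(order_stats, lo, axis=0)  # uncertainty per rank
    upper = np.percentile(order_stats, hi, axis=0)
    return estimate, lower, upper

rng = np.random.default_rng(1)
true_effects = rng.normal(0.0, 1.0, 200)            # 200 hypothetical providers
draws = rng.normal(true_effects, 0.3, size=(4000, 200))
estimate, lower, upper = percentile_edf(draws)
top10_threshold = np.percentile(estimate, 90)       # candidate benchmark
```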

  4. Empirical prediction of Indian summer monsoon rainfall with different lead periods based on global SST anomalies

    NASA Astrophysics Data System (ADS)

    Pai, D. S.; Rajeevan, M.

    2006-02-01

    The main objective of this study was to develop empirical models with different seasonal lead time periods for the long range prediction of seasonal (June to September) Indian summer monsoon rainfall (ISMR). For this purpose, 13 predictors having significant and stable relationships with ISMR were derived by the correlation analysis of global grid point seasonal Sea-Surface Temperature (SST) anomalies and the tendency in the SST anomalies. The time lags of the seasonal SST anomalies were varied from 1 season to 4 years behind the reference monsoon season. The basic SST data set used was the monthly NOAA Extended Reconstructed Global SST (ERSST) data at 2° × 2° spatial grid for the period 1951–2003. The time lags of the 13 predictors derived from various areas of all three tropical ocean basins (Indian, Pacific and Atlantic Oceans) varied from 1 season to 3 years. Based on these inter-correlated predictors, 3 predictor subsets A, B and C were formed with prediction lead time periods of 0, 1 and 2 seasons, respectively, from the beginning of the monsoon season. The selected principal components (PCs) of these predictor sets were used as the input parameters for the models A, B and C, respectively. The model development period was 1955–1984. The correct model size was derived using the all-possible-regressions procedure and Mallows' Cp statistic.
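
    The model-size selection step mentioned last (all-possible regressions scored by Mallows' Cp) is generic enough to sketch. The code below uses synthetic predictors rather than the SST-derived PCs and keeps the subset whose Cp is closest to its parameter count, one common reading of the criterion.

```python
import itertools
import numpy as np

def best_subset_by_cp(X, y):
    """Enumerate predictor subsets and score each with Mallows' Cp."""
    n, k = X.shape
    Xfull = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xfull, y, rcond=None)
    s2 = np.sum((y - Xfull @ beta) ** 2) / (n - k - 1)  # full-model variance
    best = None
    for r in range(1, k + 1):
        for subset in itertools.combinations(range(k), r):
            Xs = np.column_stack([np.ones(n), X[:, subset]])
            b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            cp = np.sum((y - Xs @ b) ** 2) / s2 - n + 2 * (r + 1)
            if best is None or abs(cp - (r + 1)) < best[0]:
                best = (abs(cp - (r + 1)), subset, cp)
    return best

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 6))
y = X[:, 0] - 2.0 * X[:, 2] + rng.normal(0.0, 0.5, 30)
gap, subset, cp = best_subset_by_cp(X, y)           # expect subset ≈ (0, 2)
```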

  5. Quantitative analysis of breast DCE-MR images based on ICA and an empirical model

    NASA Astrophysics Data System (ADS)

    Goebl, Sebastian; Plant, Claudia; Lobbes, Marc; Meyer-Bäse, Anke

    2012-06-01

    DCE-MRI represents an important tool for detecting subtle kinetic changes in breast lesion tissue. Non-mass-like breast lesions exhibit an atypical dynamical behavior compared to mass-like lesions and pose a challenge to a computer-aided diagnosis system. Yet the correct diagnosis of these tumors represents an important step towards early prevention. We apply Independent Component Analysis (ICA) on DCE-MRI images to extract kinetic tumor curves. We use a known empirical mathematical model to automatically identify the tumor curves from the ICA result. Filtering out noise, our technique is superior to traditional ROI-based analysis in capturing the kinetic characteristics of the tumor curves. These typical characteristics enable us to find the optimal number of independent components for ICA. Another benefit of our method is the segmentation of tumor tissue, which is superior to segmentation from MR subtraction images. Our aim is an optimal extraction of tumor curves to provide a better basis for kinetic analysis and to distinguish between benign and malignant lesions, especially for the challenging non-mass-like breast lesions.
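
    A rough sketch of the unmixing step is below. It uses scikit-learn's FastICA on voxel time-courses and scores each independent component against a simple exponential-uptake template; the template and all array shapes are assumptions for illustration, since the paper's empirical model is not specified here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def tumor_component(series, n_components=3):
    """series: (n_timepoints, n_voxels); returns the most tumor-like curve."""
    sources = FastICA(n_components=n_components,
                      random_state=0).fit_transform(series)
    t = np.arange(series.shape[0])
    template = 1.0 - np.exp(-t / 5.0)               # assumed uptake shape
    template = (template - template.mean()) / template.std()
    scores = [abs(np.corrcoef((s - s.mean()) / s.std(), template)[0, 1])
              for s in sources.T]
    return sources[:, int(np.argmax(scores))]

rng = np.random.default_rng(3)
t = np.arange(60)
curves = np.stack([1.0 - np.exp(-t / 5.0), np.linspace(0.0, 1.0, 60)])
series = (rng.random((500, 2)) @ curves).T          # 60 timepoints x 500 voxels
series += 0.05 * rng.standard_normal(series.shape)
kinetic_curve = tumor_component(series)
```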

  6. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings.

  7. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  8. Seismic facies analysis based on self-organizing map and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian

    2015-01-01

    Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window has a marked effect on the validity of classification and requires iterative experimentation and prior knowledge. In general, classification is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are collected for validation. The application results show that seismic facies analysis can be improved and can better support interpretation. The strong tolerance for noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
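
    To make the clustering step concrete, the sketch below assigns windowed, already-denoised traces to facies classes with a 1D-grid SOM. It uses the third-party minisom package as a stand-in for the authors' implementation, and the normalization and grid size are illustrative choices.

```python
import numpy as np
from minisom import MiniSom

def facies_labels(traces, n_classes=8, iters=5000):
    """traces: (n_traces, window_len) denoised seismic waveforms."""
    traces = traces - traces.mean(axis=1, keepdims=True)
    traces /= np.abs(traces).max(axis=1, keepdims=True) + 1e-12
    som = MiniSom(1, n_classes, traces.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(traces, iters)
    return np.array([som.winner(t)[1] for t in traces])  # node index = facies

rng = np.random.default_rng(4)
demo = np.sin(np.linspace(0, 6, 32)) + 0.1 * rng.standard_normal((200, 32))
labels = facies_labels(demo)
```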

  9. Empirical mode decomposition of digital mammograms for the statistical based characterization of architectural distortion.

    PubMed

    Zyout, Imad; Togneri, Roberto

    2015-01-01

    Among the different and common mammographic signs of early-stage breast cancer, architectural distortion (AD) is the most difficult to identify. In this paper, we propose a new multiscale statistical texture analysis to characterize the presence of architectural distortion by distinguishing between textural patterns of architectural distortion and normal breast parenchyma. The proposed approach first applies the bidimensional empirical mode decomposition algorithm to decompose each mammographic region of interest into a set of adaptive and data-driven two-dimensional intrinsic mode function (IMF) layers that capture details or high-frequency oscillations of the input image. Then, a model-based approach is applied to IMF histograms to acquire the first-order statistics. The normalized entropy measure is also computed from each IMF and used as a complementary textural feature for the recognition of architectural distortion patterns. For evaluating the proposed AD characterization approach, we used a mammographic dataset of 187 true positive regions (i.e., depicting architectural distortion) and 887 true negative (normal parenchyma) regions, extracted from the DDSM database. Using the proposed multiscale textural features and the nonlinear support vector machine classifier, the best classification performance achieved, in terms of the area under the receiver operating characteristic curve (Az), was 0.88.

  10. Polarizable Empirical Force Field for Aromatic Compounds Based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E. M.; Lamoureux, Guillaume; Roux, Benoit; MacKerell, Alexander D.

    2008-01-01

    The polarizable empirical CHARMM force field based on the classical Drude oscillator has been extended to the aromatic compounds benzene and toluene. Parameters were optimized for benzene and then transferred directly to toluene, with parameters for the methyl moiety of toluene taken from the previously published work on the alkanes. Optimization of all parameters was performed against an extensive set of quantum mechanical and experimental data. Ab initio data was used for determination of the electrostatic parameters, the vibrational analysis, and in the optimization of the relative magnitudes of the Lennard-Jones parameters. The absolute values of the Lennard-Jones parameters were determined by comparing computed and experimental heats of vaporization, molecular volumes, free energies of hydration and dielectric constants. The newly developed parameter set was extensively tested against additional experimental data such as vibrational spectra in the condensed phase, diffusion constants, heat capacities at constant pressure and isothermal compressibilities including data as a function of temperature. Moreover, the structure of liquid benzene, liquid toluene and of solutions of each in water were studied. In the case of benzene, the computed and experimental total distribution function were compared, with the developed model shown to be in excellent agreement with experiment. PMID:17388420

  11. An empirically based simulation of group foraging in the harvesting ant, Messor pergandei.

    PubMed

    Plowes, Nicola J R; Ramsch, Kai; Middendorf, Martin; Hölldobler, Bert

    2014-01-07

    We present an empirically based group model of foraging interactions in Messor pergandei, the Sonoran desert harvesting ant. M. pergandei colonies send out daily foraging columns consisting of tens of thousands of individual ants. Each day, the directions of the columns may change depending on the resource availability and the neighbor interactions. If neighboring columns meet, ants fight, and subsequent foraging is suppressed. M. pergandei colonies face a general problem which is present in many systems: dynamic spatial partitioning in a constantly changing environment, while simultaneously minimizing negative competitive interactions with multiple neighbors. Our simulation model of a population of column foragers is spatially explicit and includes neighbor interactions. We study how different behavioral strategies influence resource exploitation and space use for different nest distributions and densities. Column foraging in M. pergandei is adapted to the spatial and temporal properties of their natural habitat. Resource and space use is maximized both at the colony and the population level by a model with a behavioral strategy including learning and fast forgetting rates.

  12. Interdigitated silver-polymer-based antibacterial surface system activated by oligodynamic iontophoresis - an empirical characterization study.

    PubMed

    Shirwaiker, Rohan A; Wysk, Richard A; Kariyawasam, Subhashinie; Voigt, Robert C; Carrion, Hector; Nembhard, Harriet Black

    2014-02-01

    There is a pressing need to control the occurrence of nosocomial infections due to their detrimental effects on patient well-being and the rising treatment costs. To prevent the contact transmission of such infections via health-critical surfaces, a prophylactic surface system that consists of an interdigitated array of oppositely charged silver electrodes with polymer separations and utilizes oligodynamic iontophoresis has been recently developed. This paper presents a systematic study that empirically characterizes the effects of the surface system parameters on its antibacterial efficacy, and validates the system's effectiveness. In the first part of the study, a fractional factorial design of experiments (DOE) was conducted to identify the statistically significant system parameters. The data were used to develop a first-order response surface model to predict the system's antibacterial efficacy based on the input parameters. In the second part of the study, the effectiveness of the surface system was validated by evaluating it against four bacterial species responsible for several nosocomial infections (Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa, and Enterococcus faecalis) alongside non-antibacterial polymer (acrylic) control surfaces. The system demonstrated statistically significant efficacy against all four bacteria. The results indicate that given a constant total effective surface area, the system designed with micro-scale features (minimum feature width: 20 μm) and activated by a 15 μA direct current will provide the most effective antibacterial prophylaxis.
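
    The first-order response-surface step translates directly into code. The sketch below fits such a model to a toy half-fraction of a 2^3 design with coded factor levels; the factor names and response values are invented, not the study's data.

```python
import numpy as np

# Coded levels (-1/+1) for three hypothetical factors, e.g. feature width,
# current magnitude, exposure time; a 2^(3-1) half fraction (x3 = -x1*x2).
X = np.array([[-1, -1, -1],
              [ 1, -1,  1],
              [-1,  1,  1],
              [ 1,  1, -1]], dtype=float)
y = np.array([2.1, 4.0, 3.2, 3.9])                  # assumed efficacy response

Xd = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)       # b0, b1, b2, b3

def predict(x1, x2, x3):
    """First-order response surface: predicted efficacy at coded settings."""
    return coef[0] + coef[1] * x1 + coef[2] * x2 + coef[3] * x3

print(predict(1.0, 1.0, 1.0))                       # extrapolated corner point
```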

  13. Topological phase transition of single-crystal Bi based on empirical tight-binding calculations

    NASA Astrophysics Data System (ADS)

    Ohtsubo, Yoshiyuki; Kimura, Shin-ichi

    2016-12-01

    The topological order of single-crystal Bi and its surface states on the (111) surface are studied in detail based on empirical tight-binding (TB) calculations. New TB parameters are presented that are used to calculate the surface states of semi-infinite single-crystal Bi(111), which agree with the experimental angle-resolved photoelectron spectroscopy results. The influence of the crystal lattice distortion is surveyed and it is revealed that a topological phase transition is driven by in-plane expansion with topologically non-trivial bulk bands. In contrast with the semi-infinite system, the surface-state dispersions on finite-thickness slabs are non-trivial irrespective of the bulk topological order. The role of the interaction between the top and bottom surfaces in the slab is systematically studied, and it is revealed that a very thick slab is required to properly obtain the bulk topological order of Bi from the (111) surface state: above 150 biatomic layers in this case.

  14. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    NASA Astrophysics Data System (ADS)

    Le, Chengfeng; Lehrter, John C.; Hu, Chuanmin; Obenour, Daniel R.

    2016-03-01

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O₂ < 2 mg L⁻¹) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and volume were related to Moderate Resolution Imaging Spectroradiometer-derived monthly estimates of river plume area (km²) and average inner shelf chlorophyll a concentration (Chl a, mg m⁻³). River plume area in June was negatively related with midsummer hypoxic area (km²) and volume (km³), while July inner shelf Chl a was positively related to hypoxic area and volume. Multiple regression models using river plume area and Chl a as independent variables accounted for most of the variability in hypoxic area (R² = 0.92) or volume (R² = 0.89). These models explain more variation in hypoxic area than models using Mississippi River nutrient loads as independent variables. The results here also support a hypothesis that confinement of the river plume to the inner shelf is an important mechanism controlling hypoxic area and volume in this region.

  15. Extracting and compensating for FOG vibration error based on improved empirical mode decomposition with masking signal.

    PubMed

    Chen, Xiyuan; Wang, Wei

    2017-05-01

    Vibration is an important error source in fiber-optic gyroscopes (FOGs), and the extraction and compensation of vibration signals are important ways to eliminate the error and improve the accuracy of a FOG. To decompose the vibration signal better, a new algorithm based on empirical mode decomposition (EMD) with a masking signal is proposed in this paper. The masking signal is a sinusoidal signal, and its frequency and amplitude are selected using improved particle swarm optimization (PSO). The proposed algorithm is called adaptive masking EMD (AM-EMD). First, the optimal frequency value and range of the masking signal are analyzed and presented. Then, an optimal decomposition of the vibration signal is obtained using PSO to find the optimal frequency and amplitude of the masking signal. Finally, the extraction and compensation of the vibration signal are completed according to the mean value of the intrinsic mode functions (IMFs) and the correlation coefficients between the IMFs and the vibration signal. Experiments show that the new method can decompose the signal more accurately than traditional methods, and the precision of compensation is higher.
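
    The masking-signal trick itself is compact: run EMD on the signal plus and minus a sinusoidal mask and average the matching IMFs so the mask cancels. The sketch below uses the third-party PyEMD package and a fixed mask; the PSO search over mask frequency and amplitude described in the paper is omitted.

```python
import numpy as np
from PyEMD import EMD

def masked_first_imf(x, t, f_mask, a_mask):
    """First IMF of x extracted with a sinusoidal masking signal."""
    mask = a_mask * np.sin(2.0 * np.pi * f_mask * t)
    imf_plus = EMD().emd(x + mask)[0]
    imf_minus = EMD().emd(x - mask)[0]
    return 0.5 * (imf_plus + imf_minus)             # mask cancels on average

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 5 * t)   # two mixed tones
vibration_part = masked_first_imf(x, t, f_mask=40.0, a_mask=1.0)
```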

  16. Empirical likelihood-based confidence intervals for length-biased data

    PubMed Central

    Ning, J.; Qin, J.; Asgharian, M.; Shen, Y.

    2013-01-01

    Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of errors of commonly used summary statistics. We fill this gap using the empirical likelihood-based confidence intervals by adapting this method to right-censored length-biased survival data. Both large and small sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. PMID:23027662

  17. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    SciTech Connect

    Liu, Lu; Albright, Austin P; Rahimpour, Alireza; Guo, Jiandong; Qi, Hairong; Liu, Yilu

    2017-01-01

    Wide-area-measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient data analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.

  18. The mature minor: some critical psychological reflections on the empirical bases.

    PubMed

    Partridge, Brian C

    2013-06-01

    Moral and legal notions engaged in clinical ethics should possess not only analytic clarity but a sound basis in empirical findings. The latter condition brings into question the expansion of the mature minor exception. The mature minor exception in the healthcare law of the United States has served to enable those under the legal age to consent to medical treatment. Although originally developed primarily for minors in emergency or quasi-emergency need for health care, it was expanded especially from the 1970s in order to cover unemancipated minors older than 14 years. This expansion initially appeared plausible, given psychological data that showed the intellectual capacity of minors over 14 to recognize the causal connection between their choices and the consequences of their choices. However, subsequent psychological studies have shown that minors generally fail to have realistic affective and evaluative appreciations of the consequences of their decisions, because they tend to over-emphasize short-term benefits and underestimate long-term risks. Also, unlike most decision-makers over 21, the decisions of minors are more often marked by the lack of adequate impulse control, all of which is reflected in the far higher involvement of adolescents in acts of violence, intentional injury, and serious automobile accidents. These effects are more evident in circumstances that elicit elevated affective responses. The advent of brain imaging has allowed the actual visualization of qualitative differences between how minors and persons over the age of 21 generally assess risks and benefits and make decisions. In the case of most under the age of 21, subcortical systems are not adequately checked by the prefrontal systems that are involved in adult executive decisions. The neuroanatomical and psychological model developed by Casey, Jones, and Somerville offers an empirical insight into the qualitative differences in the neuroanatomical and neuropsychological bases

  19. The effects of sampling on the efficiency and accuracy of k-mer indexes: Theoretical and empirical comparisons using the human genome.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2017-01-01

    One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index such as BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at a cost of decreasing query accuracy. We focus on finding highly similar local alignments (HSLA) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
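
    A toy version of the indexing trade-off is easy to state in code: sampling the stored k-mer start positions with a stride shrinks the index, and a soft (larger) stride trades away some guaranteed seed hits. The sketch below is illustrative only and does not reproduce BLAST's data structures.

```python
from collections import defaultdict

def build_index(seq, k=11, step=1):
    """k-mer -> positions; step=1 stores all occurrences, step>1 samples."""
    index = defaultdict(list)
    for i in range(0, len(seq) - k + 1, step):
        index[seq[i:i + k]].append(i)
    return index

def seed_hits(query, index, k=11):
    """All (query_offset, db_position) seed pairs found in the index."""
    return [(j, pos)
            for j in range(len(query) - k + 1)
            for pos in index.get(query[j:j + k], ())]

genome = "ACGTTGCA" * 2000
sampled = build_index(genome, k=11, step=4)         # ~4x fewer stored positions
hits = seed_hits("ACGTTGCAACGTTGCA", sampled)       # seeds for extension
```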

  20. Novel echocardiographic approach to the accurate measurement of pulmonary vascular resistance based on a theoretical formula in patients with left heart failure -- pilot study.

    PubMed

    Kanda, Takashi; Fujita, Masashi; Iida, Osamu; Masuda, Masaharu; Okamoto, Shin; Ishihara, Takayuki; Nanto, Kiyonori; Shiraki, Tatsuya; Takahara, Mitsuyoshi; Sakata, Yasushi; Uematsu, Masaaki

    2015-01-01

    Several non-invasive methods for measuring pulmonary vascular resistance (PVR) have been proposed to date, but they remain empirical, lacking sufficient accuracy to be used in clinical practice. The aims of this study were to propose a novel echocardiographic measurement of PVR based on a theoretical formula and to investigate the feasibility and accuracy of this method in patients with heart failure. Echocardiography was performed in 27 patients before right heart catheterization. Peak tricuspid regurgitation pressure gradient (TRPG), pulmonary regurgitation pressure gradient in end-diastole (PRPGed), and cardiac output derived from the time-velocity integral and the diameter of the left ventricular outflow tract (COLVOT) were measured. PVR based on the theoretical formula (PVRtheo) was calculated as (TRPG − PRPGed)/(3 × COLVOT) in Wood units (WU). The results were compared with PVR obtained by right heart catheterization (PVRcath) using linear regression and Bland-Altman analysis. Mean PVRcath was 2.4 ± 1.4 WU. PVRtheo correlated well with PVRcath (r = 0.83, P < 0.001). On Bland-Altman analysis the mean difference was 0.1 ± 0.7 WU. The limits of agreement were smaller than for other non-invasive estimations previously reported. The new echocardiographic approach based on a theoretical formula provides a non-invasive and accurate assessment of PVR in patients with heart failure.
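
    The proposed estimate reduces to one line of arithmetic, shown below with the abstract's units (pressure gradients in mmHg, cardiac output in L/min, result in Wood units); the example numbers are invented.

```python
def pvr_theoretical(trpg_mmhg, prpged_mmhg, co_lvot_lpm):
    """PVR in Wood units: (TRPG - PRPGed) / (3 * CO_LVOT)."""
    return (trpg_mmhg - prpged_mmhg) / (3.0 * co_lvot_lpm)

# TRPG 45 mmHg, PRPGed 9 mmHg, CO 4.8 L/min -> 2.5 WU.
print(pvr_theoretical(45.0, 9.0, 4.8))
```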

  1. A Theoretical Contact Mechanics Model of Machine Joint Interfaces Based on Fractal Theory

    NASA Astrophysics Data System (ADS)

    Liu, Wenwei; Wang, Yuanhang; Li, Xiaobing; Huang, Chuangmian; Yang, Jianfeng; Pan, GuangZe; Ding, Xiaojian

    2017-06-01

    To obtain a more accurate contact mechanics model of machine joint interfaces, a theoretical model based on fractal theory is proposed. An improved 3D Weierstrass–Mandelbrot (WM) fractal function is used to characterize the contact surface; contact load and contact area equations for asperities in the elastoplastic deformation regime are established; and the area-displacement and force-displacement relationships in the elastoplastic deformation regime are solved based on Hertz contact theory and fractal theory. The present model is shown to be effective by comparing it with four classical contact models and with test data. Furthermore, simulations and numerical calculations reveal a nonlinear relation between the influencing factors and the contact area.

  2. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    PubMed Central

    Henggeler, Scott W.; Sheidow, Ashli J.

    2011-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief strategic family therapy. In addition to summarizing the theoretical and clinical bases of these treatments, their results in efficacy and effectiveness trials are examined with particular emphasis on any demonstrated capacity to achieve favorable outcomes when implemented by real world practitioners in community practice settings. Special attention is also devoted to research on purported mechanisms of change as well as the long-term sustainability of outcomes achieved by these treatment models. Importantly, we note that the developers of each of the models have developed quality assurance systems to support treatment fidelity and youth and family outcomes; and the developers have formed purveyor organizations to facilitate the large scale transport of their respective treatments to community settings nationally and internationally. PMID:22283380

  3. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
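
    The evaluation index named above is simple to write down under one plausible reading: the RMS of the residual between the raw signal and the retained component, normalized by the signal's RMS. The sketch below encodes that reading with synthetic data; it is not necessarily the authors' exact definition.

```python
import numpy as np

def rrmse(signal, component):
    """Relative root-mean-square error of keeping only `component`."""
    return np.sqrt(np.mean((signal - component) ** 2) /
                   np.mean(signal ** 2))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 50 * t)                  # stand-in for a fault IMF
signal = clean + 0.3 * rng.standard_normal(t.size)
print(rrmse(signal, clean))                         # score one candidate split
```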

  4. An empirically based tool for analyzing morbidity associated with operations for congenital heart disease

    PubMed Central

    Jacobs, Marshall L.; O’Brien, Sean M.; Jacobs, Jeffrey P.; Mavroudis, Constantine; Lacour-Gayet, Francois; Pasquali, Sara K.; Welke, Karl; Pizarro, Christian; Tsai, Felix; Clarke, David R.

    2013-01-01

    Objective: Congenital heart surgery outcomes analysis requires reliable methods of estimating the risk of adverse outcomes. Contemporary methods focus primarily on mortality or rely on expert opinion to estimate morbidity associated with different procedures. We created an objective, empirically based index that reflects statistically estimated risk of morbidity by procedure. Methods: Morbidity risk was estimated using data from 62,851 operations in the Society of Thoracic Surgeons Congenital Heart Surgery Database (2002-2008). Model-based estimates with 95% Bayesian credible intervals were calculated for each procedure’s average risk of major complications and average postoperative length of stay. These 2 measures were combined into a composite morbidity score. A total of 140 procedures were assigned scores ranging from 0.1 to 5.0 and sorted into 5 relatively homogeneous categories. Results: Model-estimated risk of major complications ranged from 1.0% for simple procedures to 38.2% for truncus arteriosus with interrupted aortic arch repair. Procedure-specific estimates of average postoperative length of stay ranged from 2.9 days for simple procedures to 42.6 days for a combined atrial switch and Rastelli operation. Spearman rank correlation between raw rates of major complication and average postoperative length of stay was 0.82 in procedures with n greater than 200. Rate of major complications ranged from 3.2% in category 1 to 30.0% in category 5. Aggregate average postoperative length of stay ranged from 6.3 days in category 1 to 34.0 days in category 5. Conclusions: Complication rates and postoperative length of stay provide related but not redundant information about morbidity. The Morbidity Scores and Categories provide an objective assessment of risk associated with operations for congenital heart disease, which should facilitate comparison of outcomes across cohorts with differing case mixes. PMID:22835225

  5. Empirical population and public health ethics: A review and critical analysis to advance robust empirical-normative inquiry.

    PubMed

    Knight, Rod

    2016-05-01

    The field of population and public health ethics (PPHE) has yet to fully embrace the generation of evidence as an important project. This article reviews the philosophical debates related to the 'empirical turn' in clinical bioethics, and critically analyses how PPHE has engaged, and can engage, with the philosophical implications of generating empirical data within the task of normative inquiry. A set of five conceptual and theoretical issues pertaining to population health that are unresolved and could potentially benefit from empirical PPHE approaches to normative inquiry are discussed. These issues differ from traditional empirical bioethical approaches in that they emphasize (1) concerns related to the population, (2) 'upstream' policy-relevant health interventions, within and outside of the health care system, and (3) the prevention of illness and disease. Within each theoretical issue, a conceptual example from population and public health approaches to HIV prevention and health promotion is interrogated. Based on the review and critical analysis, this article concludes that empirical-normative approaches to population and public health ethics would be most usefully pursued as an iterative project (rather than a linear one), in which the normative informs the empirical questions to be asked and new empirical evidence constantly directs conceptualizations of what constitutes morally robust public health practices. Finally, a conceptualization of an empirical population and public health ethics is advanced in order to open up new interdisciplinary 'spaces', in which empirical and normative approaches to ethical inquiry are transparently (and ethically) integrated.

  6. Uncertainty Propagation and the Fano-Based Information Theoretic Method: A Radar Example

    DTIC Science & Technology

    2015-02-01

    T. Hogg, "Phase transitions and the search problem," Artificial Intelligence, vol. 81, 1996, pp. 1-15 [39]. … on the order of N operations (an N-dimensional problem). While entropy-based methods operate non-parametrically, such that the probability does not have to be… suggested the use of an information theoretic approach to the design of radar waveforms. Dr. Bell formulated and obtained a solution to the problem.

  7. Shape of the self-concept clarity change during group psychotherapy predicts the outcome: an empirical validation of the theoretical model of the self-concept change

    PubMed Central

    Styła, Rafał

    2015-01-01

    Background: Self-Concept Clarity (SCC) describes the extent to which the schemas of the self are internally integrated, well defined, and temporally stable. This article presents a theoretical model that describes how different shapes of SCC change (especially stable increase and “V” shape) observed in the course of psychotherapy are related to the therapy outcome. Linking the concept of Jean Piaget and the dynamic systems theory, the study postulates that a stable SCC increase is needed for the participants with a rather healthy personality structure, while SCC change characterized by a “V” shape or fluctuations is optimal for more disturbed patients. Method: Correlational study in a naturalistic setting with repeated measurements (M = 5.8) was conducted on the sample of 85 patients diagnosed with neurosis and personality disorders receiving intensive eclectic group psychotherapy under routine inpatient conditions. Participants filled in the Self-Concept Clarity Scale (SCCS), Symptoms' Questionnaire KS-II, and Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. The SCCS was also administered every 2 weeks during psychotherapy. Results: As hypothesized, among the relatively healthiest group of patients the stable SCC increase was related to positive treatment outcome, while more disturbed patients benefited from the fluctuations and “V” shape of SCC change. Conclusions: The findings support the idea that for different personality dispositions either a monotonic increase or transient destabilization of SCC is a sign of a good treatment prognosis. PMID:26579001

  8. Shape of the self-concept clarity change during group psychotherapy predicts the outcome: an empirical validation of the theoretical model of the self-concept change.

    PubMed

    Styła, Rafał

    2015-01-01

    Self-Concept Clarity (SCC) describes the extent to which the schemas of the self are internally integrated, well defined, and temporally stable. This article presents a theoretical model that describes how different shapes of SCC change (especially stable increase and "V" shape) observed in the course of psychotherapy are related to the therapy outcome. Linking the concept of Jean Piaget and the dynamic systems theory, the study postulates that a stable SCC increase is needed for the participants with a rather healthy personality structure, while SCC change characterized by a "V" shape or fluctuations is optimal for more disturbed patients. Correlational study in a naturalistic setting with repeated measurements (M = 5.8) was conducted on the sample of 85 patients diagnosed with neurosis and personality disorders receiving intensive eclectic group psychotherapy under routine inpatient conditions. Participants filled in the Self-Concept Clarity Scale (SCCS), Symptoms' Questionnaire KS-II, and Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. The SCCS was also administered every 2 weeks during psychotherapy. As hypothesized, among the relatively healthiest group of patients the stable SCC increase was related to positive treatment outcome, while more disturbed patients benefited from the fluctuations and "V" shape of SCC change. The findings support the idea that for different personality dispositions either a monotonic increase or transient destabilization of SCC is a sign of a good treatment prognosis.

  9. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    SciTech Connect

    Poirier, Y.; Sommerville, M.; Johnstone, C.D.; Gräfe, J.; Nygren, I.; Jacso, F.; Khan, R.; Villareal-Barajas, J.E.; Tambasco, M.

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.

  10. An empirical comparison of character-based and coalescent-based approaches to species delimitation in a young avian complex.

    PubMed

    McKay, Bailey D; Mays, Herman L; Wu, Yuchun; Li, Hui; Yao, Cheng-Te; Nishiumi, Isao; Zou, Fasheng

    2013-10-01

    The process of discovering species is a fundamental responsibility of systematics. Recently, there has been a growing interest in coalescent-based methods of species delimitation aimed at objectively identifying species early in the divergence process. However, few empirical studies have compared these new methods with character-based approaches for discovering species. In this study, we applied both character-based and coalescent-based approaches to delimit species in a closely related avian complex, the light-vented/Taiwan bulbul (Pycnonotus sinensis/Pycnonotus taivanus). Population aggregation analyses of plumage, mitochondrial and 13 nuclear intron character data sets produced conflicting species hypotheses, with plumage data suggesting three species, mitochondrial data suggesting two species, and nuclear intron data suggesting one species. Such conflict is expected among recently diverged species, and by integrating all sources of data, we delimited three species verified with independently congruent character evidence, as well as a more weakly supported fourth species identified by a single character. Attempts to validate species hypotheses using Bayesian Phylogenetics and Phylogeography (BPP), a coalescent-based method of species delimitation, revealed several issues that can seemingly affect statistical support for species recognition. We found that θ priors had a dramatic impact on speciation probabilities, with lower values consistently favouring splitting and higher values consistently favouring lumping. More resolved guide trees also resulted in overall higher speciation probabilities. Finally, we found suggestive evidence that BPP is sensitive to the divergent effects of nonrandom mating caused by intraspecific processes such as isolation-by-distance, and therefore BPP may not be a conservative method for delimiting independently evolving population lineages. Based on these concerns, we questioned the reliability of BPP results and based our

  11. Theoretical frameworks informing family-based child and adolescent obesity interventions: A qualitative meta-synthesis.

    PubMed

    Alulis, Sarah; Grabowski, Dan

    2017-08-24

    Child and adolescent obesity trends are rising throughout the world, revealing treatment difficulties and a lack of consensus about treatment. The family system is broadly viewed as a potential setting for facilitation of behaviour change. Therefore, family-based interventions have come into focus. However, the use of theoretical frameworks to strengthen these interventions is rare and very uneven. The aim was to conduct a qualitative meta-synthesis of family-based interventions for child and adolescent obesity to identify the theoretical frameworks applied, and thus to understand how theory is used in practice. A literature review was conducted between January and March 2016, and a total of 35 family-based interventions were selected for analysis. Eleven interventions explicitly stated that theory guided their development and were classified as theory-inspired; social cognitive theory, self-efficacy, and Family Systems Theory appeared most frequently. The remaining 24 were classified as theory-related, as theoretical elements of self-monitoring, stimulus control, reinforcement, and modelling were used. The designs of family-based interventions reveal numerous inconsistencies and a significant void between research results and health care practice. Based on the analysis, this article proposes three themes to be used as focus points when designing future interventions and when selecting theories for the development of solid, theory-based frameworks for application. The themes are: (1) age of target group, (2) intervention objective, and (3) self-efficacy and readiness for change.

  12. A Scaling-based Robust Empirical Model of Stream Dissolved Oxygen for the Eastern United States

    NASA Astrophysics Data System (ADS)

    Siddik, M. A. Z.; Abdul-Aziz, O. I.; Ishtiaq, K. S.

    2016-12-01

    We predicted the diurnal cycles of hourly dissolved oxygen (DO) in streams by using a scaling-based empirical model. A single reference observation from each DO cycle was used as a scaling parameter to convert the DO cycles into a single dimensionless diurnal curve, which was then estimated by employing an extended stochastic harmonic algorithm (ESHA). Hourly DO observations from the growing season (May-August) during 2008-2015 at sixteen USGS water quality monitoring stations in the eastern U.S. were used for model calibration and validation. The study sites incorporated gradients in climate (tropical vs. temperate), land use (rural vs. urban vs. forest vs. coastal), and catchment size (2.4-184.0 mi²), representing different USEPA level III ecoregions. The estimated model parameters showed notable spatiotemporal robustness by collapsing into narrow ranges across the growing seasons and study sites. DO predicted using the site-specific, temporally averaged model parameters and a day-specific single reference observation exhibited good model fitting efficiency and accuracy. Model performance was also assessed by simulating the DO time series using a regional-scale parameter set obtained by spatiotemporally aggregating (averaging) the estimated parameters across all sites. Further, model robustness to individual and simultaneous perturbations in parameters was determined by calculating analytical sensitivity and uncertainty measures. The study continues our previous research toward a regional-scale predictive model of the diurnal cycles of DO. The model can be used to estimate missing data in observed fine-resolution DO time series across the eastern USA from limited observations, using a single set of parameters. The fine-resolution DO time series will be useful for dynamically assessing the general health of aquatic ecosystems.
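
    A minimal sketch of the scaling idea described above, assuming a simple truncated Fourier series stands in for the paper's extended stochastic harmonic algorithm (ESHA): each diurnal cycle is non-dimensionalized by one reference observation, the dimensionless curve is fit by least squares, and a full cycle can then be reconstructed from a single reference value. The synthetic data and two-harmonic form are illustrative assumptions.

```python
# Sketch: scale a diurnal DO cycle by one reference value, fit harmonics.
import numpy as np

hours = np.arange(24)

def harmonic_design(t, n_harmonics=2, period=24.0):
    """Design matrix [1, cos, sin, cos2, sin2, ...] for least-squares fitting."""
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

# Synthetic hourly DO (mg/L) with a diurnal photosynthesis/respiration cycle.
rng = np.random.default_rng(0)
do = 8.0 + 1.5 * np.sin(2 * np.pi * (hours - 9) / 24) + rng.normal(0, 0.1, 24)

ref = do[6]        # single reference observation (e.g. the 6 a.m. value)
scaled = do / ref  # dimensionless diurnal curve

X = harmonic_design(hours)
coef, *_ = np.linalg.lstsq(X, scaled, rcond=None)

# Reconstruct a full DO cycle from the reference value alone.
do_predicted = ref * (X @ coef)
print(np.round(do_predicted - do, 2))  # small residuals
```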

  13. An altimetry-based gravest empirical mode south of Africa: 1. Development and validation

    NASA Astrophysics Data System (ADS)

    Swart, Sebastiaan; Speich, Sabrina; Ansorge, Isabelle J.; Lutjeharms, Johann R. E.

    2010-03-01

    Hydrographic transects of the Antarctic Circumpolar Current (ACC) south of Africa are projected into baroclinic stream function space parameterized by pressure and dynamic height. This produces a two-dimensional gravest empirical mode (GEM) that captures more than 97% of the total density and temperature variance in the ACC domain. Weekly maps of absolute dynamic topography, derived from satellite altimetry, are combined with the GEM to obtain a 16 year time series of temperature and salinity fields. The time series of thermohaline fields is compared with independent in situ observations. The residuals decrease sharply below the thermocline, and through the entire water column the mean root-mean-square (RMS) errors are 0.15°C, 0.02, and 0.02 kg m⁻³ for temperature, salinity, and density, respectively. The positions of ACC fronts are tracked in time using satellite altimetry data; these locations correspond to both the observed and GEM-based positions. The available temperature and salinity information allows one to calculate the baroclinic zonal velocity field between the surface and 2500 dbar. This is compared with velocity measurements from repeat hydrographic transects along the GoodHope line. The net accumulated transports of the ACC derived from these different methods are within 1-3 Sv of each other. Similarly, GEM-produced cross-sectional velocities at 300 dbar compare closely with the observed data, with the RMS difference not exceeding 0.03 m s⁻¹. The continuous time series of thermohaline fields described here is further exploited to understand the dynamic nature of the ACC fronts in the region by Swart and Speich (2010).
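
    In practice the GEM amounts to a two-dimensional lookup table: temperature (or salinity) as a function of dynamic height and pressure, evaluated wherever altimetry supplies dynamic topography. The sketch below illustrates that lookup with a toy temperature field; the grid and values are synthetic placeholders, not the GoodHope data.

```python
# Sketch: build a toy GEM field and read a temperature section out of it
# using altimetry-derived dynamic topography.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical GEM: mean temperature as a function of dynamic height (m)
# and pressure (dbar), as would be built by smoothing many hydrographic casts.
dyn_height = np.linspace(0.8, 2.0, 25)      # dynamic height axis
pressure = np.linspace(0, 2500, 51)         # pressure axis
T_gem = (20.0 * np.exp(-pressure[None, :] / 700.0)
         * (dyn_height[:, None] / 2.0))     # toy thermocline structure

gem = RegularGridInterpolator((dyn_height, pressure), T_gem)

# Given altimetric dynamic topography along a transect, reconstruct the
# full temperature section by evaluating the GEM at every pressure level.
adt_along_track = np.array([1.0, 1.3, 1.7])  # from weekly altimetry maps
pts = np.array([(h, p) for h in adt_along_track for p in pressure])
temperature_section = gem(pts).reshape(len(adt_along_track), len(pressure))
print(temperature_section.shape)  # (3, 51): stations x pressure levels
```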

  14. Analyzing the "nature" and "specific effectiveness" of clinical empathy: a theoretical overview and contribution towards a theory-based research agenda.

    PubMed

    Neumann, Melanie; Bensing, Jozien; Mercer, Stewart; Ernstmann, Nicole; Ommen, Oliver; Pfaff, Holger

    2009-03-01

    To establish sound empirical evidence that clinical empathy (abbreviated as CE) is a core element of the clinician-patient relationship with profound therapeutic potential, a substantial theory-based understanding of CE in medical care and medical education is still required. The two aims of the present paper are, therefore, (1) to give a multidisciplinary overview of the "nature" and "specific effectiveness" of CE, and (2) to use this base to derive relevant questions for a theory-based research agenda. We made an effort to identify current and past literature on conceptual and empirical work focusing on empathy and CE across a multiplicity of disciplines. We review the material in a structured fashion. We describe the "nature" of empathy by briefly summarizing concepts and models from sociology, psychology, social psychology, education, (social-)epidemiology, and the neurosciences. To explain the "specific effectiveness" of CE for patients, we develop the "Effect model of empathic communication in the clinical encounter", which demonstrates how an empathically communicating clinician can achieve improved patient outcomes. Both parts of the theoretical findings are synthesized into a theory-based research agenda with the following key hypotheses: (1) CE is a determinant of quality in medical care, (2) clinicians' biographical experiences influence their empathic behavior, and (3) CE is affected by situational factors. The main conclusions of our review are twofold. First, CE seems to be a fundamental determinant of quality in medical care, because it enables the clinician to fulfill key medical tasks more accurately, thereby achieving enhanced patient health outcomes. Second, integrating biographical experiences and situational factors as determinants of CE in medical care and medical education appears crucial for developing and promoting CE and ultimately ensuring high-quality patient care. Due to the complexity and

  15. Knowledge-based immunosuppressive therapy for kidney transplant patients--from theoretical model to clinical integration.

    PubMed

    Seeling, Walter; Plischke, Max; de Bruin, Jeroen S; Schuh, Christian

    2015-01-01

    Immunosuppressive therapy is a risky necessity after a patient receives a kidney transplant. To reduce the risks, a knowledge-based system was developed that determines the right dosage of the immunosuppressive agent Tacrolimus. A theoretical model to classify medication blood levels as well as medication adaptations was created using data from almost 500 patients and over 13,000 examinations. This model was then translated into an Arden Syntax knowledge base and integrated directly into the hospital information system of the Vienna General Hospital. In this paper we give an overview of the construction and integration of such a system.
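
    The sketch below gives a flavor of the kind of rule such a knowledge base encodes: classify a trough blood level against a target range and suggest a dose adaptation. The target range and adjustment steps are hypothetical placeholders written in Python purely for illustration; the actual system encodes clinically validated rules in Arden Syntax inside the hospital information system and is not reproduced here.

```python
# Purely illustrative rule shape; all thresholds and steps are hypothetical.
from dataclasses import dataclass

@dataclass
class TacrolimusRule:
    target_low: float = 6.0    # hypothetical trough target, ng/mL
    target_high: float = 10.0  # hypothetical trough target, ng/mL

    def classify(self, trough_ng_ml: float) -> str:
        """Classify a measured trough level against the target range."""
        if trough_ng_ml < self.target_low:
            return "below range"
        if trough_ng_ml > self.target_high:
            return "above range"
        return "in range"

    def suggest(self, trough_ng_ml: float, current_dose_mg: float) -> float:
        """Suggest a daily dose; a clinician always reviews the suggestion."""
        level = self.classify(trough_ng_ml)
        if level == "below range":
            return round(current_dose_mg * 1.25, 1)  # hypothetical 25% step up
        if level == "above range":
            return round(current_dose_mg * 0.75, 1)  # hypothetical 25% step down
        return current_dose_mg

rule = TacrolimusRule()
print(rule.classify(4.2), rule.suggest(4.2, 4.0))  # below range 5.0
```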

  16. A game-theoretic framework for landmark-based image segmentation.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernus, Franjo; Vrtovec, Tomaz

    2012-09-01

    A novel game-theoretic framework for landmark-based image segmentation is presented. Landmark detection is formulated as a game, in which landmarks are players, landmark candidate points are strategies, and likelihoods that candidate points represent landmarks are payoffs, determined according to the similarity of image intensities and spatial relationships between the candidate points in the target image and their corresponding landmarks in images from the training set. The solution of the formulated game-theoretic problem is the equilibrium of candidate points that represent landmarks in the target image and is obtained by a novel iterative scheme that solves the segmentation problem in polynomial time. The object boundaries are finally extracted by applying dynamic programming to the optimal path searching problem between the obtained adjacent landmarks. The performance of the proposed framework was evaluated for segmentation of lung fields from chest radiographs and heart ventricles from cardiac magnetic resonance cross sections. The comparison to other landmark-based segmentation techniques shows that the results obtained by the proposed game-theoretic framework are highly accurate and precise in terms of mean boundary distance and area overlap. Moreover, the framework overcomes several shortcomings of the existing techniques, such as sensitivity to initialization and convergence to local optima.
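
    The game formulation can be made concrete with a small sketch: each landmark (player) selects one candidate point (strategy), and its payoff combines an appearance term with pairwise spatial-consistency terms given the other players' current choices. The iterative best-response loop below is a simplified stand-in for the paper's polynomial-time equilibrium scheme, with synthetic payoffs.

```python
# Sketch: landmarks as players, candidate points as strategies,
# iterated best response until no player can improve (an equilibrium).
import numpy as np

rng = np.random.default_rng(1)
n_landmarks, n_candidates = 4, 5

# appearance[i, a]: likelihood that candidate a looks like landmark i.
appearance = rng.random((n_landmarks, n_candidates))
# spatial[i, j, a, b]: consistency of candidates a, b with the learned
# relative geometry of landmarks i and j (symmetrized, synthetic here).
spatial = rng.random((n_landmarks, n_landmarks, n_candidates, n_candidates))
spatial = (spatial + spatial.transpose(1, 0, 3, 2)) / 2

choice = np.zeros(n_landmarks, dtype=int)  # initial strategies
for _ in range(20):
    changed = False
    for i in range(n_landmarks):
        payoffs = appearance[i].copy()
        for j in range(n_landmarks):
            if j != i:
                payoffs += spatial[i, j, :, choice[j]]
        best = int(np.argmax(payoffs))
        if best != choice[i]:
            choice[i], changed = best, True
    if not changed:  # no player can improve unilaterally
        break
print(choice)  # chosen candidate index per landmark
```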

  17. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results of game-theoretic analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain. We characterized these five scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
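
    As a toy version of the attacker-defender structure underlying such simulations, the sketch below sets up a two-player game over AMI threats grouped by the CIA categories and checks for pure-strategy Nash equilibria. The move sets and payoff numbers are illustrative assumptions, not NESCOR's documented scenarios.

```python
# Sketch: a small attacker-defender game with hypothetical payoffs.
import itertools

attacks = ["tamper_meter", "spoof_headend", "flood_network"]   # C, I, A threats
defenses = ["attestation", "mutual_auth", "rate_limiting"]
# (attacker_gain, defender_loss_avoided) per pairing; values are illustrative.
payoff = {
    ("tamper_meter", "attestation"): (1, 4),
    ("tamper_meter", "mutual_auth"): (3, 2),
    ("tamper_meter", "rate_limiting"): (4, 1),
    ("spoof_headend", "attestation"): (3, 2),
    ("spoof_headend", "mutual_auth"): (1, 4),
    ("spoof_headend", "rate_limiting"): (4, 1),
    ("flood_network", "attestation"): (4, 1),
    ("flood_network", "mutual_auth"): (3, 2),
    ("flood_network", "rate_limiting"): (1, 4),
}

def is_nash(a, d):
    """Neither player gains by deviating unilaterally."""
    atk_ok = all(payoff[(a, d)][0] >= payoff[(a2, d)][0] for a2 in attacks)
    def_ok = all(payoff[(a, d)][1] >= payoff[(a, d2)][1] for d2 in defenses)
    return atk_ok and def_ok

equilibria = [(a, d) for a, d in itertools.product(attacks, defenses)
              if is_nash(a, d)]
print(equilibria)  # may be empty: such games often need mixed strategies
```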

  18. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.
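
    In the spirit of the model above, though not its exact automaton, the sketch below advances a pair of discrete states with simple transition rules in which a "wiring" parameter shapes the post-spike recovery; the state sizes and rules are illustrative assumptions.

```python
# Sketch: a discrete-state, event-driven neuron-like automaton.
N = 16  # number of discrete states per variable (illustrative)

def step(v, u, stimulated, wiring_shift=3):
    """One update of a membrane-like state v and a recovery-like state u."""
    v += 2 if stimulated else 1          # input events advance the state faster
    threshold = N - 1 - (u % 4)          # recovery state modulates the threshold
    if v >= threshold:                   # reaching threshold -> fire and reset
        u = (u + wiring_shift) % N       # 'wiring' parameter shapes recovery
        return 0, u, True
    return v, u, False

v, u, spikes = 0, 0, []
for t in range(64):
    v, u, fired = step(v, u, stimulated=(t % 7 == 0))
    if fired:
        spikes.append(t)
print(spikes)  # spike times produced by the automaton
```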

  19. An Empirically-based Steady-state Friction Law and its Implications for Fault Stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S. B.; Di Toro, G.; Violay, M.

    2015-12-01

    Empirically-based rate-and-state friction laws (RSFL) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters (e.g. A-B = dτ/dlog(V), with τ and V the shear stress and slip rate, respectively) allow us to define the stability conditions of a fault. According to the RSFL, if A-B > 0, τ increases with V (rate-hardening behavior), resulting in unconditionally stable behavior; if A-B < 0, τ decreases with V (rate-weakening behavior), potentially resulting in unstable behavior leading to dynamic runaway. Given that τ at steady-state conditions also allows us to define a critical fault stiffness, the RSFL determine a condition of stability for faults as their stiffness approaches the critical value. However, the conditions of fault stability, determined by the critical stiffness under the assumption of either rate-weakening or rate-hardening behavior, might be restrictive, given that frictional properties change appreciably as a function of slip or slip rate. Moreover, the RSFL were determined from experiments conducted at sub-seismic slip rates (< 1 cm/s), and their extrapolation to earthquake deformation conditions remains questionable in light of the experimental evidence of large dynamic weakening at seismic slip rates and the plethora of slip events that characterize the seismic cycle. Here, we propose a modified RSFL based on a review of a large published and unpublished dataset of rock-friction experiments performed with different testing machines (rotary shear, biaxial, triaxial). The modified RSFL is valid at steady-state conditions from sub-seismic to seismic slip rates (0.1 μm/s
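
    For orientation, the sketch below evaluates the standard steady-state relation behind the quoted derivative A-B = dτ/dlog(V), namely τ_ss(V) = τ0 + (A - B) ln(V/V0), so the sign of A - B decides rate-hardening versus rate-weakening; the parameter values and the stiffness comparison are illustrative, not the paper's fitted law.

```python
# Sketch: standard steady-state rate-and-state friction and the usual
# stiffness-based stability check. All numbers are illustrative.
import math

def steady_state_tau(v, tau0=0.6, v0=1e-6, a=0.010, b=0.014):
    """Steady-state friction (normalized) at slip rate v (m/s); a, b are RSFL constants."""
    return tau0 + (a - b) * math.log(v / v0)

def stability(a, b, k, k_crit):
    """Rate-weakening (A-B < 0) faults go unstable once stiffness < critical."""
    if a - b >= 0:
        return "unconditionally stable (rate-hardening)"
    return "unstable: stick-slip" if k < k_crit else "conditionally stable"

for v in (1e-6, 1e-4, 1e-2, 1.0):  # sub-seismic to seismic slip rates
    print(f"V = {v:.0e} m/s -> tau_ss = {steady_state_tau(v):.3f}")
print(stability(a=0.010, b=0.014, k=0.5, k_crit=1.0))
```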

  20. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal