Science.gov

Sample records for empirically based theoretical

  1. Distributed optical fiber-based theoretical and empirical methods monitoring hydraulic engineering subjected to seepage velocity

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Tian, Shiguang; Cui, Shusheng; Yang, Meng; Wen, Zhiping; Xie, Wei

    2016-09-01

    In order to systematically investigate the general principle and method of monitoring seepage velocity in hydraulic engineering, theoretical analysis and physical experiments were implemented based on distributed fiber-optic temperature sensing (DTS) technology. To analyze the coupling between the seepage field and the temperature field in embankment dam or dike engineering, a simplified model was constructed to describe the coupling relationship of the two fields. Different arrangement schemes of optical fiber and measuring approaches of temperature were applied to the model. The inversion analysis idea was further used. A theoretical method of monitoring seepage velocity in hydraulic engineering was finally proposed. A new concept, the effective thermal conductivity, was proposed by analogy with the thermal conductivity coefficient in the transient hot-wire method. This new concept reflects the combined influence of heat conduction and seepage, and proved to be a promising basis for an empirical method of monitoring seepage velocity in hydraulic engineering.
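
    A minimal sketch of the transient hot-wire idea the abstract builds on: the late-time temperature rise of a line heat source grows linearly in ln(t), and the slope yields an effective thermal conductivity, which increases when seepage carries heat away. Function names, the heating power, and the data are hypothetical illustrations, not the authors' algorithm.

    ```python
    import numpy as np

    def effective_thermal_conductivity(t, temp_rise, q_per_length):
        """Fit the transient hot-wire relation dT = (q / (4*pi*lam)) * ln(t) + C.

        t            -- times since heating started (s)
        temp_rise    -- measured temperature rise of the heated fiber (K)
        q_per_length -- heating power per unit length of fiber (W/m)
        Returns the effective thermal conductivity lam_eff in W/(m*K).
        """
        slope, _ = np.polyfit(np.log(t), temp_rise, 1)
        return q_per_length / (4.0 * np.pi * slope)

    # Synthetic check: data generated with lam = 1.8 W/(m*K); under real seepage,
    # a larger lam_eff at the same fiber location indicates faster flow.
    t = np.linspace(10.0, 600.0, 60)
    dT = 2.0 + (5.0 / (4.0 * np.pi * 1.8)) * np.log(t)
    print(effective_thermal_conductivity(t, dT, 5.0))  # ~1.8
    ```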

  2. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence.

    PubMed

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-10-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education.

  3. A Review of Theoretical and Empirical Advancements

    ERIC Educational Resources Information Center

    Wang, Mo; Henkens, Kene; van Solinge, Hanna

    2011-01-01

    In this article, we review both theoretical and empirical advancements in retirement adjustment research. After reviewing and integrating current theories about retirement adjustment, we propose a resource-based dynamic perspective to apply to the understanding of retirement adjustment. We then review empirical findings that are associated with…

  4. Theoretical and empirical bases for dialect-neutral language assessment: contributions from theoretical and applied linguistics to communication disorders.

    PubMed

    Pearson, Barbara Zurer

    2004-02-01

    Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) syntax, semantics, and phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups.
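
    A sketch of the selection criteria in the last sentence, assuming per-item raw scores for each group are available: keep an item only when typically developing (TD) MAE and AAE speakers perform indistinguishably, while TD reliably differs from LI within both dialect groups. The data layout and significance thresholds are illustrative assumptions.

    ```python
    from scipy import stats

    def keep_item(td_mae, td_aae, li_mae, li_aae, alpha=0.05):
        """True if an item is dialect-neutral (criterion 1) and diagnostic (criterion 2)."""
        dialect_neutral = stats.ttest_ind(td_mae, td_aae).pvalue >= alpha
        diagnostic = (stats.ttest_ind(td_mae, li_mae).pvalue < alpha
                      and stats.ttest_ind(td_aae, li_aae).pvalue < alpha)
        return dialect_neutral and diagnostic
    ```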

  5. Outcome (Competency) Based Education: An Exploration of Its Origins, Theoretical Basis, and Empirical Evidence

    ERIC Educational Resources Information Center

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…

  6. Theoretical and Empirical Base for Implementation Components of Health-Promoting Schools

    ERIC Educational Resources Information Center

    Samdal, Oddrun; Rowling, Louise

    2011-01-01

    Purpose: Efforts to create a scientific base for the health-promoting school approach have so far not articulated a clear "Science of Delivery". There is thus a need for systematic identification of clearly operationalised implementation components. To address a next step in the refinement of the health-promoting schools' work, this paper sets out…

  7. Pathways from parental AIDS to child psychological, educational and sexual risk: developing an empirically-based interactive theoretical model.

    PubMed

    Cluver, Lucie; Orkin, Mark; Boyes, Mark E; Sherr, Lorraine; Makasi, Daphne; Nikelo, Joy

    2013-06-01

    Increasing evidence demonstrates negative psychological, health, and developmental outcomes for children associated with parental HIV/AIDS illness and death. However, little is known about how parental AIDS leads to negative child outcomes. This study used a structural equation modelling approach to develop an empirically-based theoretical model of interactive relationships between parental or primary caregiver AIDS-illness, AIDS-orphanhood and predicted intervening factors associated with children's psychological distress, educational access and sexual health. Cross-sectional data were collected in 2009-2011 from 6002 children aged 10-17 years in three provinces of South Africa using stratified random sampling. Comparison groups included children orphaned by AIDS, orphaned by other causes and non-orphans, and children whose parents or primary caregivers were unwell with AIDS, unwell with other causes or healthy. Participants reported on psychological symptoms, educational access, and sexual health risks, as well as hypothesized sociodemographic and intervening factors. In order to build an interactive theoretical model of multiple child outcomes, multivariate regression and structural equation models were developed for each individual outcome, and then combined into an overall model. Neither AIDS-orphanhood nor parental AIDS-illness was directly associated with psychological distress, educational access, or sexual health. Instead, significant indirect effects of AIDS-orphanhood and parental AIDS-illness were obtained on all measured outcomes. Child psychological, educational and sexual health risks share a common set of intervening variables including parental disability, poverty, community violence, stigma, and child abuse that together comprise chain effects. In all models, parental AIDS-illness had stronger effects and more risk pathways than AIDS-orphanhood, especially via poverty and parental disability. AIDS-orphanhood and parental AIDS-illness impact…
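
    A minimal sketch of the indirect-effect logic described above, reduced from the full structural equation model to a single mediator: regress the mediator (here, poverty) on the exposure, regress the outcome on both, and take the product of coefficients as the indirect effect. Variable names and data are hypothetical.

    ```python
    import numpy as np

    def indirect_effect(exposure, mediator, outcome):
        """Product-of-coefficients estimate of exposure -> mediator -> outcome."""
        X1 = np.column_stack([np.ones_like(exposure), exposure])
        a = np.linalg.lstsq(X1, mediator, rcond=None)[0][1]       # exposure -> mediator
        X2 = np.column_stack([np.ones_like(exposure), exposure, mediator])
        b = np.linalg.lstsq(X2, outcome, rcond=None)[0][2]        # mediator -> outcome
        return a * b

    rng = np.random.default_rng(0)
    n = 6002
    aids_illness = rng.integers(0, 2, n).astype(float)
    poverty = 0.5 * aids_illness + rng.normal(size=n)             # mediating pathway
    distress = 0.6 * poverty + rng.normal(size=n)                 # no direct path
    print(indirect_effect(aids_illness, poverty, distress))       # ~0.30
    ```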

  8. A theoretical and empirical investigation of nutritional label use.

    PubMed

    Drichoutis, Andreas C; Lazaridis, Panagiotis; Nayga, Rodolfo M; Kapsokefalou, Maria; Chryssochoidis, George

    2008-08-01

    Due in part to increasing diet-related health problems caused, among other factors, by obesity, nutritional labelling has been considered important, mainly because it can provide consumers with information that can be used to make informed and healthier food choices. Several studies have focused on the empirical perspective of nutritional label use. None of these studies, however, has focused on developing a theoretical economic model that would adequately describe nutritional label use based on a utility-theoretic framework. We attempt to fill this void by developing a simple theoretical model of nutritional label use, incorporating the time a consumer spends reading labels as part of the food choice process. The demand equations of the model are then empirically tested. Results suggest the significant role of several variables that flow directly from the model and which, to our knowledge, have not been used in any previous empirical work.

  9. Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process

    ERIC Educational Resources Information Center

    Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.

    2014-01-01

    In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…

  10. Defining Empirically Based Practice.

    ERIC Educational Resources Information Center

    Siegel, Deborah H.

    1984-01-01

    Provides a definition of empirically based practice, both conceptually and operationally. Describes a study of how research and practice were integrated in the graduate social work program at the School of Social Service Administration, University of Chicago. (JAC)

  11. An empirical evaluation of two theoretically-based hypotheses on the directional association between self-worth and hope.

    PubMed

    McDavid, Lindley; McDonough, Meghan H; Smith, Alan L

    2015-06-01

    Fostering self-worth and hope are important goals of positive youth development (PYD) efforts, yet intervention design is complicated by contrasting theoretical hypotheses regarding the directional association between these constructs. Therefore, within a longitudinal design we tested: (1) that self-worth predicts changes in hope (self-theory; Harter, 1999), and (2) that hope predicts changes in self-worth (hope theory; Snyder, 2002) over time. Youth (N = 321; mean age = 10.33 years) in a physical activity-based PYD program completed surveys 37-45 days prior to and on the second day and third-to-last day of the program. A latent variable panel model that included autoregressive and cross-lagged paths indicated that self-worth was a significant predictor of change in hope, but hope did not predict change in self-worth. Therefore, the directional association between self-worth and hope is better explained by self-theory, and PYD programs should aim to enhance perceptions of self-worth to build perceptions of hope.
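
    A compact sketch of the cross-lagged comparison tested above, with plain lagged regressions standing in for the latent-variable panel model: each Time 2 construct is regressed on both Time 1 constructs, and the two cross paths are compared. Names and data layout are illustrative.

    ```python
    import numpy as np

    def cross_lagged_paths(self_worth_t1, hope_t1, self_worth_t2, hope_t2):
        """Return (self-worth -> later hope, hope -> later self-worth) estimates."""
        X = np.column_stack([np.ones_like(hope_t1), hope_t1, self_worth_t1])
        sw_to_hope = np.linalg.lstsq(X, hope_t2, rcond=None)[0][2]
        X = np.column_stack([np.ones_like(self_worth_t1), self_worth_t1, hope_t1])
        hope_to_sw = np.linalg.lstsq(X, self_worth_t2, rcond=None)[0][2]
        return sw_to_hope, hope_to_sw
    ```

    A result like (0.30, 0.00) would mirror the paper's conclusion that self-worth predicts change in hope but not the reverse.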

  12. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research area is finding the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals through either pure theoretical analysis or testing with simulated data, which might be biased because real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance as baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324
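
    A worked sketch of the combination arithmetic behind TCAR: for integer coefficients (i, j, k) applied to the L1/L2/L5 carrier phases, the combined wavelength and a simple noise-amplification factor follow directly from the carrier frequencies. The sqrt(i² + j² + k²) factor assumes equal, independent phase noise in cycles on each carrier; this is a textbook illustration, not the paper's selection procedure.

    ```python
    C = 299_792_458.0                              # speed of light (m/s)
    F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS carrier frequencies (Hz)

    def combo_metrics(i, j, k):
        """Wavelength (m) and cycle-noise amplification of i*L1 + j*L2 + k*L5."""
        f = i * F1 + j * F2 + k * F5
        return C / f, (i**2 + j**2 + k**2) ** 0.5

    # The extra-wide-lane (0, 1, -1) has a ~5.86 m wavelength, so its ambiguity
    # is far easier to fix than the ~0.19 m L1 ambiguity.
    for combo in [(1, 0, 0), (1, -1, 0), (0, 1, -1)]:
        wl, noise = combo_metrics(*combo)
        print(combo, f"wavelength = {wl:.3f} m, noise x{noise:.2f}")
    ```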

  13. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research area is finding the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing studies select the signals through either pure theoretical analysis or testing with simulated data, which might be biased because real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance as baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of the baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.

  14. Semivolatile Organic Compounds in Homes: Strategies for Efficient and Systematic Exposure Measurement Based on Empirical and Theoretical Factors

    PubMed Central

    2014-01-01

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)—phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R² = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority. PMID:25488487
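
    A minimal sketch of the air-dust partitioning check reported above (predicting dust concentrations from measured air concentrations): on log scales, dust levels are regressed against the product of the gas-phase concentration and the octanol-air partition coefficient Koa. The exact regression form and data are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def fit_partitioning(c_air, koa, c_dust):
        """Regress log10(dust) on log10(air * Koa); return slope, intercept, R^2."""
        x = np.log10(c_air * koa)
        y = np.log10(c_dust)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r2 = 1.0 - resid.var() / y.var()
        return slope, intercept, r2
    ```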

  15. Semivolatile organic compounds in homes: strategies for efficient and systematic exposure measurement based on empirical and theoretical factors.

    PubMed

    Dodson, Robin E; Camann, David E; Morello-Frosch, Rachel; Brody, Julia G; Rudel, Ruthann A

    2015-01-06

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)—phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R² = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority.

  16. Competence and drug use: theoretical frameworks, empirical evidence and measurement.

    PubMed

    Lindenberg, C S; Solorzano, R; Kelley, M; Darrow, V; Gendrop, S C; Strickland, O

    1998-01-01

    Statistics show that use of harmful substances (alcohol, cigarettes, marijuana, cocaine) among women of childbearing age is widespread and serious. Numerous theoretical models and empirical studies have attempted to explain the complex factors that lead individuals to use drugs. The Social Stress Model of Substance Abuse [1] is one model developed to explain parameters that influence drug use. According to the model, the likelihood of an individual engaging in drug use is seen as a function of the stress level and the extent to which it is offset by stress modifiers such as social networks, social competencies, and resources. The variables of the denominator are viewed as interacting with each other to buffer the impact of stress [1]. This article focuses on one of the constructs in this model: that of competence. It presents a summary of theoretical and conceptual formulations for the construct of competence, a review of empirical evidence for the association of competence with drug use, and a description of the preliminary development of a multi-scale instrument designed to assess drug protective competence among low-income Hispanic childbearing women. Based upon theoretical and empirical studies, eight domains of drug protective competence were identified and conceptually defined. Using subscales from existing instruments with psychometric evidence for their validity and reliability, a multi-scale instrument was developed to assess drug protective competence. Hypothesis testing was used to assess construct validity. Four drug protective competence domains (social influence, sociability, self-worth, and control/responsibility) were found to be statistically associated with drug use behaviors. Although not statistically significant, expected trends were observed between drug use and the other four domains of drug protective competence (intimacy, nurturance, goal directedness, and spiritual directedness). Study limitations and suggestions for further psychometric testing…

  17. Gay identity, interpersonal violence, and HIV risk behaviors: an empirical test of theoretical relationships among a probability-based sample of urban men who have sex with men.

    PubMed

    Relf, Michael V; Huang, Bu; Campbell, Jacquelyn; Catania, Joe

    2004-01-01

    The highest absolute number of new HIV infections and AIDS cases still occurs among men who have sex with men (MSM). Numerous theoretical approaches have been used to understand HIV risk behaviors among MSM; however, no theoretical model examines sexual risk behaviors in the context of gay identity and interpersonal violence. Using a model-testing predictive correlational design, the theoretical relationships between childhood sexual abuse, adverse early life experiences, gay identity, substance use, battering, aversive emotions, HIV alienation, cue-to-action triggers, and HIV risk behaviors were empirically tested using confirmatory factor analysis and structural equation modeling. The relationships between these constructs are complex, yet childhood sexual abuse and gay identity were found to be theoretically associated with HIV risk behaviors. Also of importance, battering victimization was identified as a key mediating variable between childhood sexual abuse, gay identity, and adverse early life experiences and HIV risk behaviors among urban MSM.

  18. Empirical and theoretical analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan

    …structures evolve on a similar timescale to individual-level transmission; we investigated the process of transmission through a model population comprising social groups that follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power-law test algorithm, we developed a fast testing procedure using parallel computation.
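
    A sketch of the kind of procedure the last sentence refers to, under the assumption that the "widely accepted power law test" is the Clauset-style semi-parametric bootstrap: fit the exponent by maximum likelihood, measure the Kolmogorov-Smirnov distance, and compare against synthetic power-law samples, which are independent and therefore trivially parallelisable. Simplified in that xmin is held fixed rather than re-estimated per replicate.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def fit_alpha(x, xmin):
        """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin))."""
        x = x[x >= xmin]
        return 1.0 + x.size / np.log(x / xmin).sum()

    def ks_distance(x, xmin, alpha):
        x = np.sort(x[x >= xmin])
        cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
        cdf_emp = np.arange(1, x.size + 1) / x.size
        return np.abs(cdf_emp - cdf_model).max()

    def one_replicate(args):
        n, xmin, alpha, seed = args
        rng = np.random.default_rng(seed)
        synth = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))
        return ks_distance(synth, xmin, fit_alpha(synth, xmin))

    def power_law_pvalue(x, xmin, n_boot=1000, workers=8):
        alpha = fit_alpha(x, xmin)
        d_obs = ks_distance(x, xmin, alpha)
        with Pool(workers) as pool:
            d_synth = pool.map(one_replicate,
                               [(x.size, xmin, alpha, s) for s in range(n_boot)])
        return float(np.mean(np.array(d_synth) >= d_obs))  # p >= 0.1: plausible fit
    ```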

  19. Dissecting Situational Strength: Theoretical Analysis and Empirical Tests

    DTIC Science & Technology

    2012-09-01

    Dalal, Reeshad S. (George Mason University); Brooks, Charlie K. (Georgia Institute of Technology)

    Technical Report 1315, United States Army Research Institute. Research accomplished under contract for the Department of the Army by George Mason University.

  20. Developing Empirically Based Models of Practice.

    ERIC Educational Resources Information Center

    Blythe, Betty J.; Briar, Scott

    1985-01-01

    Over the last decade emphasis has shifted from theoretically based models of practice to empirically based models whose elements are derived from clinical research. These models are defined and a developing model of practice through the use of single-case methodology is examined. Potential impediments to this new role are identified. (Author/BL)

  1. Kinetics of solute adsorption at solid/solution interfaces: a theoretical development of the empirical pseudo-first and pseudo-second order kinetic rate equations, based on applying the statistical rate theory of interfacial transport.

    PubMed

    Rudzinski, Wladyslaw; Plazinski, Wojciech

    2006-08-24

    For practical applications of solid/solution adsorption processes, the kinetics of these processes is at least as essential as their features at equilibrium. Meanwhile, the general understanding of this kinetics and its theoretical description lag far behind the understanding and level of theoretical interpretation of adsorption equilibria in these systems. The Lagergren empirical equation, proposed at the end of the 19th century to describe the kinetics of solute sorption at solid/solution interfaces, has remained the most widely used kinetic equation. It has also been called the pseudo-first-order kinetic equation because it was intuitively associated with a model of one-site-occupancy adsorption kinetics governed by the rate of surface reaction. More recently, its generalization for two-site-occupancy adsorption was proposed and called the pseudo-second-order kinetic equation. However, the general use and wide applicability of these empirical equations for more than a century have not prompted a corresponding fundamental search for their theoretical origin. Here the first theoretical development of these equations is proposed, based on applying a new fundamental approach to the kinetics of interfacial transport called the Statistical Rate Theory. It is shown that these empirical equations are simplified forms of a more general equation developed here for the case when adsorption kinetics is governed by the rate of surface reactions. The features of that general equation are shown through exhaustive model investigations, and its applicability is tested by a quantitative analysis of experimental data reported in the literature.
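
    For concreteness, the two empirical rate laws in their integrated forms, fitted to uptake data: pseudo-first-order q(t) = qe(1 − exp(−k1 t)) and pseudo-second-order q(t) = qe² k2 t / (1 + qe k2 t). The synthetic data below are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def pfo(t, qe, k1):
        """Integrated pseudo-first-order (Lagergren) uptake."""
        return qe * (1.0 - np.exp(-k1 * t))

    def pso(t, qe, k2):
        """Integrated pseudo-second-order uptake."""
        return qe**2 * k2 * t / (1.0 + qe * k2 * t)

    t = np.linspace(0.5, 120.0, 40)                       # contact time (min)
    q_obs = pso(t, 2.5, 0.08) + 0.02 * np.random.default_rng(1).normal(size=t.size)

    for name, model in [("PFO", pfo), ("PSO", pso)]:
        params, _ = curve_fit(model, t, q_obs, p0=(1.0, 0.01))
        sse = float(((q_obs - model(t, *params)) ** 2).sum())
        print(name, params.round(3), f"SSE = {sse:.4f}")  # PSO should fit best here
    ```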

  2. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

    Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
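
    A bare-bones sketch of the linear impulse-response fit described above: the storm-to-quiet VER ratio is modeled as a lagged linear filter of a geomagnetic index, and the filter coefficients are recovered by least squares. The choice of index (ap here), lag count, and time step are assumptions for illustration.

    ```python
    import numpy as np

    def fit_impulse_response(ap_index, ver_ratio, n_lags=12):
        """Least-squares h such that ver_ratio[t] ~ sum_k h[k] * ap_index[t - k]."""
        X = np.array([ap_index[t - n_lags + 1 : t + 1][::-1]
                      for t in range(n_lags - 1, len(ap_index))])
        y = ver_ratio[n_lags - 1 :]
        h, *_ = np.linalg.lstsq(X, y, rcond=None)
        return h   # h[0]: instantaneous response; h[k]: response after k steps
    ```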

  3. Segmented crystalline scintillators: empirical and theoretical investigation of a high quantum efficiency EPID based on an initial engineering prototype CsI(Tl) detector.

    PubMed

    Sawant, Amit; Antonuk, Larry E; El-Mohri, Youcef; Zhao, Qihua; Wang, Yi; Li, Yixin; Du, Hong; Perna, Louis

    2006-04-01

    Modern-day radiotherapy relies on highly sophisticated forms of image guidance in order to implement increasingly conformal treatment plans and achieve precise dose delivery. One of the most important goals of such image guidance is to delineate the clinical target volume from surrounding normal tissue during patient setup and dose delivery, thereby avoiding dependence on surrogates such as bony landmarks. In order to achieve this goal, it is necessary to integrate highly efficient imaging technology, capable of resolving soft-tissue contrast at very low doses, within the treatment setup. In this paper we report on the development of one such modality, which comprises a nonoptimized, prototype electronic portal imaging device (EPID) based on a 40 mm thick, segmented crystalline CsI(Tl) detector incorporated into an indirect-detection active matrix flat panel imager (AMFPI). The segmented detector consists of a matrix of 160 x 160 optically isolated, crystalline CsI(Tl) elements spaced at 1016 µm pitch. The detector was coupled to an indirect detection-based active matrix array having a pixel pitch of 508 µm, with each detector element registered to 2 x 2 array pixels. The performance of the prototype imager was evaluated under very low-dose radiotherapy conditions and compared to that of a conventional megavoltage AMFPI based on a Lanex Fast-B phosphor screen. Detailed quantitative measurements were performed in order to determine the x-ray sensitivity, modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE). In addition, images of a contrast-detail phantom and an anthropomorphic head phantom were also acquired. The prototype imager exhibited approximately 22 times higher zero-frequency DQE (approximately 22%) compared to that of the conventional AMFPI (approximately 1%). The measured zero-frequency DQE was found to be lower than theoretical upper limits (approximately 27%) calculated from Monte Carlo simulations, which…
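
    A toy version of the zero-frequency DQE comparison quoted above: DQE(0) is the squared output signal-to-noise ratio divided by the squared input SNR, which for a Poisson-distributed input beam reduces to SNR_out² / N_incident. The pixel statistics below are invented to mirror the reported ~22% vs ~1% contrast.

    ```python
    def dqe_zero_frequency(mean_signal, signal_std, photons_incident):
        """DQE(0) = SNR_out^2 / SNR_in^2, with SNR_in^2 = N for Poisson input."""
        return (mean_signal / signal_std) ** 2 / photons_incident

    # Hypothetical per-pixel statistics at the same very low dose:
    print(dqe_zero_frequency(2200.0, 148.0, 1.0e3))  # segmented CsI(Tl): ~0.22
    print(dqe_zero_frequency(480.0, 152.0, 1.0e3))   # phosphor-screen AMFPI: ~0.01
    ```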

  4. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    In recent decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory of the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising a more secure theoretical base for contemporary psychoanalytic practice. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation.

  5. Whole-body cryotherapy: empirical evidence and theoretical perspectives

    PubMed Central

    Bleakley, Chris M; Bieuzen, François; Davison, Gareth W; Costello, Joseph T

    2014-01-01

    Whole-body cryotherapy (WBC) involves short exposures to air temperatures below −100°C. WBC is increasingly accessible to athletes, and is purported to enhance recovery after exercise and facilitate rehabilitation postinjury. Our objective was to review the efficacy and effectiveness of WBC using empirical evidence from controlled trials. We found ten relevant reports; the majority were based on small numbers of active athletes aged less than 35 years. Although WBC produces a large temperature gradient for tissue cooling, the relatively poor thermal conductivity of air prevents significant subcutaneous and core body cooling. There is weak evidence from controlled studies that WBC enhances antioxidant capacity and parasympathetic reactivation, and alters inflammatory pathways relevant to sports recovery. A series of small randomized studies found WBC offers improvements in subjective recovery and muscle soreness following metabolic or mechanical overload, but little benefit towards functional recovery. There is evidence from one study only that WBC may assist rehabilitation for adhesive capsulitis of the shoulder. There were no adverse events associated with WBC; however, studies did not seem to undertake active surveillance of predefined adverse events. Until further research is available, athletes should remain cognizant that less expensive modes of cryotherapy, such as local ice-pack application or cold-water immersion, offer comparable physiological and clinical effects to WBC. PMID:24648779

  6. Integrative Behavioral Couple Therapy: Theoretical Background, Empirical Research, and Dissemination.

    PubMed

    Roddy, McKenzie K; Nowlan, Kathryn M; Doss, Brian D; Christensen, Andrew

    2016-09-01

    Integrative Behavioral Couple Therapy (IBCT), developed by Drs. Andrew Christensen and Neil Jacobson, builds off the tradition of behavioral couple therapy by including acceptance strategies as key components of treatment. Results from a large randomized clinical trial of IBCT indicate that it yields large and significant gains in relationship satisfaction. Furthermore, these benefits have been shown to persist for at least 5 years after treatment for the average couple. Not only does IBCT positively impact relationship constructs such as satisfaction and communication, but the benefits of therapy extend to individual, co-parenting, and child functioning. Moreover, IBCT has been shown to operate through the putative mechanisms of improvements in emotional acceptance, behavior change, and communication. IBCT was chosen for nationwide training and dissemination through the Veteran Affairs Medical Centers. Furthermore, the principles of IBCT have been translated into a web-based intervention for distressed couples, OurRelationship.com. IBCT is continuing to evolve and grow as research and technologies allow for continued evaluation and dissemination of this well-supported theoretical model.

  7. [Attachment theory and eating disorders—theoretical and empirical issues].

    PubMed

    Józefik, Barbara

    2008-01-01

    The paper presents attachment theory in relation to eating disorders. In the first part, the classic concepts of anorexia and bulimia nervosa are discussed, taking into account the assumptions of Bowlby's model and those of his followers. In the second part, empirical data on anorexia and bulimia nervosa and attachment patterns are presented. The importance of methodological issues regarding the attachment model, particularly in eating disorders, is stressed. In conclusion, significant findings on the correlation of attachment patterns with eating disorders are indicated.

  8. Unifying Different Theories of Learning: Theoretical Framework and Empirical Evidence

    ERIC Educational Resources Information Center

    Phan, Huy Phuong

    2008-01-01

    The main aim of this research study was to test out a conceptual model encompassing the theoretical frameworks of achievement goals, study processing strategies, effort, and reflective thinking practice. In particular, it was postulated that the causal influences of achievement goals on academic performance are direct and indirect through study…

  9. Alternative Information Theoretic Measures of Television Messages: An Empirical Test.

    ERIC Educational Resources Information Center

    Danowski, James A.

    This research examines two information theoretic measures of media exposure within the same sample of respondents and examines their relative strengths in predicting self-reported aggression. The first measure is the form entropy (DYNUFAM) index of Watt and Krull, which assesses the structural and organizational properties of specific television…

  10. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framework is outlined for estimating social returns from research on and application of remote sensing. The approximate dollar magnitude of one particular application of remote sensing, namely estimates of corn, soybean, and wheat production, is given. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  11. Psychometric Test Theory and Cognitive Processes: A Theoretical Scrutiny and Empirical Research. Research Bulletin No. 57.

    ERIC Educational Resources Information Center

    Leino, Jarkko

    This report is the third in a series of research projects concerning abilities and performance processes, particularly in school mathematics. A theoretical scrutiny of traditional psychometric testing, cognitive processes, their interrelationships, and an empirical application of the theoretical considerations on the level of junior secondary…

  12. The ascent of man: Theoretical and empirical evidence for blatant dehumanization.

    PubMed

    Kteily, Nour; Bruneau, Emile; Waytz, Adam; Cotterill, Sarah

    2015-11-01

    Dehumanization is a central concept in the study of intergroup relations. Yet although theoretical and methodological advances in subtle, "everyday" dehumanization have progressed rapidly, blatant dehumanization remains understudied. The present research attempts to refocus theoretical and empirical attention on blatant dehumanization, examining when and why it provides explanatory power beyond subtle dehumanization. To accomplish this, we introduce and validate a blatant measure of dehumanization based on the popular depiction of evolutionary progress in the "Ascent of Man." We compare blatant dehumanization to established conceptualizations of subtle and implicit dehumanization, including infrahumanization, perceptions of human nature and human uniqueness, and implicit associations between ingroup-outgroup and human-animal concepts. Across 7 studies conducted in 3 countries, we demonstrate that blatant dehumanization is (a) more strongly associated with individual differences in support for hierarchy than subtle or implicit dehumanization, (b) uniquely predictive of numerous consequential attitudes and behaviors toward multiple outgroup targets, (c) predictive above prejudice, and (d) reliable over time. Finally, we show that blatant, but not subtle, dehumanization spikes immediately after incidents of real intergroup violence and strongly predicts support for aggressive actions like torture and retaliatory violence (after the Boston Marathon bombings and Woolwich attacks in England). This research extends theory on the role of dehumanization in intergroup relations and intergroup conflict and provides an intuitive, validated empirical tool to reliably measure blatant dehumanization.

  13. Adaptive evolution: evaluating empirical support for theoretical predictions

    PubMed Central

    Olson-Manning, Carrie F.; Wagner, Maggie R.; Mitchell-Olds, Thomas

    2013-01-01

    Adaptive evolution is shaped by the interaction of population genetics, natural selection and underlying network and biochemical constraints. Variation created by mutation, the raw material for evolutionary change, is translated into phenotypes by flux through metabolic pathways and by the topography and dynamics of molecular networks. Finally, the retention of genetic variation and the efficacy of selection depend on population genetics and demographic history. Emergent high-throughput experimental methods and sequencing technologies allow us to gather more evidence and to move beyond the theory in different systems and populations. Here we review the extent to which recent evidence supports long-established theoretical principles of adaptation. PMID:23154809

  14. Recent work on consciousness: philosophical, theoretical, and empirical.

    PubMed

    Churchland, P M; Churchland, P S

    1997-06-01

    Broad-spectrum philosophical resistance to physicalist accounts of conscious awareness has condensed around a single and clearly identified line of argument. Philosophical analysis and criticism of that line of argument has also begun to crystallize. The nature of that criticism coheres with certain theoretical ideas from cognitive neuroscience that attempt to address both the existence and the contents of consciousness. Also, experimental evidence has recently begun to emerge that will serve both to constrain and to inspire such theorizing. The present article attempts to summarize the situation.

  15. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for, and an estimate of, the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D summarizes the approach for the DOE-Richland Operations Office (RL) estimate of the cost for an M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal, which indicates that the current approach is still better than the alternative.

  16. A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test

    ERIC Educational Resources Information Center

    Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.

    2012-01-01

    Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…

  17. Social Experiences with Peers and High School Graduation: A Review of Theoretical and Empirical Research

    ERIC Educational Resources Information Center

    Veronneau, Marie-Helene; Vitaro, Frank

    2007-01-01

    This article reviews theoretical and empirical work on the relations between child and adolescent peer experiences and high school graduation. First, the different developmental models that guide research in this domain will be explained. Then, descriptions of peer experiences at the group level (peer acceptance/rejection, victimisation, and crowd…

  18. Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces

    ERIC Educational Resources Information Center

    Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong

    2010-01-01

    Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…
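
    Zisman's empirical equation is linear in the liquid surface tension, cos θ = 1 + b(γc − γLV). A short sketch of the standard Zisman-plot procedure: fit cos θ against γLV for a series of test liquids and extrapolate to cos θ = 1 to read off the critical surface tension γc. The contact-angle data are invented.

    ```python
    import numpy as np

    def critical_surface_tension(gamma_lv, cos_theta):
        """Zisman plot: fit cos(theta) = m * gamma + c and solve cos(theta) = 1."""
        m, c = np.polyfit(gamma_lv, cos_theta, 1)
        return (1.0 - c) / m

    gamma = np.array([22.1, 25.4, 28.9, 33.0, 40.2])   # liquid surface tensions (mN/m)
    cos_t = np.array([0.99, 0.93, 0.86, 0.78, 0.65])   # measured cos(contact angle)
    print(critical_surface_tension(gamma, cos_t))       # gamma_c ~ 21.6 mN/m
    ```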

  19. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  20. [Masked orthographic priming in the recognition of written words: empirical data and theoretical prospects].

    PubMed

    Robert, Christelle

    2009-12-01

    The present paper reviews the main studies of masked orthographic priming effects in written word recognition. Empirical data accumulated over the last two decades are reviewed with respect to three factors that modulate orthographic priming effects: prime lexicality, prime duration, and the orthographic neighbourhood of the target and/or prime. The theoretical implications of these data are discussed in light of the two major frameworks of visual word recognition, serial search and interactive activation. As a whole, the interactive activation hypothesis seems more appropriate to account for the empirical data.

  1. Empirically Based Play Interventions for Children

    ERIC Educational Resources Information Center

    Reddy, Linda A., Ed.; Files-Hall, Tara M., Ed.; Schaefer, Charles E., Ed.

    2005-01-01

    "Empirically Based Play Interventions for Children" is a compilation of innovative, well-designed play interventions, presented for the first time in one text. Play therapy is the oldest and most popular form of child therapy in clinical practice and is widely considered by practitioners to be uniquely responsive to children's developmental needs.…

  2. Color and psychological functioning: a review of theoretical and empirical work.

    PubMed

    Elliot, Andrew J

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application.

  3. Color and psychological functioning: a review of theoretical and empirical work

    PubMed Central

    Elliot, Andrew J.

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application. PMID:25883578

  4. A review of the nurtured heart approach to parenting: evaluation of its theoretical and empirical foundations.

    PubMed

    Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E

    2013-09-01

    The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of any children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically.

  5. Gatekeeper Training for Suicide Prevention: A Theoretical Model and Review of the Empirical Literature

    DTIC Science & Technology

    2015-01-01

    mental health interventions, screening with standardized instruments, restricted access to lethal means, and coping skills/self-referral training...on maladaptive coping regarding suicide intervention than the control group, but this effect was only true for females. Thus, sex moderated the...effect of training on maladaptive coping mechanisms. In contrast, a quasi-experimental…

  6. Theoretical and Empirical Equations of State for Nitrogen Gas at High Pressure and Temperature

    DTIC Science & Technology

    1981-09-01

    probably in the gas phase. Otherwise, there would not be evidence of an exponential dependence of pressure on the burning rate. In view of the...the energy of the products formed. The products formed depend on the pressure, the temperature, and the composition of the propellant gas. Thus, the... (Technical Report ARLCD-TR-81029)
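
    To give the subject matter a concrete form (an assumption on my part: covolume-type corrections are the standard empirical treatment for propellant gases at high loading density), here is the Abel-Noble equation of state p = ρRT/(1 − bρ) next to the ideal-gas law. The covolume value is a round illustrative number, not taken from the report.

    ```python
    R_N2 = 296.8    # specific gas constant of N2 (J/(kg*K))
    B = 1.0e-3      # covolume (m^3/kg), illustrative

    def p_ideal(rho, T):
        """Ideal gas: p = rho * R * T."""
        return rho * R_N2 * T

    def p_abel_noble(rho, T):
        """Abel-Noble: p = rho * R * T / (1 - b * rho)."""
        return rho * R_N2 * T / (1.0 - B * rho)

    # Near-ambient densities agree; at chamber-like densities they diverge sharply.
    for rho in (1.0, 100.0, 300.0):   # kg/m^3
        print(rho, p_ideal(rho, 2500.0) / 1e6, p_abel_noble(rho, 2500.0) / 1e6)  # MPa
    ```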

  7. Dignity in the care of older people – a review of the theoretical and empirical literature

    PubMed Central

    Gallagher, Ann; Li, Sarah; Wainwright, Paul; Jones, Ian Rees; Lee, Diana

    2008-01-01

    Background: Dignity has become a central concern in UK health policy in relation to older and vulnerable people. The empirical and theoretical literature relating to dignity is extensive and as likely to confound and confuse as to clarify the meaning of dignity for nurses in practice. The aim of this paper is to critically examine the literature and to address the following questions: What does dignity mean? What promotes and diminishes dignity? And how might dignity be operationalised in the care of older people? This paper critically reviews the theoretical and empirical literature relating to dignity and clarifies the meaning and implications of dignity in relation to the care of older people. If nurses are to provide dignified care, clarification is an essential first step. Methods: This is a review article critically examining papers reporting theoretical perspectives and empirical studies relating to dignity. The following databases were searched: Assia, BHI, CINAHL, Social Services Abstracts, IBSS, Web of Knowledge Social Sciences Citation Index and Arts & Humanities Citation Index, supplemented by location of books and chapters in the philosophy literature. An analytical approach was adopted to the publications reviewed, focusing on the objectives of the review. Results and discussion: We review a range of theoretical and empirical accounts of dignity and identify key dignity-promoting factors evident in the literature, including staff attitudes and behaviour; environment; culture of care; and the performance of specific care activities. Although there is scope to learn more about cultural aspects of dignity, we know a good deal about dignity in care in general terms. Conclusion: We argue that what is required is to provide sufficient support and education to help nurses understand dignity, and adequate resources to operationalise dignity in their everyday practice. Using the themes identified from our review, we offer proposals for the direction of future research. PMID:18620561

  8. Conceptual and empirical problems with game theoretic approaches to language evolution

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.

    2014-01-01

    The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions. PMID:24678305

  9. Modelling drying kinetics of thyme (Thymus vulgaris L.): theoretical and empirical models, and neural networks.

    PubMed

    Rodríguez, J; Clemente, G; Sanjuán, N; Bon, J

    2014-01-01

    The drying kinetics of thyme was analyzed under different conditions: air temperatures between 40°C and 70°C, and an air velocity of 1 m/s. A theoretical diffusion model and eight different empirical models were fitted to the experimental data. From the theoretical model application, the effective diffusivity per unit area of the thyme was estimated (between 3.68 × 10⁻⁵ and 2.12 × 10⁻⁴ s⁻¹). The temperature dependence of the effective diffusivity was described by the Arrhenius relationship with an activation energy of 49.42 kJ/mol. For each empirical model, the dependence of the parameters on the drying temperature was also determined, yielding equations that allow the evolution of the moisture content to be estimated at any temperature in the established range. Furthermore, artificial neural networks were developed and compared with the theoretical and empirical models using the percentage of relative errors and the explained variance. The artificial neural networks were found to be more accurate predictors of moisture evolution, with VAR ≥ 99.3% and ER ≤ 8.7%.
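
    A compact sketch of the empirical side of this comparison, assuming the Page thin-layer model (a common member of such model sets) as the example: fit MR(t) = exp(−k tⁿ) at each air temperature, then describe k(T) with an Arrhenius law k = k0 exp(−Ea/(R T)). Parameters and data are invented; only the activation-energy value echoes the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def page(t, k, n):
        """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
        return np.exp(-k * t**n)

    def arrhenius(T, k0, Ea):
        """k(T) = k0 * exp(-Ea / (R * T)), with T in kelvin and Ea in J/mol."""
        return k0 * np.exp(-Ea / (8.314 * T))

    t = np.linspace(0.1, 5.0, 30)                            # drying time (h)
    fitted = []
    for T_celsius in (40.0, 50.0, 60.0, 70.0):
        T = T_celsius + 273.15
        mr = page(t, arrhenius(T, 2.0e7, 49_420.0), 1.1)     # synthetic drying curves
        (k_fit, _n_fit), _ = curve_fit(page, t, mr, p0=(0.5, 1.0))
        fitted.append((T, k_fit))

    T_arr, k_arr = map(np.array, zip(*fitted))
    (_k0, Ea), _ = curve_fit(arrhenius, T_arr, k_arr, p0=(1.0e7, 5.0e4))
    print(f"Ea = {Ea / 1000:.1f} kJ/mol")                    # recovers ~49.4
    ```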

  10. Why It Is Hard to Find Genes Associated With Social Science Traits: Theoretical and Empirical Considerations

    PubMed Central

    Lee, James J.; Benjamin, Daniel J.; Beauchamp, Jonathan P.; Glaeser, Edward L.; Borst, Gregoire; Pinker, Steven; Laibson, David I.

    2013-01-01

    Objectives. We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. Methods. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher’s geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Results. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. Conclusions. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies. PMID:23927501

  11. Uncovering curvilinear relationships between conscientiousness and job performance: how theoretically appropriate measurement makes an empirical difference.

    PubMed

    Carter, Nathan T; Dalal, Dev K; Boyce, Anthony S; O'Connell, Matthew S; Kung, Mei-Chuan; Delgado, Kristin M

    2014-07-01

    The personality trait of conscientiousness has seen considerable attention from applied psychologists due to its efficacy for predicting job performance across performance dimensions and occupations. However, recent theoretical and empirical developments have questioned the assumption that more conscientiousness always results in better job performance, suggesting a curvilinear link between the 2. Despite these developments, the results of studies directly testing the idea have been mixed. Here, we propose this link has been obscured by another pervasive assumption known as the dominance model of measurement: that higher scores on traditional personality measures always indicate higher levels of conscientiousness. Recent research suggests dominance models show inferior fit to personality test scores as compared to ideal point models that allow for curvilinear relationships between traits and scores. Using data from 2 different samples of job incumbents, we show the rank-order changes that result from using an ideal point model expose a curvilinear link between conscientiousness and job performance 100% of the time, whereas results using dominance models show mixed results, similar to the current state of the literature. Finally, with an independent cross-validation sample, we show that selection based on predicted performance using ideal point scores results in more favorable objective hiring outcomes. Implications for practice and future research are discussed.

  12. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological models of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.

  13. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulations of antioxidants.
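
    The allometric scaling laws invoked in the model follow the canonical Kleiber form (quoted here as the standard relation, not necessarily the paper's exact formulation):

        B = B_0 M^{3/4}

    where B is whole-organism metabolic rate, M is body mass, and B_0 a normalization constant; energy conservation then partitions B among maintenance, growth, and the other life history traits whose tradeoffs the model analyzes.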

  14. Nonparametric Bayes Factors Based On Empirical Likelihood Ratios

    PubMed Central

    Vexler, Albert; Deng, Wei; Wilding, Gregory E.

    2012-01-01

    Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this article, we propose and examine Bayes factor (BF) methods that are derived via the EL ratio approach. Following Kass & Wasserman [10], we consider Bayes factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test. PMID:23180904
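
    For background, the empirical likelihood ratio underlying such procedures has the standard Owen-style form: for observations X_1, ..., X_n and a mean parameter \theta (textbook notation, not the paper's),

        R(\theta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i X_i = \theta \Big\}

    and the proposed Bayes factors are built from ratios of such nonparametric likelihoods rather than from a parametric family.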

  15. Scientific thinking in young children: theoretical advances, empirical research, and policy implications.

    PubMed

    Gopnik, Alison

    2012-09-28

    New theoretical ideas and empirical research show that very young children's learning and thinking are strikingly similar to much learning and thinking in science. Preschoolers test hypotheses against data and make causal inferences; they learn from statistics and informal experimentation, and from watching and listening to others. The mathematical framework of probabilistic models and Bayesian inference can describe this learning in precise ways. These discoveries have implications for early childhood education and policy. In particular, they suggest both that early childhood experience is extremely important and that the trend toward more structured and academic early childhood programs is misguided.

  16. SAGE II/Umkehr ozone comparisons and aerosols effects: An empirical and theoretical study. Final report

    SciTech Connect

    Newchurch, M.

    1997-09-15

    The objectives of this research were to: (1) examine empirically the aerosol effect on Umkehr ozone profiles using SAGE II aerosol and ozone data; (2) examine theoretically the aerosol effect on Umkehr ozone profiles; (3) examine the differences between SAGE II ozone profiles and both old- and new-format Umkehr ozone profiles for ozone-trend information; (4) reexamine SAGE I-Umkehr ozone differences with the most recent version of SAGE I data; and (5) contribute to the SAGE II science team.

  17. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  18. An empirical comparison of information-theoretic selection criteria for multivariate behavior genetic models.

    PubMed

    Markon, Kristian E; Krueger, Robert F

    2004-11-01

    Information theory provides an attractive basis for statistical inference and model selection. However, little is known about the relative performance of different information-theoretic criteria in covariance structure modeling, especially in behavioral genetic contexts. To explore these issues, information-theoretic fit criteria were compared with regard to their ability to discriminate between multivariate behavioral genetic models under various model, distribution, and sample size conditions. Results indicate that performance depends on sample size, model complexity, and distributional specification. The Bayesian Information Criterion (BIC) is more robust to distributional misspecification than Akaike's Information Criterion (AIC) under certain conditions, and outperforms AIC in larger samples and when comparing more complex models. An approximation to the Minimum Description Length (MDL; Rissanen, J. (1996). IEEE Transactions on Information Theory 42:40-47, Rissanen, J. (2001). IEEE Transactions on Information Theory 47:1712-1717) criterion, involving the empirical Fisher information matrix, exhibits variable patterns of performance due to the complexity of estimating Fisher information matrices. Results indicate that a relatively new information-theoretic criterion, Draper's Information Criterion (DIC; Draper, 1995), which shares features of the Bayesian and MDL criteria, performs similarly to or better than BIC. Results emphasize the importance of further research into theory and computation of information-theoretic criteria.
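
    For reference, the two most familiar criteria compared in this study have the standard forms (textbook definitions, not study-specific):

        \mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n

    where \hat{L} is the maximized likelihood, k the number of free parameters, and n the sample size; BIC's complexity penalty grows with n, consistent with its reported advantage in larger samples and when comparing more complex models.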

  19. Common liability to addiction and “gateway hypothesis”: Theoretical, empirical and evolutionary perspective

    PubMed Central

    Vanyukov, Michael M.; Tarter, Ralph E.; Kirillova, Galina P.; Kirisci, Levent; Reynolds, Maureen D.; Kreek, Mary Jeanne; Conway, Kevin P.; Maher, Brion S.; Iacono, William G.; Bierut, Laura; Neale, Michael C.; Clark, Duncan B.; Ridenour, Ty A.

    2013-01-01

    Background Two competing concepts address the development of involvement with psychoactive substances: the “gateway hypothesis” (GH) and common liability to addiction (CLA). Method The literature on theoretical foundations and empirical findings related to both concepts is reviewed. Results The data suggest that drug use initiation sequencing, the core GH element, is variable and opportunistic rather than uniform and developmentally deterministic. The association between risks for use of different substances, if any, can be more readily explained by common underpinnings than by specific staging. In contrast, the CLA concept is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions. This commonality has identifiable neurobiological substrate and plausible evolutionary explanations. Conclusions Whereas the “gateway” hypothesis does not specify mechanistic connections between “stages”, and does not extend to the risks for addictions, the concept of common liability to addictions incorporates sequencing of drug use initiation as well as extends to related addictions and their severity, provides a parsimonious explanation of substance use and addiction co-occurrence, and establishes a theoretical and empirical foundation to research in etiology, quantitative risk and severity measurement, as well as targeted non-drug-specific prevention and early intervention. PMID:22261179

  20. Linear regression calibration: theoretical framework and empirical results in EPIC, Germany.

    PubMed

    Kynast-Wolf, Gisela; Becker, Nikolaus; Kroke, Anja; Brandstetter, Birgit R; Wahrendorf, Jürgen; Boeing, Heiner

    2002-01-01

    Large-scale dietary assessment instruments are usually based on the food frequency technique and therefore have to be tailored to the populations involved with respect to mode of application and the food items inquired about. In multicenter studies with different populations, the direct comparability of dietary data is therefore a challenge because each local dietary assessment tool might have its specific measurement error. Thus, for risk analysis the direct use of dietary measurements across centers requires a common reference. For example, in the European prospective cohort study EPIC (European Prospective Investigation into Cancer and Nutrition) a 24-hour recall was chosen to serve as such a reference instrument, based on a highly standardized computer-assisted interview (EPIC-SOFT). The 24-hour recall was applied to a representative subset of EPIC participants in all centers. The theoretical framework for combining multicenter dietary information was previously published in several papers and is called linear regression calibration. It is based on a linear regression of the food frequency questionnaire to the reference. The regression coefficients describe the absolute and proportional scaling bias of the questionnaire with the 24-hour recall taken as reference. This article describes the statistical basis of the calibration approach and presents first empirical results of its application to fruit, cereals and meat consumption in EPIC Germany, represented by the two EPIC centers, Heidelberg and Potsdam. It was found that fruit could be measured well by the questionnaire in both centers (λ̂ = 0.98 (males) and λ̂ = 0.95 (females) in Heidelberg, and λ̂ = 0.86 (males) and λ̂ = 0.7 (females) in Potsdam), cereals less well (λ̂ = 0.53 (males) and λ̂ = 0.4 (females) in Heidelberg, and λ̂ = 0.53 (males) and λ̂ = 0.44 (females) in Potsdam), and that the assessment of meat (λ̂ = 0.72 (males) and
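
    The calibration regression sketched here takes the generic measurement-error form (notation assumed for illustration, not taken from the paper): with R_i the 24-hour recall reference and Q_i the questionnaire measurement for participant i,

        R_i = \alpha + \lambda Q_i + \varepsilon_i

    where \alpha captures the absolute scaling bias and \lambda the proportional scaling bias (\lambda = 1 meaning no proportional bias); the λ̂ values quoted above are center- and sex-specific estimates of this coefficient.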

  1. Evolution of the empirical and theoretical foundations of eyewitness identification reform.

    PubMed

    Clark, Steven E; Moreland, Molly B; Gronlund, Scott D

    2014-04-01

    Scientists in many disciplines have begun to raise questions about the evolution of research findings over time (Ioannidis in Epidemiology, 19, 640-648, 2008; Jennions & Møller in Proceedings of the Royal Society, Biological Sciences, 269, 43-48, 2002; Mullen, Muellerleile, & Bryan in Personality and Social Psychology Bulletin, 27, 1450-1462, 2001; Schooler in Nature, 470, 437, 2011), since many phenomena exhibit decline effects: reductions in the magnitudes of effect sizes as empirical evidence accumulates. The present article examines empirical and theoretical evolution in eyewitness identification research. For decades, the field has held that there are identification procedures that, if implemented by law enforcement, would increase eyewitness accuracy, either by reducing false identifications, with little or no change in correct identifications, or by increasing correct identifications, with little or no change in false identifications. Despite the durability of this no-cost view, it is unambiguously contradicted by data (Clark in Perspectives on Psychological Science, 7, 238-259, 2012a; Clark & Godfrey in Psychonomic Bulletin & Review, 16, 22-42, 2009; Clark, Moreland, & Rush, 2013; Palmer & Brewer in Law and Human Behavior, 36, 247-255, 2012), raising questions as to how the no-cost view became well-accepted and endured for so long. Our analyses suggest that (1) seminal studies produced, or were interpreted as having produced, the no-cost pattern of results; (2) a compelling theory was developed that appeared to account for the no-cost pattern; (3) empirical results changed over the years, and subsequent studies did not reliably replicate the no-cost pattern; and (4) the no-cost view survived despite the accumulation of contradictory empirical evidence. Theories of memory that were ruled out by early data now appear to be supported by data, and the theory developed to account for early data now appears to be incorrect.

  2. Issues and Controversies that Surround Recent Texts on Empirically Supported and Empirically Based Treatments

    ERIC Educational Resources Information Center

    Paul, Howard A.

    2004-01-01

    Since the 1993 APA task force of the Society of Clinical Psychology developed guidelines to apply data-based psychology to the identification of effective psychotherapy, there has been an increasing number of texts focussing on Empirically based Psychotherapy and Empirically Supported Treatments. This manuscript examines recent key texts and…

  3. Agriculture and deforestation in the tropics: a critical theoretical and empirical review.

    PubMed

    Benhin, James K A

    2006-02-01

    Despite the important role that tropical forests play in human existence, their depletion, especially in the developing world, continues relentlessly. Agriculture has been cited as the major cause of this depletion. This paper discusses two main theoretical underpinnings for the role of agriculture in tropical deforestation: first, the forest biomass as an input in agricultural production, and second, the competition between agriculture and forestry underlined by their relative marginal benefits. These are supported by empirical evidence from selected countries in Africa and South America. The paper suggests the need to find a win-win situation to control the spate of tropical deforestation. This may imply improved technologies in the agriculture sector of the developing world, which would both increase agricultural production and help control the use of tropical forest as an input in agricultural production.

  4. How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching.

    PubMed

    Wolbring, Tobias; Riordan, Patrick

    2016-05-01

    Plenty of studies show that the physical appearance of a person affects a variety of outcomes in everyday life. However, due to an incomplete theoretical explication and empirical problems in disentangling different beauty effects, it is unclear which mechanisms are at work. To clarify how beauty works we present explanations from evolutionary theory and expectation states theory and show where both perspectives differ and where interlinkage appears promising. Using students' evaluations of teaching we find observational and experimental evidence for the different causal pathways of physical attractiveness. First, independent raters strongly agree over the physical attractiveness of a person. Second, attractive instructors receive better student ratings. Third, students attend classes of attractive instructors more frequently - even after controlling for teaching quality. Fourth, we find no evidence that attractiveness effects become stronger if rater and ratee are of the opposite sex. Finally, the beauty premium turns into a penalty if an attractive instructor falls short of students' expectations.

  5. Enhanced FMAM based on empirical kernel map.

    PubMed

    Wang, Min; Chen, Songcan

    2005-05-01

    The existing morphological auto-associative memory models based on the morphological operations, typically including the morphological auto-associative memories (auto-MAM) proposed by Ritter et al. and our fuzzy morphological auto-associative memories (auto-FMAM), have many attractive advantages such as unlimited storage capacity, one-shot recall speed and good tolerance to single erosive or dilative noise. However, they suffer from extreme vulnerability to mixed erosive-dilative noise, resulting in great degradation of recall performance. To overcome this shortcoming, we focus on FMAM and propose an enhanced FMAM (EFMAM) based on the empirical kernel map. Although simple, EFMAM significantly improves on auto-FMAM with respect to recognition accuracy under hybrid noise and computational effort. Experiments conducted on thumbnail-sized faces (28 x 23 and 14 x 11) scaled from the ORL database show average accuracies of 92%, 90%, and 88% with 40 classes under 10%, 20%, and 30% randomly generated hybrid noise, respectively, far higher than auto-FMAM (67%, 46%, 31%) under the same noise levels.
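
    In generic terms, an empirical kernel map represents each input by its vector of kernel evaluations against a fixed reference set; a minimal sketch with an RBF kernel (a generic illustration, not the paper's FMAM-specific construction):

        import numpy as np

        def empirical_kernel_map(X_ref, X, gamma=1.0):
            # Map each row x of X to (k(x, r_1), ..., k(x, r_m)) over the
            # reference set X_ref, here using an RBF kernel k.
            sq_dists = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-gamma * sq_dists)

        rng = np.random.default_rng(0)
        X_ref = rng.normal(size=(5, 3))     # m = 5 reference patterns, 3 features
        features = empirical_kernel_map(X_ref, X_ref)
        print(features.shape)               # (5, 5): one kernel feature per reference

    The resulting feature matrix would then serve as input to the downstream associative memory; the specific kernel and its integration with the morphological recall follow the paper.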

  6. Coaching and guidance with patient decision aids: A review of theoretical and empirical evidence

    PubMed Central

    2013-01-01

    Background Coaching and guidance are structured approaches that can be used within or alongside patient decision aids (PtDAs) to facilitate the process of decision making. Coaching is provided by an individual, and guidance is embedded within the decision support materials. The purpose of this paper is to: a) present updated definitions of the concepts “coaching” and “guidance”; b) present an updated summary of current theoretical and empirical insights into the roles played by coaching/guidance in the context of PtDAs; and c) highlight emerging issues and research opportunities in this aspect of PtDA design. Methods We identified literature published since 2003 on shared decision making theoretical frameworks inclusive of coaching or guidance. We also conducted a sub-analysis of randomized controlled trials included in the 2011 Cochrane Collaboration Review of PtDAs with search results updated to December 2010. The sub-analysis was conducted on the characteristics of coaching and/or guidance included in any trial of PtDAs and trials that allowed the impact of coaching and/or guidance with PtDA to be compared to another intervention or usual care. Results Theoretical evidence continues to justify the use of coaching and/or guidance to better support patients in the process of thinking about a decision and in communicating their values/preferences with others. In 98 randomized controlled trials of PtDAs, 11 trials (11.2%) included coaching and 63 trials (64.3%) provided guidance. Compared to usual care, coaching provided alongside a PtDA improved knowledge and decreased mean costs. The impact on some other outcomes (e.g., participation in decision making, satisfaction, option chosen) was more variable, with some trials showing positive effects and other trials reporting no differences. For values-choice agreement, decisional conflict, adherence, and anxiety there were no differences between groups. None of these outcomes were worse when patients were exposed

  7. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    NASA Astrophysics Data System (ADS)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

    The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93, based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While, in general, all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models, as well as differences arising from the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters to clay content, and subsequent model application for prediction of measured isotherms, showed promise for the majority of investigated soils, for soils with distinctly kaolinitic or smectitic clay mineralogy the predicted isotherms did not closely match the measurements.

  8. The complexities of defining optimal sleep: empirical and theoretical considerations with a special emphasis on children.

    PubMed

    Blunden, Sarah; Galland, Barbara

    2014-10-01

    The main aim of this paper is to consider relevant theoretical and empirical factors defining optimal sleep, and assess the relative importance of each in developing a working definition for, or guidelines about, optimal sleep, particularly in children. We consider whether optimal sleep is an issue of sleep quantity or of sleep quality. Sleep quantity is discussed in terms of duration, timing, variability and dose-response relationships. Sleep quality is explored in relation to continuity, sleepiness, sleep architecture and daytime behaviour. Potential limitations of sleep research in children are discussed, specifically the loss of research precision inherent in sleep deprivation protocols involving children. We discuss which outcomes are the most important to measure. We consider the notion that insufficient sleep may be a totally subjective finding, is impacted by the age of the reporter, driven by socio-cultural patterns and sleep-wake habits, and that, in some individuals, the driver for insufficient sleep can be viewed in terms of a cost-benefit relationship, curtailing sleep in order to perform better while awake. We conclude that defining optimal sleep is complex. The only method of capturing this elusive concept may be by somnotypology, taking into account duration, quality, age, gender, race, culture, the task at hand, and an individual's position in both sleep-alert and morningness-eveningness continuums. At the experimental level, a unified approach by researchers to establish standardized protocols to evaluate optimal sleep across paediatric age groups is required.

  9. Solubility of caffeine from green tea in supercritical CO2: a theoretical and empirical approach.

    PubMed

    Gadkari, Pravin Vasantrao; Balaraman, Manohar

    2015-12-01

    Decaffeination of fresh green tea was carried out with supercritical CO2 in the presence of ethanol as a co-solvent. The solubility of caffeine in supercritical CO2 varied from 44.19 × 10⁻⁶ to 149.55 × 10⁻⁶ (mole fraction) over a pressure and temperature range of 15 to 35 MPa and 313 to 333 K, respectively. The maximum solubility of caffeine was obtained at 25 MPa and 323 K. Experimental solubility data were correlated with the theoretical equation of state models Peng-Robinson (PR), Soave-Redlich-Kwong (SRK), and Redlich-Kwong (RK). The RK model regressed the experimental data with 15.52% average absolute relative deviation (AARD). In contrast, the Gordillo empirical model gave the best fit to the experimental data, with only 0.96% AARD. Under supercritical conditions, the solubility of caffeine in the tea matrix was lower than the solubility of pure caffeine. Further, the solubility of caffeine in supercritical CO2 was compared with the solubility of pure caffeine in conventional solvents; a maximum solubility of 90 × 10⁻³ mole fraction was obtained with chloroform.
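
    Among the equations of state named above, the Redlich-Kwong model has the standard form (textbook expression; the paper's mixing rules and fitted parameters are not reproduced here):

        P = \frac{RT}{V_m - b} - \frac{a}{\sqrt{T}\,V_m(V_m + b)}

    with molar volume V_m and substance-specific constants a and b derived from critical properties; PR and SRK modify mainly the attractive (second) term.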

  10. On the impact of empirical and theoretical star formation laws on galaxy formation

    NASA Astrophysics Data System (ADS)

    Lagos, Claudia Del P.; Lacey, Cedric G.; Baugh, Carlton M.; Bower, Richard G.; Benson, Andrew J.

    2011-09-01

    We investigate the consequences of applying different star formation laws in the galaxy formation model GALFORM. Three broad star formation laws are implemented: the empirical relations of Kennicutt and Schmidt and Blitz & Rosolowsky and the theoretical model of Krumholz, McKee & Tumlinson. These laws have no free parameters once calibrated against observations of the star formation rate (SFR) and gas surface density in nearby galaxies. We start from published models, and investigate which observables are sensitive to a change in the star formation law, without altering any other model parameters. We show that changing the star formation law (i) does not significantly affect either the star formation history of the universe or the galaxy luminosity functions in the optical and near-infrared, due to an effective balance between the quiescent and burst star formation modes, (ii) greatly affects the cold gas contents of galaxies and (iii) changes the location of galaxies in the SFR versus stellar mass plane, so that a second sequence of 'passive' galaxies arises, in addition to the known 'active' sequence. We show that this plane can be used to discriminate between the star formation laws.
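
    The empirical laws referred to above relate the star formation rate surface density to the gas surface density; the Kennicutt-Schmidt relation has the widely quoted form (standard exponent, not necessarily the calibration adopted in GALFORM):

        \Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{1.4}

    whereas the Blitz & Rosolowsky and Krumholz, McKee & Tumlinson prescriptions instead tie star formation to the molecular phase of the gas.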

  11. Should we adjust for a confounder if empirical and theoretical criteria yield contradictory results? A simulation study

    PubMed Central

    Lee, Paul H.

    2014-01-01

    Confounders can be identified by one of two main strategies: empirical or theoretical. Although confounder identification strategies that combine empirical and theoretical strategies have been proposed, the need for adjustment remains unclear if the empirical and theoretical criteria yield contradictory results due to random error. We simulated several scenarios to mimic either the presence or the absence of a confounding effect and tested the accuracy of the exposure-outcome association estimates with and without adjustment. Various criteria were imposed (a significance criterion, and the change-in-estimate (CIE) criterion with a 10% cutoff and with a simulated cutoff), and a range of sample sizes was trialed. In the presence of a true confounding effect, unbiased estimates were obtained only by using the CIE criterion with a simulated cutoff. In the absence of a confounding effect, all criteria performed well regardless of adjustment. When the confounding factor was affected by both exposure and outcome, all criteria yielded accurate estimates without adjustment, but the adjusted estimates were biased. To conclude, theoretical confounders should be adjusted for regardless of the empirical evidence found. Adjusting for factors that do not have a confounding effect has minimal impact on the estimates. Potential confounders affected by both exposure and outcome should not be adjusted for. PMID:25124526
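
    A minimal sketch of the 10% change-in-estimate rule simulated above (hypothetical data-generating values and variable names; statsmodels assumed available):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 1000
        c = rng.normal(size=n)                          # candidate confounder
        x = 0.5 * c + rng.normal(size=n)                # exposure, affected by c
        y = 0.3 * x + 0.5 * c + rng.normal(size=n)      # outcome, affected by x and c

        # Exposure-outcome estimate without and with adjustment for c
        crude = sm.OLS(y, sm.add_constant(x)).fit().params[1]
        adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, c]))).fit().params[1]

        # CIE rule: adjust if adding c moves the exposure coefficient by >10%
        change = abs(adjusted - crude) / abs(crude)
        print(f"crude={crude:.3f}, adjusted={adjusted:.3f}, change={change:.1%}")
        if change > 0.10:
            print("CIE criterion met: treat c as a confounder and adjust for it")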

  12. Rural Employment, Migration, and Economic Development: Theoretical Issues and Empirical Evidence from Africa. Africa Rural Employment Paper No. 1.

    ERIC Educational Resources Information Center

    Byerlee, Derek; Eicher, Carl K.

    Employment problems in Africa were examined with special emphasis on rural employment and migration within the context of overall economic development. A framework was provided for analyzing rural employment in development; that framework was used to analyze empirical information from Africa; and theoretical issues were raised in analyzing rural…

  13. A Theoretical Analysis of Social Interactions in Computer-based Learning Environments: Evidence for Reciprocal Understandings.

    ERIC Educational Resources Information Center

    Jarvela, Sanna; Bonk, Curtis Jay; Lehtinen, Erno; Lehti, Sirpa

    1999-01-01

    Presents a theoretical and empirical analysis of social interactions in computer-based learning environments. Explores technology use to support reciprocal understanding between teachers and students based on three technology-based learning environments in Finland and the United States, and discusses situated learning, cognitive apprenticeships,…

  14. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    ERIC Educational Resources Information Center

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  15. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  16. Discovering the Neural Nature of Moral Cognition? Empirical, Theoretical, and Practical Challenges in Bioethical Research with Electroencephalography (EEG).

    PubMed

    Wagner, Nils-Frederic; Chaves, Pedro; Wolff, Annemarie

    2017-02-28

    In this article we critically review the neural mechanisms of moral cognition that have recently been studied via electroencephalography (EEG). Such studies promise to shed new light on traditional moral questions by helping us to understand how effective moral cognition is embodied in the brain. It has been argued that conflicting normative ethical theories require different cognitive features and can, accordingly, in a broadly conceived naturalistic attempt, be associated with different brain processes that are rooted in different brain networks and regions. This potentially morally relevant brain activity has been empirically investigated through EEG-based studies on moral cognition. From neuroscientific evidence gathered in these studies, a variety of normative conclusions have been drawn and bioethical applications have been suggested. We discuss methodological and theoretical merits and demerits of the attempt to use EEG techniques in a morally significant way, point to legal challenges and policy implications, indicate the potential to reveal biomarkers of psychopathological conditions, and consider issues that might inform future bioethical work.

  17. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  18. Culminating Experience Empirical and Theoretical Research Projects, University of Tennessee at Chattanooga, Spring, 2005

    ERIC Educational Resources Information Center

    Watson, Sandy White, Ed.

    2005-01-01

    This document represents a sample collection of master's theses from the University of Tennessee at Chattanooga's Teacher Education Program, spring semester, 2005. The majority of these student researchers were simultaneously student teaching while writing their theses. Studies were empirical and conceptual in nature and demonstrate some ways in…

  19. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  20. The Status of the Counseling Relationship: An Empirical Review, Theoretical Implications, and Research Directions.

    ERIC Educational Resources Information Center

    Sexton, Thomas L.; Whiston, Susan C.

    1994-01-01

    Reviews studies of counseling relationship, using Gelso and Carter's multidimensional model to summarize empirical support for "real,""unreal," and "working alliance" elements of relationship. Discussion of implications of potential model shift in thinking of counseling relationship outlines how adoption of social…

  1. Religious Identity Development of Adolescents in Religious Affiliated Schools. A Theoretical Foundation for Empirical Research

    ERIC Educational Resources Information Center

    Bertram-Troost, Gerdien D.; de Roos, Simone; Miedema, Siebren

    2006-01-01

    The question, how religious affiliated schools for secondary education shape religious education and what effects this education has on the religious identity development of pupils, is relevant in a time when the position of religious affiliated schools is highly disputable. In earlier empirical research on religious identity development of…

  2. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of the reigns of Roman emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  3. Use of forensic science in investigating crimes of sexual violence: contrasting its theoretical potential with empirical realities.

    PubMed

    Johnson, Donald; Peterson, Joseph; Sommers, Ira; Baskin, Deborah

    2012-02-01

    This article contrasts the theoretical potential of modern forensic science techniques in the investigation of sexual violence cases with empirical research that has assessed the role played by scientific evidence in the criminal justice processing of sexual assault cases. First, the potential of forensic scientific procedures (including DNA testing) is outlined and the sexual assault literature that examines the importance of physical and forensic evidence in resolving such cases is reviewed. Then, empirical data from a recent National Institute of Justice (NIJ) study of 602 rapes are presented that describe the forensic evidence collected and examined in such cases and its impact on decisions to arrest, prosecute, adjudicate, and sentence defendants. The article closes with a discussion of research and policy recommendations to enhance the role played by forensic science evidence in sexual assault investigations.

  4. Public Disaster Communication and Child and Family Disaster Mental Health: a Review of Theoretical Frameworks and Empirical Evidence.

    PubMed

    Houston, J Brian; First, Jennifer; Spialek, Matthew L; Sorenson, Mary E; Koch, Megan

    2016-06-01

    Children have been identified as particularly vulnerable to psychological and behavioral difficulties following disaster. Public child and family disaster communication is one public health tool that can be utilized to promote coping/resilience and ameliorate maladaptive child reactions following an event. We conducted a review of the public disaster communication literature and identified three main functions of child and family disaster communication: fostering preparedness, providing psychoeducation, and conducting outreach. Our review also indicates that schools are a promising system for child and family disaster communication. We complete our review with three conclusions. First, theoretically, there appears to be a great opportunity for public disaster communication focused on child disaster reactions. Second, empirical research assessing the effects of public child and family disaster communication is essentially nonexistent. Third, despite the lack of empirical evidence in this area, there is opportunity for public child and family disaster communication efforts that address new domains.

  5. Theoretical and empirical investigations of KCl:Eu2+ for nearly water-equivalent radiotherapy dosimetry

    PubMed Central

    Zheng, Yuanshui; Han, Zhaohui; Driewer, Joseph P.; Low, Daniel A.; Li, H. Harold

    2010-01-01

    Purpose: The low effective atomic number, reusability, and other computed radiography-related advantages make europium doped potassium chloride (KCl:Eu2+) a promising dosimetry material. The purpose of this study is to model KCl:Eu2+ point dosimeters with a Monte Carlo (MC) method and, using this model, to investigate the dose responses of two-dimensional (2D) KCl:Eu2+ storage phosphor films (SPFs). Methods: KCl:Eu2+ point dosimeters were irradiated using a 6 MV beam at four depths (5–20 cm) for each of five square field sizes (5×5–25×25 cm2). The dose measured by KCl:Eu2+ was compared to that measured by an ionization chamber to obtain the magnitude of energy dependent dose measurement artifact. The measurements were simulated using DOSXYZnrc with phase space files generated by BEAMnrcMP. Simulations were also performed for KCl:Eu2+ films with thicknesses ranging from 1 μm to 1 mm. The work function of the prototype KCl:Eu2+ material was determined by comparing the sensitivity of a 150 μm thick KCl:Eu2+ film to a commercial BaFBr0.85I0.15:Eu2+-based SPF with a known work function. The work function was then used to estimate the sensitivity of a 1 μm thick KCl:Eu2+ film. Results: The simulated dose responses of prototype KCl:Eu2+ point dosimeters agree well with measurement data acquired by irradiating the dosimeters in the 6 MV beam with varying field size and depth. Furthermore, simulations with films demonstrate that an ultrathin KCl:Eu2+ film with thickness of the order of 1 μm would have nearly water-equivalent dose response. The simulation results can be understood using classic cavity theories. Finally, preliminary experiments and theoretical calculations show that ultrathin KCl:Eu2+ film could provide excellent signal in a 1 cGy dose-to-water irradiation. Conclusions: In conclusion, the authors demonstrate that KCl:Eu2+-based dosimeters can be accurately modeled by a MC method and that 2D KCl:Eu2+ films of the order of 1 μm thick would have

  6. Empirically Based Comprehensive Treatment Program for Parasuicide.

    ERIC Educational Resources Information Center

    Clum, George A.; And Others

    1979-01-01

    Suggests secondary parasuicide prevention is the most viable path for future research. Aggressive case findings and primary prevention approaches have failed to reduce suicide attempt rates. A secondary prevention model, based on factors predictive of parasuicide, was developed. Stress reduction and cognitive restructuring were primary goals of…

  7. Mechanisms of risk and resilience in military families: theoretical and empirical basis of a family-focused resilience enhancement program.

    PubMed

    Saltzman, William R; Lester, Patricia; Beardslee, William R; Layne, Christopher M; Woodward, Kirsten; Nash, William P

    2011-09-01

    Recent studies have confirmed that repeated wartime deployment of a parent exacts a toll on military children and families and that the quality and functionality of familial relations is linked to force preservation and readiness. As a result, family-centered care has increasingly become a priority across the military health system. FOCUS (Families OverComing Under Stress), a family-centered, resilience-enhancing program developed by a team at UCLA and Harvard Schools of Medicine, is a primary initiative in this movement. In a large-scale implementation project initiated by the Bureau of Navy Medicine, FOCUS has been delivered to thousands of Navy, Marine, Navy Special Warfare, Army, and Air Force families since 2008. This article describes the theoretical and empirical foundation and rationale for FOCUS, which is rooted in a broad conception of family resilience. We review the literature on family resilience, noting that an important next step in building a clinically useful theory of family resilience is to move beyond developing broad "shopping lists" of risk indicators by proposing specific mechanisms of risk and resilience. Based on the literature, we propose five primary risk mechanisms for military families and common negative "chain reaction" pathways through which they undermine the resilience of families contending with wartime deployments and parental injury. In addition, we propose specific mechanisms that mobilize and enhance resilience in military families and that comprise central features of the FOCUS Program. We describe these resilience-enhancing mechanisms in detail, followed by a discussion of the ways in which evaluation data from the program's first 2 years of operation supports the proposed model and the specified mechanisms of action.

  8. Empirically Based Strategies for Preventing Juvenile Delinquency.

    PubMed

    Pardini, Dustin

    2016-04-01

    Juvenile crime is a serious public health problem that results in significant emotional and financial costs for victims and society. Using etiologic models as a guide, multiple interventions have been developed to target risk factors thought to perpetuate the emergence and persistence of delinquent behavior. Evidence suggests that the most effective interventions tend to have well-defined treatment protocols, focus on therapeutic approaches as opposed to external control techniques, and use multimodal cognitive-behavioral treatment strategies. Moving forward, there is a need to develop effective policies and procedures that promote the widespread adoption of evidence-based delinquency prevention practices across multiple settings.

  9. Image-Based Empirical Modeling of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Adrian, Mark L.; Gallagher, D. L.

    2008-01-01

    A new suite of empirical models of plasmaspheric plasma based on remote, global images from the IMAGE EUV instrument is proposed for development. The purpose of these empirical models is to establish the statistical properties of the plasmasphere as a function of conditions. This suite of models will mark the first time the plasmaspheric plume is included in an empirical model. Development of these empirical plasmaspheric models will support synoptic studies (such as for wave propagation and growth, energetic particle loss through collisions and dust transport as influenced by charging) and serves as a benchmark against which physical models can be tested. The ability to know that a specific global density distribution occurs in response to specific magnetospheric and solar wind factors is a huge advantage over all previous in-situ based empirical models. The consequence of creating these new plasmaspheric models will be to provide much higher fidelity and much richer quantitative descriptions of the statistical properties of plasmaspheric plasma in the inner magnetosphere, whether that plasma is in the main body of the plasmasphere, nearby during recovery or in the plasmaspheric plume. Model products to be presented include statistical probabilities for being in the plasmasphere, near thermal He+ density boundaries and the complexity of its spatial structure.

  10. An improved theoretical approach to the empirical corrections of density functional theory

    NASA Astrophysics Data System (ADS)

    Lii, Jenn-Huei; Hu, Ching-Han

    2012-02-01

    An empirical correction to density functional theory (DFT) has been developed in this study. The approach, called correlation corrected atomization-dispersion (CCAZD), involves short- and long-range terms. The short-range correction consists of bond (1,2-) and angle (1,3-) interactions, which remedies the deficiency of DFT in describing proto-branching stabilization effects. The long-range correction includes a Buckingham potential function aiming to account for the dispersion interactions. The empirical corrections of DFT were parameterized to reproduce reported ΔHf values of a training set containing alkane, alcohol and ether molecules. The ΔHf values of the training set molecules predicted by the CCAZD method combined with two different DFT methods, B3LYP and MPWB1K, with a 6-31G* basis set agreed well with the experimental data. For 106 alkane, alcohol and ether compounds, the average absolute deviations (AADs) in ΔHf were 0.45 and 0.51 kcal/mol for B3LYP- and MPWB1K-CCAZD, respectively. Calculations of isomerization energies, rotational barriers and conformational energies further validated the CCAZD approach. The isomerization energies improved significantly with the CCAZD treatment. The AADs for 22 isomerization reaction energies decreased from 3.55 and 2.44 to 0.55 and 0.82 kcal/mol for B3LYP and MPWB1K, respectively. This study also provided predictions of MM4, G3, CBS-QB3 and B2PLYP-D for comparison. The final test of the CCAZD approach, on the calculation of the cellobiose analog potential surface, also showed promising results. This study demonstrated that DFT calculations with CCAZD empirical corrections achieved very good agreement with reported values for various chemical reactions with a basis set as small as 6-31G*.
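
    The long-range dispersion term mentioned above is a Buckingham-type potential, whose generic form is (standard expression; the parameters fitted in CCAZD are not reproduced here):

        E(r) = A e^{-Br} - \frac{C}{r^{6}}

    where r is the interatomic distance, the exponential term models short-range repulsion, and the r^{-6} term the attractive dispersion interaction.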

  11. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    ERIC Educational Resources Information Center

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  12. Multiple Embedded Inequalities and Cultural Diversity in Educational Systems: A Theoretical and Empirical Exploration

    ERIC Educational Resources Information Center

    Verhoeven, Marie

    2011-01-01

    This article explores the social construction of cultural diversity in education, with a view to social justice. It examines how educational systems organize ethno-cultural difference and how this process contributes to inequalities. Theoretical resources are drawn from social philosophy as well as from recent developments in social organisation…

  13. Rural Schools, Social Capital and the Big Society: A Theoretical and Empirical Exposition

    ERIC Educational Resources Information Center

    Bagley, Carl; Hillyard, Sam

    2014-01-01

    The paper commences with a theoretical exposition of the current UK government's policy commitment to the idealised notion of the Big Society and the social capital currency underpinning its formation. The paper positions this debate in relation to the rural and adopts an ethnographically-informed methodological approach to provide an in-depth…

  14. Corrective Feedback in L2 Writing: Theoretical Perspectives, Empirical Insights, and Future Directions

    ERIC Educational Resources Information Center

    Van Beuningen, Catherine

    2010-01-01

    The role of (written) corrective feedback (CF) in the process of acquiring a second language (L2) has been an issue of considerable controversy among theorists and researchers alike. Although CF is a widely applied pedagogical tool and its use finds support in SLA theory, practical and theoretical objections to its usefulness have been raised…

  15. Cryoprotective agent and temperature effects on human sperm membrane permeabilities: convergence of theoretical and empirical approaches for optimal cryopreservation methods.

    PubMed

    Gilmore, J A; Liu, J; Woods, E J; Peter, A T; Critser, J K

    2000-02-01

    Previous reports have left unresolved discrepancies between human sperm cryopreservation methods developed using theoretical optimization approaches and those developed empirically. This study was designed to investigate possible reasons for the discrepancies. Human spermatozoa were exposed to 1 mol/l glycerol, 1 mol/l dimethyl sulphoxide (DMSO), 1 mol/l propylene glycol (PG) or 2 mol/l ethylene glycol (EG) at 22, 11 and 0 degrees C, then returned to isosmotic media while changes in cell volume were monitored. Activation energies (Ea) of the hydraulic conductivity (Lp) in the presence of cryoprotective agents (CPA), Lp(CPA), were 22.2 (DMSO), 11.9 (glycerol), 15.8 (PG), and 7.8 (EG) kcal/mol. The Ea values of the membrane permeability to CPA (PCPA) were 12.1 (DMSO), 10.4 (glycerol), 8.6 (PG) and 8.0 (EG) kcal/mol. These data indicated that even at low temperatures, EG permeates fastest. The high Lp(CPA) in the presence of EG and the low associated Ea would allow spermatozoa to remain closer to equilibrium with the extracellular solution during slow cooling in the presence of ice. Collectively, these data suggest that the increase of the Ea of Lp in the presence of CPA at low temperature is the likely reason for the observed discrepancy between theoretical predictions of spermatozoa freezing response and empirical data.
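
    The activation energies reported above come from Arrhenius analysis of permeability-temperature data; a minimal sketch of that calculation (hypothetical Lp values and illustrative units, not the study's measurements):

        import numpy as np

        # Hypothetical hydraulic conductivity (Lp) values at the three
        # experimental temperatures (22, 11, 0 degrees C), in kelvin
        T = np.array([295.15, 284.15, 273.15])   # K
        Lp = np.array([2.0, 0.9, 0.4])           # hypothetical permeability values

        R = 1.987e-3  # gas constant in kcal/(mol K), matching the units above

        # Arrhenius: ln(Lp) = ln(A) - Ea/(R*T); regressing ln(Lp) on 1/T
        # gives a slope of -Ea/R
        slope, intercept = np.polyfit(1.0 / T, np.log(Lp), deg=1)
        Ea = -slope * R
        print(f"Estimated activation energy: {Ea:.1f} kcal/mol")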

  16. Empirical Likelihood-Based Confidence Interval of ROC Curves.

    PubMed

    Su, Haiyan; Qin, Yongsong; Liang, Hua

    2009-11-01

    In this article we propose an empirical likelihood-based confidence interval for receiver operating characteristic curves which are based on a continuous-scale test. The approach is easily understood, simply implemented, and computationally efficient. The results from our simulation studies indicate that the finite-sample numerical performance slightly outperforms the most promising methods published recently. Two real datasets are analyzed by using the proposed method and the existing bootstrap-based method.
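
    For context, the competing bootstrap-based approach mentioned above can be sketched as a percentile bootstrap for the ROC value at one fixed false-positive rate (hypothetical scores; NumPy only):

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical continuous-scale test scores for the two groups
        healthy = rng.normal(0.0, 1.0, size=200)
        diseased = rng.normal(1.0, 1.0, size=150)

        def roc_at(fpr, h, d):
            # Sensitivity when the threshold is set to give false-positive rate fpr
            threshold = np.quantile(h, 1.0 - fpr)
            return np.mean(d > threshold)

        # Percentile bootstrap CI for the ROC curve value at FPR = 0.1
        boot = [roc_at(0.1,
                       rng.choice(healthy, size=healthy.size, replace=True),
                       rng.choice(diseased, size=diseased.size, replace=True))
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"ROC(0.1) = {roc_at(0.1, healthy, diseased):.3f}, "
              f"95% bootstrap CI: ({lo:.3f}, {hi:.3f})")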

  17. Responses to Commentaries on Advances in Empirically Based Assessment.

    ERIC Educational Resources Information Center

    McConaughy, Stephanie H.

    1993-01-01

    Author of article (this issue) describing research program to advance assessment of children's behavioral and emotional problems; presenting conceptual framework for multiaxial empirically based assessment; and summarizing research efforts to develop cross-informant scales for scoring parent, teacher, and self-reports responds to commentaries on…

  18. Empirical Data Sets for Agent Based Modeling of Crowd Scenarios

    DTIC Science & Technology

    2009-08-06

    Crowd research involves large numbers of heterogeneous individual actors, interdependence, and language barriers. These features make empirical testing difficult, and simulations require models based on real data, otherwise they are fiction.

  19. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    ERIC Educational Resources Information Center

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  20. Chronic Pain in a Couples Context: A Review and Integration of Theoretical Models and Empirical Evidence

    PubMed Central

    Leonard, Michelle T.; Cano, Annmarie; Johansen, Ayna B.

    2007-01-01

    Researchers have become increasingly interested in the social context of chronic pain conditions. The purpose of this article is to provide an integrated review of the evidence linking marital functioning with chronic pain outcomes including pain severity, physical disability, pain behaviors, and psychological distress. We first present an overview of existing models that identify an association between marital functioning and pain variables. We then review the empirical evidence for a relationship between pain variables and several marital functioning variables including marital satisfaction, spousal support, spouse responses to pain, and marital interaction. On the basis of the evidence, we present a working model of marital and pain variables, identify gaps in the literature, and offer recommendations for research and clinical work. Perspective: The authors provide a comprehensive review of the relationships between marital functioning and chronic pain variables to advance future research and help treatment providers understand marital processes in chronic pain. PMID:16750794

  1. Three essays on energy and environmental economics: Empirical, applied, and theoretical

    NASA Astrophysics Data System (ADS)

    Karney, Daniel Houghton

    Energy and environmental economics are closely related fields, as nearly all forms of energy production generate pollution and thus nearly all forms of environmental policy affect energy production and consumption. The three essays in this dissertation are related by their common themes of energy and environmental economics, but they differ in their methodologies. The first chapter is an empirical exercise that looks at the relationship between electricity price deregulation and maintenance outages at nuclear power plants. The second chapter is an applied theory paper that investigates environmental regulation in a multiple pollutants setting. The third chapter develops a new methodology regarding the construction of analytical general equilibrium models that can be used to study topics in energy and environmental economics.

  2. Guiding Empirical and Theoretical Explorations of Organic Matter Decay By Synthesizing Temperature Responses of Enzyme Kinetics, Microbes, and Isotope Fluxes

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Ballantyne, F.; Lehmeier, C.; Min, K.

    2014-12-01

    Soil organic matter (SOM) transformation rates generally increase with temperature, but whether this is realized depends on soil-specific features. To develop predictive models applicable to all soils, we must understand two key, ubiquitous features of SOM transformation: the temperature sensitivity of myriad enzyme-substrate combinations and temperature responses of microbial physiology and metabolism, in isolation from soil-specific conditions. Predicting temperature responses of production of CO2 vs. biomass is also difficult due to soil-specific features: we cannot know either the identity of the active microbes or the substrates they employ. We highlight how recent empirical advances describing SOM decay can help develop theoretical tools relevant across diverse spatial and temporal scales. At a molecular level, temperature effects on purified enzyme kinetics reveal distinct temperature sensitivities of decay of diverse SOM substrates. Such data help quantify the influence of microbial adaptations and edaphic conditions on decay, have permitted computation of the relative availability of carbon (C) and nitrogen (N) liberated upon decay, and can be used with recent theoretical advances to predict changes in mass specific respiration rates as microbes maintain biomass C:N with changing temperature. Enhancing system complexity, we can subject microbes to temperature changes while controlling growth rate and without altering substrate availability or identity of the active population, permitting calculation of variables typically inferred in soils: microbial C use efficiency (CUE) and isotopic discrimination during C transformations. Quantified declines in CUE with rising temperature are critical for constraining model CUE estimates, and known changes in δ13C of respired CO2 with temperature are useful for interpreting δ13C-CO2 at diverse scales. We suggest empirical studies important for advancing knowledge of how microbes respond to temperature, and ideas for theoretical
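
    To make the substrate-specific temperature sensitivities concrete, here is a minimal Python sketch of Arrhenius-type relative rates and the resulting Q10 factors (the activation energies are purely hypothetical, since the abstract does not list values):

        import numpy as np

        R = 8.314e-3  # kJ/(mol K)

        def relative_rate(ea_kj_mol, t_c, t_ref_c=15.0):
            """k(T)/k(T_ref) for a substrate-enzyme pair with activation energy Ea."""
            t, t_ref = t_c + 273.15, t_ref_c + 273.15
            return np.exp(-ea_kj_mol / R * (1.0 / t - 1.0 / t_ref))

        # Hypothetical Ea values; higher Ea means stronger temperature sensitivity.
        for name, ea in [("labile substrate", 40.0), ("recalcitrant substrate", 80.0)]:
            q10 = relative_rate(ea, 25.0) / relative_rate(ea, 15.0)
            print(f"{name}: Q10 = {q10:.2f}")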

  3. Theoretical performance assessment and empirical analysis of super-resolution under unknown affine sensor motion.

    PubMed

    Thelen, Brian J; Valenzuela, John R; LeBlanc, Joel W

    2016-04-01

    This paper deals with super-resolution (SR) processing and associated theoretical performance assessment for under-sampled video data collected from a moving imaging platform with unknown motion and assuming a relatively flat scene. This general scenario requires joint estimation of the high-resolution image and the parameters that determine a projective transform that relates the collected frames to one another. A quantitative assessment of the variance in the random error as achieved through a joint-estimation approach (e.g., SR image reconstruction and motion estimation) is carried out via the general framework of M-estimators and asymptotic statistics. This approach provides a performance measure on estimating the fine-resolution scene when there is a lack of perspective information and represents a significant advancement over previous work that considered only the more specific scenario of mis-registration. A succinct overview of the theoretical framework is presented along with some specific results on the approximate random error for the case of unknown translation and affine motions. A comparison is given between the approximated random error and that actually achieved by an M-estimator approach to the joint-estimation problem. These results provide insight on the reduction in SR reconstruction accuracy when jointly estimating unknown inter-frame affine motion.
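
    The affine-motion part of the joint estimation problem can be illustrated with a toy least-squares registration step; this sketch recovers only the six affine parameters from point correspondences and omits the coupled high-resolution image estimate that the paper's M-estimator handles (all names and the synthetic data are illustrative):

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares affine map dst ~ src @ A.T + t from matched points."""
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            X = np.hstack([src, np.ones((len(src), 1))])      # [x, y, 1] design matrix
            params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # shape (3, 2)
            return params[:2].T, params[2]

        # Synthetic check: recover a known affine motion from noisy correspondences.
        rng = np.random.default_rng(0)
        A_true, t_true = np.array([[1.01, 0.02], [-0.015, 0.99]]), np.array([0.5, -0.3])
        src = rng.uniform(0, 64, size=(100, 2))
        dst = src @ A_true.T + t_true + rng.normal(0, 0.05, size=(100, 2))
        A_hat, t_hat = fit_affine(src, dst)  # close to A_true, t_true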

  4. Pharmaceuticals, political money, and public policy: a theoretical and empirical agenda.

    PubMed

    Jorgensen, Paul D

    2013-01-01

    Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting.

  5. The Operation Was a Success, but the Patient Died: Theoretical Orthodoxy versus Empirical Validation.

    ERIC Educational Resources Information Center

    Hershenson, David B.

    1992-01-01

    Reviews issues raised in ongoing debate between advocates of eclecticism and proponents of single-theory-based counseling. Sees essential issue for field of mental health counseling to be need to build theory base specific to profession. Asserts that adequate theory must be based on defining principles of mental health counseling profession and…

  6. Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence

    PubMed Central

    Pugh, Jonathan; Kahane, Guy; Maslen, Hannah; Savulescu, Julian

    2016-01-01

    Abstract Background: There is a lack of empirical data on lay attitudes toward different sorts of deception in medicine. However, lay attitudes toward deception should be taken into account when we consider whether deception is ever permissible in a medical context. The objective of this study was to examine lay attitudes of U.S. citizens toward different sorts of deception across different medical contexts. Methods: A one-time online survey was administered to U.S. users of the Amazon “Mechanical Turk” website. Participants were asked to answer questions regarding a series of vignettes depicting different sorts of deception in medical care, as well as a question regarding their general attitudes toward truth-telling. Results: Of the 200 respondents, the majority found the use of placebos in different contexts to be acceptable following partial disclosure but found it to be unacceptable if it involved outright lying. Also, 55.5% of respondents supported the use of sham surgery in clinical research, although 55% claimed that it would be unacceptable to deceive patients in this research, even if this would improve the quality of the data from the study. Respondents supported fully informing patients about distressing medical information in different contexts, especially when the patient is suffering from a chronic condition. In addition, 42.5% of respondents believed that it is worse to deceive someone by providing the person with false information than it is to do so by giving the person true information that is likely to lead them to form a false belief, without telling them other important information that shows it to be false. However, 41.5% believed that the two methods of deception were morally equivalent. Conclusions: Respondents believed that some forms of deception were acceptable in some circumstances. While the majority of our respondents opposed outright lying in medical contexts, they were prepared to support partial disclosure and the use of

  7. Picture-word interference is a Stroop effect: A theoretical analysis and new empirical findings.

    PubMed

    Starreveld, Peter A; La Heij, Wido

    2016-10-06

    The picture-word interference (PWI) paradigm and the Stroop color-word interference task are often assumed to reflect the same underlying processes. On the basis of a PRP study, Dell'Acqua et al. (Psychonomic Bulletin & Review, 14: 717-722, 2007) argued that this assumption is incorrect. In this article, we first discuss the definitions of Stroop- and picture-word interference. Next, we argue that both effects consist of at least four components that correspond to four characteristics of the distractor word: (1) response-set membership, (2) task relevance, (3) semantic relatedness, and (4) lexicality. On the basis of this theoretical analysis, we conclude that the typical Stroop effect and the typical PWI effect mainly differ in the relative contributions of these four components. Finally, the results of an interference task are reported in which only the nature of the target - color or picture - was manipulated and all other distractor task characteristics were kept constant. The results showed no difference between color and picture targets with respect to all behavioral measures examined. We conclude that the assumption that the same processes underlie verbal interference in color and picture naming is warranted.

  8. Linking predator risk and uncertainty to adaptive forgetting: a theoretical framework and empirical test using tadpoles.

    PubMed

    Ferrari, Maud C O; Brown, Grant E; Bortolotti, Gary R; Chivers, Douglas P

    2010-07-22

    Hundreds of studies have examined how prey animals assess their risk of predation. These studies work from the basic tenet that prey need to continually balance the conflicting demands of predator avoidance with activities such as foraging and reproduction. The information that animals gain regarding local predation risk is most often learned. Yet, the concept of 'memory' in the context of predation remains virtually unexplored. Here, our goal was (i) to determine if the memory window associated with predator recognition is fixed or flexible and, if it is flexible, (ii) to identify which factors affect the length of this window and in which ways. We performed an experiment on larval wood frogs, Rana sylvatica, to test whether the risk posed by, and the uncertainty associated with, the predator would affect the length of the tadpoles' memory window. We found that as the risk associated with the predator increases, tadpoles retained predator-related information for longer. Moreover, if the uncertainty about predator-related information increases, then prey use this information for a shorter period. We also present a theoretical framework aiming at highlighting both intrinsic and extrinsic factors that could affect the memory window of information use by prey individuals.

  9. Adult Coping with Childhood Sexual Abuse: A Theoretical and Empirical Review

    PubMed Central

    Walsh, Kate; Fortier, Michelle A.; DiLillo, David

    2009-01-01

    Coping has been suggested as an important element in understanding the long-term functioning of individuals with a history of child sexual abuse (CSA). The present review synthesizes the literature on coping with CSA, first by examining theories of coping with trauma, and, second by examining how these theories have been applied to studies of coping in samples of CSA victims. Thirty-nine studies were reviewed, including eleven descriptive studies of the coping strategies employed by individuals with a history of CSA, eighteen correlational studies of the relationship between coping strategies and long-term functioning of CSA victims, and ten investigations in which coping was examined as a mediational factor in relation to long-term outcomes. These studies provide initial information regarding early sexual abuse and subsequent coping processes. However, this literature is limited by several theoretical and methodological issues, including a failure to specify the process of coping as it occurs, a disparity between theory and research, and limited applicability to clinical practice. Future directions of research are discussed and include the need to understand coping as a process, identification of coping in relation to adaptive outcomes, and considerations of more complex mediational and moderational processes in the study of coping with CSA. PMID:20161502

  10. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

    This paper examines innovation in information technology in both a theoretical and an empirical study. Both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. The paper discusses the major aspects of innovation, information technology, performance, and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) addresses SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are introduced briefly in the Introduction and discussed in more detail in the other sections of this paper, in particular in the Literature Review, in terms of classical and current references. The increase in SMQR claims and communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still relied on e-mail, lengthened claim settlement times and ultimately led suppliers to reject SMQR claims. To address this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system that was analyzed and designed is expected to facilitate the claim communication process so that it runs in accordance with procedure, meets the claim settlement time target, and eliminates the difficulties and problems of the previous manual e-mail-based communication. The system was designed using the system development life cycle method of Kendall & Kendall (2006); the design covers the SMQR problem communication process, the judgment process by the supplier, the claim process, the claim payment process, and the claim monitoring process. After an appropriate system design for managing SMQR claims was obtained, the system was implemented, and the improvement in claim communication

  11. Reversing Language Shift: Theoretical and Empirical Foundations of Assistance to Threatened Languages. Multilingual Matters Series: 76.

    ERIC Educational Resources Information Center

    Fishman, Joshua A.

    The theory and practice of assistance to speech communities whose native languages are threatened are examined. The discussion focuses on why most efforts to reverse language shift are unsuccessful or even harmful, diagnosing difficulties and prescribing alternatives based on a combination of ethnolinguistic, sociocultural, and econotechnical…

  12. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  13. Student Conceptual Level and Models of Teaching: Theoretical and Empirical Coordination of Two Models

    ERIC Educational Resources Information Center

    Hunt, David E.; And Others

    1974-01-01

    The studies described here are the first in a series of investigations of the teaching-learning process based on Kurt Lewin's B-P-E paradigm (learning outcomes are a result of the interactive effects of different kinds of students and different kinds of teaching approaches). (JA)

  14. Coping, acculturation, and psychological adaptation among migrants: a theoretical and empirical review and synthesis of the literature.

    PubMed

    Kuo, Ben C H

    2014-01-01

    Given the continuous, dynamic demographic changes internationally due to intensive worldwide migration and globalization, the need to more fully understand how migrants adapt and cope with acculturation experiences in their new host cultural environment is imperative and timely. However, a comprehensive review of what we currently know about the relationship between coping behavior and acculturation experience for individuals undergoing cultural changes has not yet been undertaken. Hence, the current article aims to compile, review, and examine cumulative cross-cultural psychological research that sheds light on the relationships among coping, acculturation, and psychological and mental health outcomes for migrants. To this end, this present article reviews prevailing literature pertaining to: (a) the stress and coping conceptual perspective of acculturation; (b) four theoretical models of coping, acculturation and cultural adaptation; (c) differential coping pattern among diverse acculturating migrant groups; and (d) the relationship between coping variabilities and acculturation levels among migrants. In terms of theoretical understanding, this review points to the relative strengths and limitations associated with each of the four theoretical models on coping-acculturation-adaptation. These theories and the empirical studies reviewed in this article further highlight the central role of coping behaviors/strategies in the acculturation process and outcome for migrants and ethnic populations, both conceptually and functionally. Moreover, the review shows that across studies culturally preferred coping patterns exist among acculturating migrants and migrant groups and vary with migrants' acculturation levels. Implications and limitations of the existing literature for coping, acculturation, and psychological adaptation research are discussed and recommendations for future research are put forth.

  15. Theoretical and empirical study of 2-biphenylmethanol molecule: the structure and intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Babkov, L. M.; Baran, J.; Davydova, N. A.; Pietraszko, A.; Uspenskiy, K. E.

    2005-06-01

    The crystal structure of 2-biphenylmethanol has been studied by X-ray crystallography at room temperature and its IR transmittance spectra have been measured in the wide frequency region 400-4000 cm-1. The structure, energy, electrooptical parameters, frequencies and intensities in the IR spectra for the free molecules of 2-biphenylmethanol, methanol, and tetramer of hydrogen-bonded methanol molecules have been calculated at the B3LYP level of the density functional theory with the 6-31G* basis set. Based on analysis of the obtained results the interpretation of the IR spectra for room temperature was given and estimation of the hydrogen bonds energy has been done.
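
    For readers who want to reproduce a calculation at this level of theory, here is a minimal single-point B3LYP/6-31G* sketch for a free methanol molecule using the open-source PySCF package (not the software used in the study; the geometry below is approximate and purely illustrative):

        from pyscf import gto, dft

        # Approximate methanol geometry in Angstrom (illustrative, not optimized).
        mol = gto.M(
            atom="""C  0.000  0.000  0.000
                    O  1.430  0.000  0.000
                    H -0.390  1.030  0.000
                    H -0.390 -0.520  0.890
                    H -0.390 -0.520 -0.890
                    H  1.750  0.890  0.000""",
            basis="6-31g*",
        )
        mf = dft.RKS(mol)
        mf.xc = "b3lyp"
        e_tot = mf.kernel()  # total SCF energy in Hartree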

  16. Theoretical and empirical investigation of the structure and intermolecular interactions in 2-biphenylmethanol

    NASA Astrophysics Data System (ADS)

    Uspenskiy, K. E.; Babkov, L. M.; Baran, J.; Davydova, N. A.; Pietraszko, A.

    2005-06-01

    The crystal structure of 2-biphenylmethanol has been studied by X-ray crystallography at room temperature and its IR transmittance spectra have been measured in the wide frequency region 400-4000 cm-1. The structure, energy, electrooptical parameters, frequencies and intensities in the IR spectra for the free molecules of 2-biphenylmethanol, methanol, and tetramer of hydrogen-bonded methanol molecules have been calculated at the B3LYP level of the density functional theory with the 6-31G* basis set. Based on analysis of the obtained results the interpretation of the IR spectra for room temperature was given and estimation of the hydrogen bonds energy has been done.

  17. "The liability of newness" revisited: Theoretical restatement and empirical testing in emergent organizations.

    PubMed

    Yang, Tiantian; Aldrich, Howard E

    2017-03-01

    The mismatch between Stinchcombe's original propositions regarding "the liability of newness" and subsequent attempts to test those propositions suggests to us that the form and causes of the liability remain open to further investigation. Taking organizational emergence as a process comprising entrepreneurs engaging in actions that produce outcomes, we propose hypotheses about the social mechanisms of organizational construction involved in investing resources, developing routines, and maintaining boundaries. Distinguishing between initial founding conditions versus subsequent activities, our results not only confirm the liability of newness hypothesis, but also reveal a much higher risk of failure in organizations' early lifetime than rates found in previous research. Moreover, our results highlight the importance of entrepreneurs' continuing effort after their initial organizing attempts. Whereas only a few initial founding conditions lower the risk of failure, subsequent entrepreneurial activities play a major role in keeping the venture alive. Entrepreneurs contribute to whether a venture survives through raising more resources, enacting routines, and gaining increased public recognition of organizational boundaries. After controlling for financial performance, our results still hold. Based on our analysis, we offer suggestions for theory and research on organizations and entrepreneurship.
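
    The elevated early failure risk the authors report is the kind of pattern a nonparametric survival estimate makes visible; here is a minimal sketch with the lifelines library and invented venture lifetimes (the data and variable names are hypothetical, not the study's):

        import numpy as np
        from lifelines import KaplanMeierFitter

        # Hypothetical venture lifetimes in months; 1 = disbanded, 0 = still alive (censored).
        durations = np.array([3, 7, 12, 14, 20, 24, 24, 30, 36, 48])
        disbanded = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 0])

        kmf = KaplanMeierFitter()
        kmf.fit(durations, event_observed=disbanded)
        print(kmf.survival_function_)  # a steep initial drop reflects the liability of newness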

  18. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    PubMed Central

    Rönnberg, Jerker; Lunner, Thomas; Zekveld, Adriana; Sörqvist, Patrik; Danielsson, Henrik; Lyxell, Björn; Dahlström, Örjan; Signoret, Carine; Stenfelt, Stefan; Pichora-Fuller, M. Kathleen; Rudner, Mary

    2013-01-01

    Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made. PMID:23874273

  19. Theoretical and empirical low perigee aerodynamic heating during orbital flight of an atmosphere explorer

    NASA Technical Reports Server (NTRS)

    Caruso, P. S., Jr.; Naegeli, C. R.

    1976-01-01

    This document presents the results of an extensive, low perigee, orbital aerodynamic heating study undertaken in support of the Atmosphere Explorer-C Temperature Alarm. Based upon in-flight orbital temperature data from the Temperature Alarm tungsten resistance wire thermometer, aerodynamic heating rates have been determined for eight selected orbits by means of a reduced thermal analytical model verified by both ground test and flight data. These heating rates are compared with the classical free molecular and first order collision regime values. It has been concluded that, for engineering purposes, the aerodynamic heating rate of atmospheric gases at perigee altitudes between 170 and 135 km on pure tungsten wire is 30 to 60% of the value set by the classical free molecular limit. Relative to the more usual orbital thermal input attributable to direct solar radiation, the aerodynamic heating rate at the lowest altitude attempted with the spacecraft despun (135 km) is the equivalent of about 1.2 solar constants incident on a tungsten wire with a solar absorptivity of 0.85.

  20. Periodic limb movements of sleep: empirical and theoretical evidence supporting objective at-home monitoring

    PubMed Central

    Moro, Marilyn; Goparaju, Balaji; Castillo, Jelina; Alameddine, Yvonne; Bianchi, Matt T

    2016-01-01

    Introduction: Periodic limb movements of sleep (PLMS) may increase cardiovascular and cerebrovascular morbidity. However, most people with PLMS are either asymptomatic or have nonspecific symptoms. Therefore, predicting elevated PLMS in the absence of restless legs syndrome remains an important clinical challenge. Methods: We undertook a retrospective analysis of demographic data, subjective symptoms, and objective polysomnography (PSG) findings in a clinical cohort with or without obstructive sleep apnea (OSA) from our laboratory (n=443 with OSA, n=209 without OSA). Correlation analysis and regression modeling were performed to determine predictors of periodic limb movement index (PLMI). Markov decision analysis with TreeAge software compared strategies to detect PLMS: in-laboratory PSG, at-home testing, and a clinical prediction tool based on the regression analysis. Results: Elevated PLMI values (>15 per hour) were observed in >25% of patients. PLMI values in No-OSA patients correlated with age, sex, self-reported nocturnal leg jerks, restless legs syndrome symptoms, and hypertension. In OSA patients, PLMI correlated only with age and self-reported psychiatric medications. Regression models indicated only a modest predictive value of demographics, symptoms, and clinical history. Decision modeling suggests that at-home testing is favored as the pretest probability of PLMS increases, given plausible assumptions regarding PLMS morbidity, costs, and assumed benefits of pharmacological therapy. Conclusion: Although elevated PLMI values were commonly observed, routinely acquired clinical information had only weak predictive utility. As the clinical importance of elevated PLMI continues to evolve, it is likely that objective measures such as PSG or at-home PLMS monitors will prove increasingly important for clinical and research endeavors. PMID:27540316

  1. Theoretical and Empirical Study of Visual Cortex Using the Bcm Neural Network Model

    NASA Astrophysics Data System (ADS)

    Clothiaux, Eugene Edmund

    1990-01-01

    Most neurons in kitten visual cortex respond to specific visual stimuli that are presented to either one or both eyes. These selective response properties of neurons in kitten visual cortex depend on the visual environment that is experienced during a critical period of postnatal development. For example, closing one eye (monocular deprivation) leads to the loss of that eye's ability to elicit a cortical response and any manipulation that causes the two eyes to become misaligned produces abnormal neuronal responses as well. A number of proposed learning rules attempt to account for these observations. The learning rule considered in this thesis is the cortical synaptic plasticity theory of Bienenstock, Cooper and Munro (1982), which uses a sliding modification threshold based on average cell activity to determine the changes made to the synaptic weights. A detailed investigation of the BCM model is undertaken with special emphasis on finding parameter sets that produce rates of change of the BCM model synapses consistent with experiment. Computer simulations using the developed parameter sets indicate that the BCM theory does indeed capture the essential features of a broad range of experimental results. The BCM theory also makes one clear prediction: during monocular deprivation the closed eye's loss of response occurs because of the highly specific response properties of the open eye. An analysis of an experiment designed to test this prediction of the theory is described in detail. The results of the analysis do indicate a slight correlation between open eye selectivity and closed eye responsiveness; however, the results are not highly statistically significant.
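
    The learning rule under study can be written compactly: for input x and linear response y = w·x, the BCM update is dw/dt = η y(y − θ)x, with the modification threshold θ sliding toward the recent average of y². A minimal simulation sketch follows (parameter values are illustrative, not the thesis's calibrated sets):

        import numpy as np

        def bcm_step(w, x, theta, eta=0.01, tau=100.0):
            """One BCM update: potentiate when y > theta, depress when y < theta."""
            y = float(w @ x)
            w = w + eta * y * (y - theta) * x
            theta = theta + (y**2 - theta) / tau  # sliding threshold tracks <y^2>
            return w, theta

        rng = np.random.default_rng(1)
        w, theta = rng.uniform(0.1, 0.2, size=2), 0.05
        patterns = np.array([[1.0, 0.1], [0.1, 1.0]])  # two input channels ("eyes")
        for _ in range(5000):
            w, theta = bcm_step(w, patterns[rng.integers(2)], theta)
        # w typically ends up selective for one pattern, mirroring the selectivity results.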

  2. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351

  3. Video watermarking with empirical PCA-based decoding.

    PubMed

    Khalilian, Hanieh; Bajic, Ivan V

    2013-12-01

    A new method for video watermarking is presented in this paper. In the proposed method, data are embedded in the LL subband of wavelet coefficients, and decoding is performed based on the comparison among the elements of the first principal component resulting from empirical principal component analysis (PCA). The locations for data embedding are selected such that they offer the most robust PCA-based decoding. Data are inserted in the LL subband in an adaptive manner based on the energy of high frequency subbands and visual saliency. Extensive testing was performed under various types of attacks, such as spatial attacks (uniform and Gaussian noise and median filtering), compression attacks (MPEG-2, H.263, and H.264), and temporal attacks (frame repetition, frame averaging, frame swapping, and frame rate conversion). The results show that the proposed method offers improved performance compared with several methods from the literature, especially under additive noise and compression attacks.
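
    The embedding domain is easy to reproduce: a one-level 2D discrete wavelet transform of each frame yields the LL subband, and an empirical PCA across frames gives a first principal component of the kind used in decoding. A minimal sketch with PyWavelets and NumPy (the coefficient grouping and the actual bit-decoding rule are the paper's and are omitted here):

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        frames = rng.normal(size=(8, 64, 64))  # stand-in for luminance frames

        # LL subband of each frame (one-level Haar DWT).
        ll = np.stack([pywt.dwt2(f, "haar")[0] for f in frames])  # (8, 32, 32)

        # Empirical PCA across frames: first principal component of the LL coefficients.
        X = ll.reshape(len(ll), -1)
        Xc = X - X.mean(axis=0, keepdims=True)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        first_pc = vt[0]  # decoding compares elements of this component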

  4. Examination of the hierarchical structure of the brief COPE in a French sample: empirical and theoretical convergences.

    PubMed

    Doron, Julie; Trouillet, Raphaël; Gana, Kamel; Boiché, Julie; Neveu, Dorine; Ninot, Grégory

    2014-01-01

    This study aimed to determine whether the various factors of coping as measured by the Brief COPE could be integrated into a more parsimonious hierarchical structure. To identify a higher structure for the Brief COPE, several measurement models based on prior theoretical and hierarchical conceptions of coping were tested. First, confirmatory factor analysis (CFA) results revealed that the Brief COPE's 14 original factors could be represented more parsimoniously with 5 higher order dimensions: problem-solving, support-seeking, avoidance, cognitive restructuring, and distraction (N = 2,187). Measurement invariance across gender was also shown. Second, results provided strong support for the cross-validation and the concurrent validity of the hierarchical structure of the Brief COPE (N = 584). Results indicated statistically significant correlations between Brief COPE factors and trait anxiety and perceived stress. Limitations and theoretical and methodological implications of these results are discussed.

  5. Empirically based device modeling of bulk heterojunction organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia

    2013-10-01

    An empirically based, open source, optoelectronic model is constructed to accurately simulate organic photovoltaic (OPV) devices. Bulk heterojunction OPV devices based on a new low band gap dithienothiophene-diketopyrrolopyrrole donor polymer (P(TBT-DPP)) are blended with PC70BM and processed under various conditions, with efficiencies up to 4.7%. The mobilities of electrons and holes, bimolecular recombination coefficients, exciton quenching efficiencies in donor and acceptor domains and optical constants of these devices are measured and input into the simulator to yield photocurrent with less than 7% error. The results from this model not only show carrier activity in the active layer but also elucidate new routes of device optimization by varying donor-acceptor composition as a function of position. Sets of high and low performance devices are investigated and compared side-by-side.

  6. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
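
    The reduction described above amounts to an ordinary least-squares fit of a second-degree polynomial surface per joint axis; a minimal sketch follows (the exact regressor set in the original table is an assumption here, not documented in the abstract):

        import numpy as np

        def fit_torque_surface(position, velocity, torque):
            """Fit torque ~ c0 + c1*p + c2*p^2 + c3*v + c4*v^2 by least squares."""
            p, v, t = map(np.asarray, (position, velocity, torque))
            X = np.column_stack([np.ones_like(p), p, p**2, v, v**2])
            coef, *_ = np.linalg.lstsq(X, t, rcond=None)
            return coef

        def predict_torque(coef, p, v):
            """Evaluate the fitted polynomial at one position/velocity pair."""
            return coef @ np.array([1.0, p, p**2, v, v**2])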

  7. Empirically and theoretically determined spatial and temporal variability of the Late Holocene sea level in the South-Central Pacific (Invited)

    NASA Astrophysics Data System (ADS)

    Eisenhauer, A.; Rashid, R. J.; Hallmann, N.; Stocchi, P.; Fietzke, J.; Camoin, G.; Vella, C.; Samankassou, E.

    2013-12-01

    We present U/Th dated fossil corals which were collected from reef platforms on three islands (Moorea, Huahine and Bora Bora) of the Society Islands, French Polynesia. In particular, U/Th-dated fossil microatolls precisely constrain the timing and amplitude of sea-level variations at and after the 'Holocene Sea Level Maximum, HSLM', because microatolls grow close to or even directly at the current sea-level position. We found that sea level reached a subsidence-corrected position of at least ~1.5 m above present sea level (apsl) at ~5.4 ka before present (BP) relative to Huahine island and a maximum amplitude of at least ~2.0 m apsl at ~2.0 ka BP relative to Moorea. Between 5.4 and 2 ka, minimum sea level oscillated between 1.5 and 2 m for ~3 ka, but then declined to the present position after ~2 ka BP. Based on statistical arguments on the coral age distribution, the HSLM is constrained to an interval of 3.5±0.8 ka. Former studies being in general accord with our data show that sea level in French Polynesia was ~1 m higher than present between 5,000 and 1,250 yrs BP and that a highstand was reached between 2,000 and 1,500 yrs BP (Pirazzoli and Montaggioni, 1988) and persisted until 1,200 yrs BP in the Tuamotu Archipelago (Pirazzoli and Montaggioni, 1986). Modeling of Late Holocene sea-level rise performed during the course of this study, taking glacio-isostatic adjustment and the ocean syphoning effect into account, predicts a Late Holocene sea-level highstand of ~1 m apsl at ~4 ka BP for Bora Bora, which is in general agreement with the statistical interpretation of our empirical data. However, the modeled HSLM amplitude of ~1 m apsl is considerably smaller than predicted by the empirical data indicating amplitudes of more than 2 m. Furthermore, the theoretical model predicts a continuously falling sea level after ~4 ka to the present. This is in contrast to the empirical data indicating a sea level remaining above at least ~1 m apsl between 5 ka and 2 ka then followed by a certain

  8. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    ERIC Educational Resources Information Center

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  9. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies

    PubMed Central

    2016-01-01

    Background: Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers’ activities and their contextual components or constructs to pursue these health-related goals. Establishing such a framework is important because it is the first step toward fully operationalizing health SQ. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers’ health systematically. Objective: The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? Methods: A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. Results: The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and

  10. Empirical Likelihood-Based ANOVA for Trimmed Means

    PubMed Central

    Velina, Mara; Valeinis, Janis; Greco, Luca; Luta, George

    2016-01-01

    In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. PMID:27690063
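
    For reference, the two-sample version of Yuen's test that the recommended procedure generalizes can be sketched directly from trimmed means and winsorized variances (a minimal illustration following the standard formulation; the EL ANOVA construction itself is in the paper):

        import numpy as np
        from scipy import stats

        def yuen_two_sample(x, y, trim=0.2):
            """Yuen's t-test comparing two population trimmed means."""
            def d(a):
                a = np.sort(np.asarray(a, float))
                n = len(a)
                g = int(np.floor(trim * n))
                w = a.copy()
                w[:g], w[n - g:] = a[g], a[n - g - 1]  # winsorize the tails
                h = n - 2 * g                          # effective sample size
                return np.var(w, ddof=1) * (n - 1) / (h * (h - 1)), h

            dx, hx = d(x)
            dy, hy = d(y)
            t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(dx + dy)
            df = (dx + dy) ** 2 / (dx**2 / (hx - 1) + dy**2 / (hy - 1))
            return t, 2 * stats.t.sf(abs(t), df)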

  11. Methods for combining a theoretical and an empirical approach in modelling pressure and flow control valves for CAE-programs for fluid power circuits

    NASA Astrophysics Data System (ADS)

    Handroos, Heikki

    An analytical mathematical model for a fluid power valve uses equations based on physical laws. The parameters consist of physical coefficients, dimensions of the internal elements, spring constants, etc., which are not provided by the component manufacturers. The valve has to be dismantled in order to determine their values. Each model applies only to a particular type of valve construction, and there are a large number of parameters. This is a major common problem in computer aided engineering (CAE) programs for fluid power circuits. Methods for solving this problem by combining a theoretical and an empirical approach are presented. Analytical models for single stage pressure and flow control valves are brought into forms which contain fewer parameters, whose values can be determined from measured characteristic curves. The least squares criterion is employed to identify the parameter values describing the steady state of a valve. The steady state characteristic curves required for this identification are quite often provided by the manufacturers. The parameters describing the dynamics of a valve are determined by a simple noncomputational method based on dynamic characteristic curves that can be easily measured. The importance of the identification accuracy of the different parameters of the single stage pressure relief valve model is compared using a parameter sensitivity analysis method. A new comparison method, called the relative mean value criterion, is used to compare the influences of variations in the different parameters on a nominal dynamic response.
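
    As an illustration of the steady-state identification step, here is a least-squares fit of a reduced-parameter relief-valve characteristic to points read off a manufacturer's curve (the functional form, parameter names, and data below are hypothetical, chosen only to show the technique):

        import numpy as np
        from scipy.optimize import curve_fit

        def steady_flow(dp, c, p_crack):
            """Assumed characteristic: square-root flow above a cracking pressure."""
            return c * np.sqrt(np.clip(dp - p_crack, 0.0, None))

        dp = np.array([5.0, 6.0, 8.0, 10.0, 14.0, 18.0])  # pressure drop [bar]
        q = np.array([0.0, 2.1, 4.0, 5.0, 6.4, 7.4])      # flow [l/min]
        (c, p_crack), _ = curve_fit(steady_flow, dp, q, p0=[2.0, 4.5])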

  12. Empirically based device modeling of bulk heterojunction organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia

    2013-04-01

    We develop an empirically based optoelectronic model to accurately simulate the photocurrent in organic photovoltaic (OPV) devices with novel materials, including bulk heterojunction OPV devices based on a new low band gap dithienothiophene-DPP donor polymer, P(TBT-DPP), blended with PC70BM at various donor-acceptor weight ratios and solvent compositions. Our devices exhibit power conversion efficiencies ranging from 1.8% to 4.7% at AM 1.5G. Electron and hole mobilities are determined using space-charge limited current measurements. Bimolecular recombination coefficients are both analytically calculated using slowest-carrier limited Langevin recombination and measured using an electro-optical pump-probe technique. Exciton quenching efficiencies in the donor and acceptor domains are determined from photoluminescence spectroscopy. In addition, dielectric and optical constants are experimentally determined. The photocurrent and its bias dependence, simulated with this model from the measured parameters, show less than 7% error with respect to the experimental photocurrent (whether the experimentally or the semi-analytically determined recombination coefficient is used). Free carrier generation and recombination rates are modeled as a function of position in the active layer at various applied biases. These results show that while free carrier generation is maximized in the center of the device, free carrier recombination is most dominant near the electrodes, even in high performance devices. Such knowledge of carrier activity is essential for the optimization of the active layer by enhancing light trapping and minimizing recombination. Our simulation program is intended to be freely distributed for use in laboratories fabricating OPV devices.
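
    Two of the measured ingredients have compact standard forms worth recalling (given here as the textbook expressions; the paper's exact implementation may differ). The space-charge-limited current of a single-carrier device of thickness L follows the Mott-Gurney law, and the Langevin recombination coefficient is set by the carrier mobilities and the permittivity, with the slowest-carrier-limited variant replacing the mobility sum by the smaller mobility:

        J_{\mathrm{SCLC}} = \frac{9}{8}\,\varepsilon_0 \varepsilon_r \mu\, \frac{V^2}{L^3},
        \qquad
        \beta_{\mathrm{Langevin}} = \frac{q\,(\mu_n + \mu_p)}{\varepsilon_0 \varepsilon_r}
        \;\rightarrow\;
        \beta \approx \frac{q\,\min(\mu_n, \mu_p)}{\varepsilon_0 \varepsilon_r}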

  13. MODELING OF 2LIBH4 PLUS MGH2 HYDROGEN STORAGE SYSTEM ACCIDENT SCENARIOS USING EMPIRICAL AND THEORETICAL THERMODYNAMICS

    SciTech Connect

    James, C; David Tamburello, D; Joshua Gray, J; Kyle Brinkman, K; Bruce Hardy, B; Donald Anton, D

    2009-04-01

    It is important to understand and quantify the potential risk resulting from accidental environmental exposure of condensed phase hydrogen storage materials under differing environmental exposure scenarios. This paper describes a modeling and experimental study with the aim of predicting consequences of the accidental release of 2LiBH4+MgH2 from hydrogen storage systems. The methodology and results developed in this work are directly applicable to any solid hydride material and/or accident scenario using appropriate boundary conditions and empirical data. The ability to predict hydride behavior for hypothesized accident scenarios facilitates an assessment of the risk associated with the utilization of a particular hydride. To this end, an idealized finite volume model was developed to represent the behavior of dispersed hydride from a breached system. Semiempirical thermodynamic calculations and substantiating calorimetric experiments were performed in order to quantify the energy released, energy release rates and to quantify the reaction products resulting from water and air exposure of a lithium borohydride and magnesium hydride combination. The hydrides, LiBH4 and MgH2, were studied individually in the as-received form and in the 2:1 'destabilized' mixture. Liquid water hydrolysis reactions were performed in a Calvet calorimeter equipped with a mixing cell using neutral water. Water vapor and oxygen gas phase reactivity measurements were performed at varying relative humidities and temperatures by modifying the calorimeter and utilizing a gas circulating flow cell apparatus. The results of these calorimetric measurements were compared with standardized United Nations (UN) based test results for air and water reactivity and used to develop quantitative kinetic expressions for hydrolysis and air oxidation in these systems. Thermodynamic parameters obtained from these tests were then input into a computational fluid dynamics model to
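
    The hydrolysis chemistry being quantified can be summarized by the idealized overall reactions (standard stoichiometries assumed here; actual product distributions depend on exposure conditions, which is what the calorimetry resolves):

        \mathrm{LiBH_4 + 2\,H_2O \rightarrow LiBO_2 + 4\,H_2}, \qquad
        \mathrm{MgH_2 + 2\,H_2O \rightarrow Mg(OH)_2 + 2\,H_2}

    Both reactions are exothermic and release hydrogen gas, which is why the energy release rates, and not just the totals, matter for the accident scenarios modeled.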

  14. PDE-based nonlinear diffusion techniques for denoising scientific and industrial images: an empirical study

    NASA Astrophysics Data System (ADS)

    Weeratunga, Sisira K.; Kamath, Chandrika

    2002-05-01

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, we focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. We complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. We explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. We also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. Our empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
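
    The canonical member of the family studied here is the Perona-Malik scheme, ∂u/∂t = div(g(|∇u|)∇u); below is a minimal explicit 4-neighbour implementation (parameter values are illustrative, and dt ≤ 0.25 keeps the explicit update stable):

        import numpy as np

        def perona_malik(img, n_iter=50, dt=0.2, k=10.0):
            """Explicit Perona-Malik diffusion with g(s) = exp(-(s/k)^2)."""
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / k) ** 2)
            for _ in range(n_iter):
                # Differences to the four nearest neighbours.
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Edges (large gradients) diffuse little; flat regions smooth out.
                u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u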

  15. Phospholipid-based nonlamellar mesophases for delivery systems: bridging the gap between empirical and rational design.

    PubMed

    Martiel, Isabelle; Sagalowicz, Laurent; Mezzenga, Raffaele

    2014-07-01

    Phospholipids are ubiquitous cell membrane components and relatively well-accepted ingredients due to their natural origin. Phosphatidylcholine (PC) in particular offers a promising alternative to monoglycerides for lyotropic liquid crystalline (LLC) delivery system applications in the food, cosmetics and pharmaceutical industries, provided its strong tendency to form zero-mean curvature lamellar mesophases in water can be overcome. Higher negative curvatures are usually reached through the addition of a third lipid component, forming a ternary diagram phospholipid/water/oil. The initial part of this work summarizes the potential advantages and the challenges of phospholipid-based delivery system applications. In the next part, various ternary PC/water/oil systems are discussed, with a special emphasis on the PC/water/cyclohexane and PC/water/α-tocopherol systems. We report that R-(+)-limonene has a quantitatively similar effect as cyclohexane. The last part is devoted to the theoretical interpretation of the observed phase behaviors. A fruitful parallel is drawn with PC polymer-like reverse micelles, leading to a thermodynamic description in terms of interfacial bending energy. Investigations at the molecular level are reviewed to help in bridging the empirical and theoretical approaches. Predictive rules are finally derived from this wide-ranging overview, thereby opening the way to a future rational design of PC-based LLC delivery systems.
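
    The bending-energy description invoked above is, in its standard Helfrich form (quoted here as the textbook expression, with c0 the spontaneous curvature that oil additives shift toward more negative values, driving the lamellar-to-nonlamellar transitions):

        E_{\mathrm{bend}} = \int \left[ \frac{\kappa}{2}\,(c_1 + c_2 - 2c_0)^2 + \bar{\kappa}\,c_1 c_2 \right] \mathrm{d}A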

  16. Meaningful learning: theoretical support for concept-based teaching.

    PubMed

    Getha-Eby, Teresa J; Beery, Theresa; Xu, Yin; O'Brien, Beth A

    2014-09-01

    Novice nurses’ inability to transfer classroom knowledge to the bedside has been implicated in adverse patient outcomes, including death. Concept-based teaching is a pedagogy found to improve knowledge transfer. Concept-based teaching emanates from a constructivist paradigm of teaching and learning and can be implemented most effectively when the underlying theory and principles are applied. Ausubel’s theory of meaningful learning and its construct of substantive knowledge integration provides a model to help educators to understand, implement, and evaluate concept-based teaching. Contemporary findings from the fields of cognitive psychology, human development, and neurobiology provide empirical evidence of the relationship between concept-based teaching, meaningful learning, and knowledge transfer. This article describes constructivist principles and meaningful learning as they apply to nursing pedagogy.

  17. Segmented Labor Markets: A Review of the Theoretical and Empirical Literature and Its Implication for Educational Planning.

    ERIC Educational Resources Information Center

    Carnoy, Martin

    The study reviews orthodox theories of labor markets, presents new formulations of segmentation theory, and provides empirical tests of segmentation in the United States and several developing nations. Orthodox labor market theory views labor as being paid for its contribution to production and that investment in education and vocational training…

  18. Empirical estimates and theoretical predictions of the shorting factor for the THEMIS double-probe electric field instrument

    NASA Astrophysics Data System (ADS)

    Califf, S.; Cully, C. M.

    2016-07-01

    Double-probe electric field measurements on board spacecraft present significant technical challenges, especially in the inner magnetosphere where the ambient plasma characteristics can vary dramatically and alter the behavior of the instrument. We explore the shorting factor for the Time History of Events and Macroscale Interactions during Substorms electric field instrument, which is a scale factor error on the measured electric field due to coupling between the sensing spheres and the long wire booms, using both an empirical technique and through simulations with varying levels of fidelity. The empirical data and simulations both show that there is effectively no shorting when the spacecraft is immersed in high-density plasma deep within the plasmasphere and that shorting becomes more prominent as plasma density decreases and the Debye length increases outside the plasmasphere. However, there is a significant discrepancy between the data and theory for the shorting factor in low-density plasmas: the empirical estimate indicates ~0.7 shorting for long Debye lengths, but the simulations predict a shorting factor of ~0.94. This paper systematically steps through the empirical and modeling methods leading to the disagreement with the intention of motivating further study on the topic.

  19. Fleet Fatality Risk and its Sensitivity to Vehicle Mass Change in Frontal Vehicle-to-Vehicle Crashes, Using a Combined Empirical and Theoretical Model.

    PubMed

    Shi, Yibing; Nusholtz, Guy S

    2015-11-01

    The objective of this study is to analytically model the fatality risk in frontal vehicle-to-vehicle crashes of the current vehicle fleet, and its sensitivity to vehicle mass change. A model is built upon an empirical risk ratio-mass ratio relationship from field data and a theoretical mass ratio-velocity change ratio relationship dictated by conservation of momentum. The fatality risk of each vehicle is averaged over the closing velocity distribution to arrive at the mean fatality risks. The risks of the two vehicles are summed and averaged over all possible crash partners to find the societal mean fatality risk associated with a subject vehicle of a given mass from a fleet specified by a mass distribution function. Based on risk exponent and mass distribution from a recent fleet, the subject vehicle mean fatality risk is shown to increase, while at the same time that for the partner vehicles decreases, as the mass of the subject vehicle decreases. The societal mean fatality risk, the sum of these, incurs a penalty with respect to a fleet with complete mass equality. This penalty reaches its minimum (~8% for the example fleet) for crashes with a subject vehicle whose mass is close to the fleet mean mass. The sensitivity, i.e., the rate of change of the societal mean fatality risk with respect to the mass of the subject vehicle is assessed. Results from two sets of fully regression-based analyses, Kahane (2012) and Van Auken and Zellner (2013), are approximately compared with the current result. The general magnitudes of the results are comparable, but differences exist at a more detailed level. The subject vehicle-oriented societal mean fatality risk is averaged over all possible subject vehicle masses of a given fleet to obtain the overall mean fatality risk of the fleet. It is found to increase approximately linearly at a rate of about 0.8% for each 100 lb decrease in mass of all vehicles in the fleet.
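
    The theoretical half of the model is conservation of momentum: in an idealized perfectly plastic frontal crash with closing velocity v_c (an assumption made for this illustration), each vehicle's velocity change is set by the partner's mass share, so the velocity-change ratio is the inverse mass ratio; combined with an empirical power law for risk as a function of velocity change, this yields the risk-ratio structure the paper builds on (exponent k taken from field data):

        \Delta v_1 = \frac{m_2}{m_1 + m_2}\, v_c, \qquad
        \frac{\Delta v_1}{\Delta v_2} = \frac{m_2}{m_1}, \qquad
        \frac{r_1}{r_2} \propto \left(\frac{m_2}{m_1}\right)^{k}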

  20. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically-adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically-derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate, unchecked, with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. Summary The "empirical turn" in bioethics signals a need for

  1. Why Culture Matters: An Empirically-Based Pre-Deployment Training Program

    DTIC Science & Technology

    2005-09-01

    Johnston 1995; Monaghan and Just 2000; Salzman 2001; Lichbach and Zuckerman 2002; Wedeen 2002). Anthropologist Clifford Geertz (2000) agrees that … analysis of culture should be more descriptive or narrative in nature, rather than empirical. Geertz (2000) also took a multi-theoretical approach and …

  2. On a goodness-of-fit between theoretical hypsometric curve and its empirical equivalents derived for various depth bins from 30 arc-second GEBCO bathymetry

    NASA Astrophysics Data System (ADS)

    Włosińska, M.; Niedzielski, T.; Priede, I. G.; Migoń, P.

    2012-04-01

    The poster reports ongoing investigations into hypsometric curve modelling and its implications for sea level change. Numerous large-scale geodynamic phenomena, including global tectonics and the related sea level changes, are well described by a hypsometric curve that quantifies how the area of sea floor varies with depth. Although the notion of a hypsometric curve is rather simple, it is difficult to provide a reasonable theoretical model that fits an empirical curve. An analytical equation for a hypsometric curve is well known, but its goodness-of-fit to an empirical one is far from perfect. Such limited accuracy may result either from not entirely adequate assumptions and concepts underlying the theoretical hypsometric curve, or from rather poorly modelled global bathymetry. Recent progress in obtaining accurate data on sea floor topography is due to subsea surveying and remote sensing. There are bathymetric datasets, including the Global Bathymetric Charts of the Oceans (GEBCO), that provide a global framework for hypsometric curve computation. The recent GEBCO bathymetry - a gridded dataset consisting of a sea-floor topography raster with global coverage at a spatial resolution of 30 arc-seconds - can be analysed to verify the depth-area relationship and to re-evaluate classical models for sea level change in geological time. Processing of the geospatial data is feasible on the basis of modern powerful tools provided by Geographic Information Systems (GIS) and automated with Python, a programming language that allows the user to utilise the GIS geoprocessor.
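
    As a rough illustration of how an empirical hypsometric curve is computed from a gridded bathymetry product, the sketch below bins ocean depths with latitude-weighted cell areas. The synthetic grid stands in for GEBCO (real use would read the 30 arc-second GEBCO file instead), and the bin width is an arbitrary choice.

        import numpy as np

        def hypsometric_curve(depth_grid, lat, depth_bins):
            # Fraction of sea-floor area per depth bin, weighting each cell
            # by cos(latitude) to account for meridian convergence.
            weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(depth_grid)
            sea = depth_grid < 0                      # ocean cells only
            area, _ = np.histogram(-depth_grid[sea], bins=depth_bins,
                                   weights=weights[sea])
            return np.cumsum(area) / area.sum()       # cumulative area fraction

        # Synthetic stand-in for a GEBCO-like raster.
        lat = np.linspace(-89.9, 89.9, 360)
        rng = np.random.default_rng(2)
        depth = rng.uniform(-11000, 1000, (lat.size, 720))
        bins = np.arange(0, 11001, 250)               # depth bins, m
        print(hypsometric_curve(depth, lat, bins)[:5])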

  3. E-learning in engineering education: a theoretical and empirical study of the Algerian higher education institution

    NASA Astrophysics Data System (ADS)

    Benchicou, Soraya; Aichouni, Mohamed; Nehari, Driss

    2010-06-01

    Technology-mediated education or e-learning is growing globally both in scale and delivery capacity due to the large diffusion of the ubiquitous information and communication technologies (ICT) in general and the web technologies in particular. This statement has not yet been fully supported by research, especially in developing countries such as Algeria. The purpose of this paper was to identify directions for addressing the needs of academics in higher education institutions in Algeria in order to adopt the e-learning approach as a strategy to improve quality of education. The paper will report results of an empirical study that measures the readiness of the Algerian higher education institutions towards the implementation of ICT in the educational process and the attitudes of faculty members towards the application of the e-learning approach in engineering education. Three main objectives were targeted, namely: (a) providing an initial evaluation of faculty members' attitudes and perceptions towards web-based education; (b) reporting on their perceived requirements for implementing e-learning in university courses; (c) providing an initial input for a collaborative process of developing an institutional strategy for e-learning. Statistical analysis of the survey results indicates that the Algerian higher education institution, which adopted the Licence-Master-Doctorate (LMD) educational system, is facing a big challenge to take advantage of emerging technological innovations and the advent of e-learning to further develop its teaching programmes and to enhance the quality of education in engineering fields. The successful implementation of this modern approach is shown to depend largely on a set of critical success factors that would include: 1. The extent to which the institution will adopt a formal and official e-learning strategy. 2. The extent to which faculty members will adhere and adopt this strategy and develop ownership of the various measures in the

  4. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original algorithmic version of EMD, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite this effort, existing 2-D versions of EMD perform poorly and are very time consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore described in detail. The approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data, and results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
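
    For orientation, the sketch below shows one step of the classic algorithmic "sifting process" in 1-D, the step that the PDE-based approach replaces with a diffusion estimate of the mean envelope. Extrema detection and boundary handling are deliberately simplified assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelextrema

        def sift_once(x, t):
            # One sifting step of classic 1-D EMD: subtract the mean of the
            # upper and lower cubic-spline envelopes from the signal.
            # Boundary treatment is omitted for brevity.
            maxima = argrelextrema(x, np.greater)[0]
            minima = argrelextrema(x, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                return x, True                        # monotone residue: stop
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - 0.5 * (upper + lower), False

        t = np.linspace(0.0, 1.0, 2000)
        x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        h, done = sift_once(x, t)                     # first proto-IMF iterate
        print("monotone residue:", done)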

  5. MAIS: An Empirically-Based Intelligent CBI System.

    ERIC Educational Resources Information Center

    Christensen, Dean L.; Tennyson, Robert D.

    The goal of the programmatic research program for the Minnesota Adaptive Instructional System (MAIS), an intelligent computer-assisted instruction system, is to empirically investigate generalizable instructional variables and conditions that improve learning through the use of adaptive instructional strategies. Research has been initiated in the…

  6. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal.
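
    The kind of dimensionless-number model described above is often written as a power-law correlation and fitted by log-linear least squares. The sketch below assumes the common form Sh = a * Re^b * Sc^c with made-up data; the paper's actual dimensionless groups and fitted constants are not reproduced here.

        import numpy as np

        # Hypothetical lab data: Reynolds and Schmidt numbers with measured
        # Sherwood numbers (all values illustrative).
        Re = np.array([120.0, 250.0, 400.0, 650.0, 900.0])
        Sc = np.array([600.0, 600.0, 800.0, 800.0, 1000.0])
        Sh = np.array([8.1, 12.9, 17.5, 24.0, 30.2])

        # Taking logs turns Sh = a * Re**b * Sc**c into a linear model.
        A = np.column_stack([np.ones_like(Re), np.log(Re), np.log(Sc)])
        coef, *_ = np.linalg.lstsq(A, np.log(Sh), rcond=None)
        a, b, c = np.exp(coef[0]), coef[1], coef[2]
        print(f"Sh ~= {a:.3f} * Re^{b:.2f} * Sc^{c:.2f}")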

  7. Theoretical and empirical investigations of KCl:Eu²⁺ for nearly water-equivalent radiotherapy dosimetry

    SciTech Connect

    Zheng Yuanshui; Han Zhaohui; Driewer, Joseph P.; Low, Daniel A.; Li, H. Harold

    2010-01-15

    Purpose: The low effective atomic number, reusability, and other computed radiography-related advantages make europium-doped potassium chloride (KCl:Eu²⁺) a promising dosimetry material. The purpose of this study is to model KCl:Eu²⁺ point dosimeters with a Monte Carlo (MC) method and, using this model, to investigate the dose responses of two-dimensional (2D) KCl:Eu²⁺ storage phosphor films (SPFs). Methods: KCl:Eu²⁺ point dosimeters were irradiated using a 6 MV beam at four depths (5-20 cm) for each of five square field sizes (5x5-25x25 cm²). The dose measured by KCl:Eu²⁺ was compared to that measured by an ionization chamber to obtain the magnitude of the energy-dependent dose measurement artifact. The measurements were simulated using DOSXYZnrc with phase space files generated by BEAMnrcMP. Simulations were also performed for KCl:Eu²⁺ films with thicknesses ranging from 1 μm to 1 mm. The work function of the prototype KCl:Eu²⁺ material was determined by comparing the sensitivity of a 150 μm thick KCl:Eu²⁺ film to a commercial BaFBr₀.₈₅I₀.₁₅:Eu²⁺-based SPF with a known work function. The work function was then used to estimate the sensitivity of a 1 μm thick KCl:Eu²⁺ film. Results: The simulated dose responses of prototype KCl:Eu²⁺ point dosimeters agree well with measurement data acquired by irradiating the dosimeters in the 6 MV beam with varying field size and depth. Furthermore, simulations with films demonstrate that an ultrathin KCl:Eu²⁺ film with thickness of the order of 1 μm would have a nearly water-equivalent dose response. The simulation results can be understood using classic cavity theories. Finally, preliminary experiments and theoretical calculations show that ultrathin KCl:Eu²⁺ film could provide excellent signal in a 1 cGy dose-to-water irradiation. Conclusions: In conclusion, the authors demonstrate that KCl:Eu²⁺-based dosimeters can be

  8. Theoretical bases for conducting certain technological processes in space

    NASA Technical Reports Server (NTRS)

    Okhotin, A. S.

    1979-01-01

    Dimensionless conservation equations are presented, together with the theoretical bases of fluid behavior aboard orbiting satellites, with application to the processes of manufacturing crystals in weightlessness. The small residual gravitational acceleration is shown to increase the separation of bands of varying concentration. Natural convection is shown to have no practical effect on crystallization from a liquid melt. Barodiffusion is also negligibly small in realistic conditions of weightlessness. The effects of surface tension become increasingly large, and suggestions are made for further research.

  9. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked; the only reason one can point to is the so-called "rationality" of today's society. Simple "common sense" leads us to the conclusion that in this case the empirical method is bound to fail, and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and for applied sciences like engineering - but not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that this gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach, and cannot be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  10. Comparison of empirical, semi-empirical and physically based models of soil hydraulic functions derived for bi-modal soils

    NASA Astrophysics Data System (ADS)

    Kutílek, M.; Jendele, L.; Krejča, M.

    2009-02-01

    The accelerated flow in soil pores is responsible for rapid transport of pollutants from the soil surface to deeper layers and down to groundwater. The term preferential flow is used for this type of transport. Our study was aimed at the preferential flow realized in the structural porous domain of bi-modal soils. We compared equations describing the soil water retention function h(θ) and the unsaturated hydraulic conductivity K(h), eventually K(θ), modified for bi-modal soils, where θ is the soil water content and h is the pressure head. An analytical description of a curve passing through experimental data sets of a soil hydraulic function is typical for empirical equations characterized by fitting parameters only. If the measured data are described by an equation derived from a physical model without using fitting parameters, we speak about a physically based model. There exist several transitional subtypes between empirical and physically based models; they are denoted as semi-empirical or semi-physical. We tested 3 models of the soil water retention function and 3 models of unsaturated conductivity using experimental data sets for sand, silt, silt loam and loam. All soils used are typified by the bi-modality of their porous systems. Model efficiency was estimated by RMSE (root mean square error) and by RSE (relative square error). The semi-empirical equation of the soil water retention function had the lowest values of RMSE and RSE and was qualified as "optimal" for the formal description of the shape of the water retention function. With this equation, the fit of the modelled data to the experiments was the closest. The fitting parameters smoothed the difference between the model and the physical reality of the soil porous media. The physical equation based upon the model of the pore size distribution did not allow exact fitting of the modelled data to the experimental data due to the rigidity and simplicity of the physical model when compared to the real soil
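
    A typical way to fit a bi-modal retention function of the kind compared above is a weighted sum of two van Genuchten curves. The sketch below fits such a curve to synthetic data with scipy; the functional form is a standard choice and not necessarily one of the three models tested in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def bimodal_vg(h, theta_r, theta_s, w, a1, n1, a2, n2):
            # Double van Genuchten retention curve: a weighted sum of two
            # uni-modal curves (structural and matrix pore domains).
            # h is the suction head (positive, cm).
            s1 = (1 + (a1 * h) ** n1) ** -(1 - 1 / n1)
            s2 = (1 + (a2 * h) ** n2) ** -(1 - 1 / n2)
            return theta_r + (theta_s - theta_r) * (w * s1 + (1 - w) * s2)

        # Synthetic retention data standing in for the measured soils.
        h = np.logspace(0, 4, 40)
        theta = bimodal_vg(h, 0.05, 0.45, 0.3, 0.1, 2.5, 0.005, 1.6)
        theta += np.random.default_rng(3).normal(0, 0.003, h.size)

        p0 = [0.05, 0.45, 0.5, 0.05, 2.0, 0.01, 1.5]
        lower = [0.0, 0.2, 0.0, 1e-4, 1.05, 1e-5, 1.05]
        upper = [0.2, 0.6, 1.0, 1.0, 8.0, 1.0, 8.0]
        popt, _ = curve_fit(bimodal_vg, h, theta, p0=p0, bounds=(lower, upper))
        rmse = np.sqrt(np.mean((bimodal_vg(h, *popt) - theta) ** 2))
        print(f"RMSE = {rmse:.4f}")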

  11. Mindfulness-based treatment to prevent addictive behavior relapse: theoretical models and hypothesized mechanisms of change.

    PubMed

    Witkiewitz, Katie; Bowen, Sarah; Harrop, Erin N; Douglas, Haley; Enkema, Matthew; Sedgwick, Carly

    2014-04-01

    Mindfulness-based treatments are growing in popularity among addiction treatment providers, and several studies suggest the efficacy of incorporating mindfulness practices into the treatment of addiction, including the treatment of substance use disorders and behavioral addictions (i.e., gambling). The current paper provides a review of theoretical models of mindfulness in the treatment of addiction and several hypothesized mechanisms of change. We provide an overview of mindfulness-based relapse prevention (MBRP), including session content, treatment targets, and client feedback from participants who have received MBRP in the context of empirical studies. Future research directions regarding operationalization and measurement, identifying factors that moderate treatment effects, and protocol adaptations for specific populations are discussed.

  12. Computer-Assisted Language Intervention Using Fast ForWord: Theoretical and Empirical Considerations for Clinical Decision-Making.

    ERIC Educational Resources Information Center

    Gillam, Ronald B.

    1999-01-01

    This article critiques the theoretical basis of the Fast ForWord program, a computer-assisted language intervention program for children with language-learning impairments. It notes undocumented treatment outcomes and questions the clinical methods associated with the procedures. Fifteen cautionary statements are provided that clinicians may want…

  13. The Importance of Emotion in Theories of Motivation: Empirical, Methodological, and Theoretical Considerations from a Goal Theory Perspective

    ERIC Educational Resources Information Center

    Turner, Julianne C.; Meyer, Debra K.; Schweinle, Amy

    2003-01-01

    Despite its importance to educational psychology, prominent theories of motivation have mostly ignored emotion. In this paper, we review theoretical conceptions of the relation between motivation and emotion and discuss the role of emotion in understanding student motivation in classrooms. We demonstrate that emotion is one of the best indicators…

  14. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)

  15. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience.

    PubMed

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O'Byrne, David

    2015-05-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research.

  16. [Adaptation and quality of life in anorectal malformation: empirical findings, theoretical concept, psychometric assessment, and cognitive-behavioral intervention].

    PubMed

    Noeker, Meinolf

    2010-01-01

    Anorectal malformations are inborn developmental defects that are associated with multiple functional impairments (especially incontinence) and psychosocial burden, with a major impact on body schema and self-esteem. Child psychology and psychiatry research is beginning to identify disorder-dependent and -independent risk and protective factors that predict the outcome of psychological adaptation and quality of life. The present paper analyses the interference of structural and functional disease parameters with the achievement of regular developmental tasks, presents a hypothetical conceptual framework concerning the development of psychological adaptation and quality of life in ARM, integrates findings from empirical research with the framework presented, and outlines strategies of psychological support from a cognitive-behavioural perspective within a multidisciplinary treatment approach to enhance medical, functional, and psychosocial quality of life.

  18. Viscoelastic shear properties of human vocal fold mucosa: theoretical characterization based on constitutive modeling.

    PubMed

    Chan, R W; Titze, I R

    2000-01-01

    The viscoelastic shear properties of human vocal fold mucosa (cover) were previously measured as a function of frequency [Chan and Titze, J. Acoust. Soc. Am. 106, 2008-2021 (1999)], but data were obtained only in a frequency range of 0.01-15 Hz, an order of magnitude below typical frequencies of vocal fold oscillation (on the order of 100 Hz). This study represents an attempt to extrapolate the data to higher frequencies based on two viscoelastic theories, (1) a quasilinear viscoelastic theory widely used for the constitutive modeling of the viscoelastic properties of biological tissues [Fung, Biomechanics (Springer-Verlag, New York, 1993), pp. 277-292], and (2) a molecular (statistical network) theory commonly used for the rheological modeling of polymeric materials [Zhu et al., J. Biomech. 24, 1007-1018 (1991)]. Analytical expressions of elastic and viscous shear moduli, dynamic viscosity, and damping ratio based on the two theories with specific model parameters were applied to curve-fit the empirical data. Results showed that the theoretical predictions matched the empirical data reasonably well, allowing for parametric descriptions of the data and their extrapolations to frequencies of phonation.
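
    Extrapolating low-frequency rheometric data to phonatory frequencies amounts to fitting a parametric modulus function and evaluating it at ~100 Hz. The sketch below uses a simple power law on synthetic data purely for illustration; it is not one of the two constitutive models used in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical elastic shear modulus data over 0.01-15 Hz.
        f = np.logspace(-2, np.log10(15), 12)             # Hz
        Gp = 12.0 * f ** 0.18                             # Pa, synthetic

        power_law = lambda freq, A, b: A * freq ** b
        (A, b), _ = curve_fit(power_law, f, Gp, p0=[10.0, 0.2])
        # Extrapolate the fitted curve to a typical phonation frequency.
        print(f"G'(100 Hz) ~= {power_law(100.0, A, b):.1f} Pa")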

  19. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation

    PubMed Central

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-01-01

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing strain/stress responses at the hot spots of a structure based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam of the kind commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided. PMID:27537889

  20. Landscape influences on dispersal behaviour: a theoretical model and empirical test using the fire salamander, Salamandra infraimmaculata.

    PubMed

    Kershenbaum, Arik; Blank, Lior; Sinai, Iftach; Merilä, Juha; Blaustein, Leon; Templeton, Alan R

    2014-06-01

    When populations reside within a heterogeneous landscape, isolation by distance may not be a good predictor of genetic divergence if dispersal behaviour, and therefore gene flow, depends on landscape features. Commonly used approaches linking landscape features to gene flow include the least cost path (LCP), random walk (RW), and isolation by resistance (IBR) models. However, none of these models is likely to be the most appropriate for all species and in all environments. We compared the performance of LCP, RW and IBR models of dispersal with the aid of simulations conducted on artificially generated landscapes. We also applied each model to empirical data on the landscape genetics of the endangered fire salamander, Salamandra infraimmaculata, in northern Israel, where conservation planning requires an understanding of the dispersal corridors. Our simulations demonstrate that wide dispersal corridors of low-cost environment facilitate dispersal in the IBR model, but inhibit dispersal in the RW model. In our empirical study, IBR explained the genetic divergence better than the LCP and RW models (partial Mantel correlation 0.413 for IBR, compared to 0.212 for LCP and 0.340 for RW). Overall dispersal cost in salamanders was also well predicted by the landscape features slope steepness (76%) and elevation (24%). We conclude that fire salamander dispersal is well characterised by IBR predictions. Together with our simulation findings, these results indicate that wide dispersal corridors facilitate, rather than hinder, salamander dispersal. Comparison of genetic data to dispersal model outputs can be a useful technique in inferring dispersal behaviour from population genetic data.

  1. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    PubMed

    Williamson, Ross S; Sahani, Maneesh; Pillow, Jonathan W

    2015-04-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
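
    The claimed equivalence can be checked numerically: for a simulated LNP neuron, the empirical single-spike information (computed from binned stimulus densities) should approximately equal the per-spike Poisson log-likelihood gain over a mean-rate model. All parameters below are arbitrary simulation choices.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000
        x = rng.normal(0, 1, n)                       # 1-D stimulus projection
        rate = np.exp(-1.0 + 1.2 * x)                 # LNP with exp nonlinearity
        spikes = rng.poisson(rate)
        n_sp = spikes.sum()

        # Empirical single-spike information from binned densities:
        # I = integral of P(x|spike) * log2[ P(x|spike) / P(x) ] dx
        bins = np.linspace(-4, 4, 60)
        p_x, _ = np.histogram(x, bins, density=True)
        p_x_sp, _ = np.histogram(x, bins, weights=spikes, density=True)
        mask = (p_x > 0) & (p_x_sp > 0)
        width = np.diff(bins)
        I_ss = np.sum(width[mask] * p_x_sp[mask] *
                      np.log2(p_x_sp[mask] / p_x[mask]))

        # Normalized Poisson log-likelihood gain over a mean-rate model
        # (constant log k! terms cancel in the difference).
        mean_rate = n_sp / n
        ll_gain = (np.sum(spikes * np.log(rate)) - rate.sum()
                   - (np.sum(spikes * np.log(mean_rate)) - n * mean_rate))
        print(I_ss, ll_gain / n_sp / np.log(2))       # approximately equal, bits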

  2. Training-based interventions in motor rehabilitation after stroke: theoretical and clinical considerations.

    PubMed

    Sterr, Annette

    2004-01-01

    Basic neuroscience research on brain plasticity, motor learning and recovery has stimulated new concepts in neurological rehabilitation. Combined with the development of set methodological standards in clinical outcome research, these findings have led to a double-paradigm shift in motor rehabilitation: (a) the move towards evidence-based procedures for the assessment of clinical outcome and the employment of disablement models to anchor outcome parameters, and (b) the introduction of practice-based concepts that are derived from testable models that specify treatment mechanisms. In this context, constraint-induced movement therapy (CIT) has played a catalytic role in taking motor rehabilitation forward into the scientific arena. As a theoretically founded and hypothesis-driven intervention, CIT research focuses on two main issues. The first issue is the assessment of long-term clinical benefits in an increasing range of patient groups, and the second issue is the investigation of neuronal and behavioural treatment mechanisms and their interactive contribution to treatment success. These studies are mainly conducted in the research environment and will eventually lead to increased treatment benefits for patients in standard health care. However, gradual but presumably more immediate benefits for patients may be achieved by introducing and testing derivates of the CIT concept that are more compatible with current clinical practice. Here, we summarize the theoretical and empirical issues related to the translation of research-based CIT work into the clinical context of standard health care.

  3. Empirical Mode Decomposition Based Features for Diagnosis and Prognostics of Systems

    DTIC Science & Technology

    2008-04-01

    Empirical Mode Decomposition Based Features for Diagnosis and Prognostics of Systems, by Hiralal Khatri, Kenneth Ranney, Kwok Tom, and Romeo…; Army Research Laboratory, Adelphi, MD 20783-1197; ARL-TR-4301; April 2008. Snippet references: …bearing fault diagnosis – their effectiveness and flexibilities. Journal of Vibration and Acoustics, July 2001, ASME; Staszewski, W. J., Structural…

  4. Landfill modelling in LCA - a contribution based on empirical data.

    PubMed

    Obersteiner, Gudrun; Binner, Erwin; Mostbauer, Peter; Salhofer, Stefan

    2007-01-01

    Landfills at various stages of development, depending on their age and location, can be found throughout Europe. The types of facility range from uncontrolled dumpsites to highly engineered facilities with leachate and gas management. In addition, some landfills are designed to receive untreated waste, while others can receive incineration residues (MSWI) or residues after mechanical biological treatment (MBT). The dimension, type and duration of the emissions from landfills depend on the quality of the disposed waste, the technical design, and the location of the landfill. Environmental impacts are produced by the leachate (heavy metals, organic loading), by emissions into the air (CH₄, hydrocarbons, halogenated hydrocarbons) and by the energy or fuel requirements for the operation of the landfill (SO₂ and NOₓ from the production of electricity from fossil fuels). Including landfilling in a life-cycle assessment (LCA) entails several methodological questions (multi-input process, site-specific influence, time dependency). Additionally, no experience is available with regard to the mid-term behaviour (decades) of the relatively new types of landfill (MBT landfill, landfill for residues from MSWI). The present paper focuses on two main issues concerning the modelling of landfills in LCA. Firstly, it is an acknowledged fact that emissions from landfills may prevail for a very long time, often thousands of years or longer; the choice of time frame in the LCA of landfilling may therefore clearly affect the results. Secondly, the reliability of results obtained through a life-cycle assessment depends on the availability and quality of Life Cycle Inventory (LCI) data; therefore the choice of the general approach, using a multi-input inventory tool versus empirical results, may also influence the results. In this paper the different approaches concerning time horizon and LCI will be introduced and discussed. In the application of empirical results, the presence of

  5. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
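
    One concrete instance of a nonparametric f-divergence estimator in this spirit is the Friedman-Rafsky statistic: build a Euclidean minimum spanning tree on the pooled samples and count cross-class edges. The sketch below, including the equal-prior Bayes-error bounds, is written from memory of the published result and should be treated as illustrative rather than as the paper's exact estimator.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import cdist

        def dp_divergence(X, Y):
            # MST-based divergence estimate: fewer cross-class MST edges
            # means better-separated samples and a divergence closer to 1.
            Z = np.vstack([X, Y])
            labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
            mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
            cross = np.sum(labels[mst.row] != labels[mst.col])
            n, m = len(X), len(Y)
            return max(0.0, 1.0 - cross * (n + m) / (2.0 * n * m))

        def bayes_error_bounds(D):
            # Bounds on the binary Bayes error, assuming equal class priors.
            return 0.5 * (1 - np.sqrt(D)), 0.5 * (1 - D)

        rng = np.random.default_rng(5)
        X = rng.normal(0.0, 1.0, (300, 2))
        Y = rng.normal(1.5, 1.0, (300, 2))
        D = dp_divergence(X, Y)
        print(D, bayes_error_bounds(D))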

  6. Computer-Assisted Language Intervention Using Fast ForWord®: Theoretical and Empirical Considerations for Clinical Decision-Making.

    PubMed

    Gillam, Ronald B

    1999-10-01

    A computer-assisted language intervention program called Fast ForWord® (Scientific Learning Corporation, 1998) has received a great deal of attention at professional meetings and in the popular media. Newspaper and magazine articles about this program contain statements like, "On average, after only 6 to 7 weeks of training, language-learning impaired children ages 4 to 12 showed improvement of more than one and a half years in speech processing and language ability." (Scientific Learning Corporation, 1997). Are the claims that are being made about this intervention approach just a matter of product promotion, or is this really a scientifically proven remedy for language-learning impairments? This article critiques the theoretical basis of Fast ForWord®, the documented treatment outcomes, and the clinical methods associated with the procedure. Fifteen cautionary statements are provided that clinicians may want to consider before they recommend Fast ForWord® intervention for the children they serve.

  7. Theoretical and empirical studies of impurity incorporation into beta-SiC thin films during epitaxial growth

    NASA Astrophysics Data System (ADS)

    Kim, H. J.; Davis, R. F.

    1986-11-01

    A theoretical determination of the vapor species present, and of their respective partial pressures, is made using the SOLGASMIX-PV program for the n-type dopants N and P and the p-type dopant B, under conditions used to grow monocrystalline beta-SiC thin films via CVD. The model shows that Al and P behave ideally, while B and N apparently interact with the C or Si in the SiC or fill normally unoccupied interstitial positions. The relationship between the carrier concentrations or the atomic concentrations and the partial pressure of the dopant source gases is linear and parallel. The more efficient n-type and p-type dopants, N and Al, have been used to produce what is suggested to be the first p-n junction diode in a beta-SiC film.

  8. Attachment-Based Family Therapy: A Review of the Empirical Support.

    PubMed

    Diamond, Guy; Russon, Jody; Levy, Suzanne

    2016-09-01

    Attachment-based family therapy (ABFT) is an empirically supported treatment designed to capitalize on the innate, biological desire for meaningful and secure relationships. The therapy is grounded in attachment theory and provides an interpersonal, process-oriented, trauma-focused approach to treating adolescent depression, suicidality, and trauma. Although a process-oriented therapy, ABFT offers a clear structure and road map to help therapists quickly address attachment ruptures that lie at the core of family conflict. Several clinical trials and process studies have demonstrated empirical support for the model and its proposed mechanism of change. This article provides an overview of the clinical model and the existing empirical support for ABFT.

  9. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate yet simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
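
    The workflow in the abstract (a Monte Carlo lookup table, then a fitted surface) can be mimicked with a generic surface fit. In the sketch below the lookup table is replaced by synthetic data, and the rational-function ansatz is an arbitrary assumption, not the empirical formula derived in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        # Stand-in for the MC lookup table: reflectance on a grid of
        # absorption (mua) and reduced scattering (musp) coefficients.
        mua = np.linspace(0.01, 1.0, 20)               # 1/mm
        musp = np.linspace(0.5, 3.0, 20)               # 1/mm
        MUA, MUSP = np.meshgrid(mua, musp)

        def model(X, a, b, c):
            mua, musp = X
            return a * musp / (musp + b * mua + c)     # illustrative surface

        R_table = model((MUA, MUSP), 0.48, 7.0, 0.35)  # synthetic "MC" data
        R_table += np.random.default_rng(6).normal(0, 0.002, R_table.shape)

        popt, _ = curve_fit(model, (MUA.ravel(), MUSP.ravel()),
                            R_table.ravel(), p0=[0.5, 5.0, 0.5])
        resid = model((MUA, MUSP), *popt) - R_table
        print("max |residual|:", np.abs(resid).max())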

  10. Personality traits and achievement motives: theoretical and empirical relations between the NEO Personality Inventory-Revised and the Achievement Motives Scale.

    PubMed

    Diseth, Age; Martinsen, Øyvind

    2009-04-01

    Theoretical and empirical relations between personality traits and motive dispositions were investigated by comparing the scores of 315 undergraduate psychology students on the NEO Personality Inventory-Revised and the Achievement Motives Scale. Analyses showed all NEO Personality Inventory-Revised factors except Agreeableness were significantly correlated with the motive for success and the motive to avoid failure. A structural equation model showed that motive for success was predicted by Extraversion, Openness, Conscientiousness, and Neuroticism (negative relation), and motive to avoid failure was predicted by Neuroticism and Openness (negative relation). Although both achievement motives were predicted by several personality factors, motive for success was most strongly predicted by Openness, and motive to avoid failure was most strongly predicted by Neuroticism. These findings extended previous research on the relations of personality traits and achievement motives and provided a basis for the discussion of motive dispositions in personality. The results also added to the construct validity of the Achievement Motives Scale.

  11. Deep in Data. Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    This paper describes progress toward developing a usable, standardized, empirical data-based software accuracy test suite using home energy consumption and building description data. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This could allow for modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data.

  12. Exploring the UMLS: a rough sets based theoretical framework.

    PubMed

    Srinivasan, P

    1999-01-01

    The Unified Medical Language System (UMLS) [1] has a unique and leading position in the evolution of thesauri and metathesauri. Features that set it apart are: its composition from more than fifty component health care vocabularies; the sophisticated UMLS ontology linking the Metathesaurus with structures such as the Semantic Network and the SPECIALIST lexicon; and the high level of social collaboration invested in its construction and growth. It is our thesis that in order to successfully harness such a complex vocabulary for text retrieval we need sophisticated methods derived from a deeper understanding of the UMLS system. Thus we propose a theoretical framework based on the theory of rough sets, that supports the systematic and exploratory investigation of the UMLS Metathesaurus for text retrieval. Our goal is to make it more feasible for individuals such as patients and health care professionals to access relevant information at the point of need.
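
    The core rough set machinery referred to above is easy to state in code: given an indiscernibility relation, a concept has a lower approximation (blocks wholly contained in it) and an upper approximation (blocks that intersect it). The sketch below is generic rough set bookkeeping with a toy UMLS-flavoured example, not the paper's actual framework.

        from itertools import groupby

        def rough_approximations(universe, equiv_key, target):
            # Lower and upper approximations of the target concept under the
            # indiscernibility relation induced by equiv_key.
            lower, upper = set(), set()
            for _, grp in groupby(sorted(universe, key=equiv_key), key=equiv_key):
                block = set(grp)
                if block <= target:
                    lower |= block        # wholly inside: certain members
                if block & target:
                    upper |= block        # overlaps: possible members
            return lower, upper

        # Toy example: documents grouped by an assigned UMLS semantic type.
        docs = [("d1", "Disease"), ("d2", "Disease"), ("d3", "Drug"),
                ("d4", "Drug"), ("d5", "Gene")]
        relevant = {("d1", "Disease"), ("d2", "Disease"), ("d3", "Drug")}
        lo, up = rough_approximations(docs, lambda d: d[1], relevant)
        print(sorted(lo), sorted(up))     # {d1,d2} vs {d1,d2,d3,d4}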

  13. Genetic load, inbreeding depression, and hybrid vigor covary with population size: An empirical evaluation of theoretical predictions.

    PubMed

    Lohr, Jennifer N; Haag, Christoph R

    2015-12-01

    Reduced population size is thought to have strong consequences for evolutionary processes as it enhances the strength of genetic drift. In its interaction with selection, this is predicted to increase the genetic load, reduce inbreeding depression, and increase hybrid vigor, and in turn affect phenotypic evolution. Several of these predictions have been tested, but comprehensive studies controlling for confounding factors are scarce. Here, we show that populations of Daphnia magna, which vary strongly in genetic diversity, also differ in genetic load, inbreeding depression, and hybrid vigor in a way that strongly supports theoretical predictions. Inbreeding depression is positively correlated with genetic diversity (a proxy for Ne ), and genetic load and hybrid vigor are negatively correlated with genetic diversity. These patterns remain significant after accounting for potential confounding factors and indicate that, in small populations, a large proportion of the segregation load is converted into fixed load. Overall, the results suggest that the nature of genetic variation for fitness-related traits differs strongly between large and small populations. This has large consequences for evolutionary processes in natural populations, such as selection on dispersal, breeding systems, ageing, and local adaptation.

  14. Empirical and theoretical dosimetry in support of whole body resonant RF exposure (100 MHz) in human volunteers.

    PubMed

    Allen, Stewart J; Adair, Eleanor R; Mylacraine, Kevin S; Hurt, William; Ziriax, John

    2003-10-01

    This study reports the dosimetry performed to support an experiment that measured physiological responses of volunteer human subjects exposed to the resonant frequency for a seated human adult at 100 MHz. Exposures were performed in an anechoic chamber which was designed to provide uniform fields for frequencies of 100 MHz or greater. A half wave dipole with a 90 degrees reflector was used to optimize the field at the subject location. The dosimetry plan required measurement of transmitter harmonics, stationary probe drift, field strengths as a function of distance, electric and magnetic field maps at 200, 225, and 250 cm from the dipole antenna, and specific absorption rate (SAR) measurements using a human phantom, as well as theoretical predictions of SAR with the finite difference time domain (FDTD) method. On each exposure test day, a measurement was taken at 225 cm on the beam centerline with a NBS E field probe to assure consistently precise exposures. A NBS 10 cm loop antenna was positioned 150 cm to the right, 100 cm above, and 60 cm behind the subject and was read at 5 min intervals during all RF exposures. These dosimetry measurements assured accurate and consistent exposures. FDTD calculations were used to determine SAR distribution in a seated human subject. This study reports the necessary dosimetry for work on physiological consequences of human volunteer exposures to 100 MHz.

  15. On the Theoretical Breadth of Design-Based Research in Education

    ERIC Educational Resources Information Center

    Bell, Philip

    2004-01-01

    Over the past decade, design experimentation has become an increasingly accepted mode of research appropriate for the theoretical and empirical study of learning amidst complex educational interventions as they are enacted in everyday settings. There is still a significant lack of clarity surrounding methodological and epistemological features of…

  16. Empirical and theoretical dosimetry in support of whole body radio frequency (RF) exposure in seated human volunteers at 220 MHz.

    PubMed

    Allen, Stewart J; Adair, Eleanor R; Mylacraine, Kevin S; Hurt, William; Ziriax, John

    2005-09-01

    This study reports the dosimetry performed to support an experiment that measured physiological responses of seated volunteer human subjects exposed to 220 MHz fields. Exposures were performed in an anechoic chamber which was designed to provide uniform fields for frequencies of 100 MHz or greater. A vertical half-wave dipole with a 90 degrees reflector was used to optimize the field at the subject's location. The vertically polarized E field was incident on the dorsal side of the phantoms and human volunteers. The dosimetry plan required measurement of stationary probe drift, field strengths as a function of distance, electric and magnetic field maps at 200, 225, and 250 cm from the dipole antenna, and specific absorption rate (SAR) measurements using a human phantom, as well as theoretical predictions of SAR with the finite difference time domain (FDTD) method. A NBS (National Bureau of Standards, now NIST, National Institute of Standards and Technology, Boulder, CO) 10 cm loop antenna was positioned 150 cm to the right, 100 cm above and 60 cm behind the subject (toward the transmitting antenna) and was read prior to each subject's exposure and at 5 min intervals during all RF exposures. Transmitter stability was determined by measuring plate voltage, plate current, screen voltage and grid voltage for the driver and final amplifiers before and at 5 min intervals throughout the RF exposures. These dosimetry measurements assured accurate and consistent exposures. FDTD calculations were used to determine SAR distribution in a seated human subject. This study reports the necessary dosimetry to precisely control exposure levels for studies of the physiological consequences of human volunteer exposures to 220 MHz.

  17. Annett's theory that individuals heterozygous for the right shift gene are intellectually advantaged: theoretical and empirical problems.

    PubMed

    McManus, I C; Shergill, S; Bryden, M P

    1993-11-01

    Annett & Manning (1989; 1990a,b) have proposed that left-handedness is maintained by a balanced polymorphism, whereby the rs+/- heterozygote manifests increased intellectual ability compared with the rs-/- and rs+/+ homozygotes. In this paper we demonstrate that Annett's method of dividing subjects into putative genotypes does not allow the rs+/- genotype to be compared with the rs-/- genotype within handedness groups. Our alternative method does allow heterozygous right-handers to be compared both with rs+/+ and rs-/- homozygotes. Using this method in undergraduates, we find no evidence that supposed heterozygotes are relatively more intellectually able than homozygotes on tests of verbal IQ, spatial IQ, diagrammatic IQ or vocabulary. Theoretical analysis of the balanced polymorphism hypothesis reveals additional limitations. Although estimation of the size of the heterozygote advantage suggests that it must be very large (21 or 45 IQ points) to explain the effects found by Annett & Manning, it nevertheless must be very small (3.4 IQ points) to be compatible with the known differences between right- and left-handers in social class and intelligence. Moreover, power analysis shows that the latter effect size is too small for Annett & Manning to have found effects in their studies. Additional power analyses show that studies looking for effects in groups of high intellectual ability, such as university students, are incapable of finding significant results, despite Annett claiming such effects. If the Annett & Manning paradigm does demonstrate differences in intellectual ability related to skill asymmetry, then those differences are unlikely to result from a balanced polymorphism, but instead probably reflect motivational or other differences between right-handers of high and low degrees of laterality.
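
    The power-analysis reasoning can be reproduced with standard tools. The sketch below converts the quoted IQ-point differences to Cohen's d assuming an IQ standard deviation of 15 (our assumption, not stated in the abstract) and asks how many subjects per group an independent-samples t test would need for 80% power.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for iq_points in (3.4, 21, 45):
            d = iq_points / 15.0                       # Cohen's d, assumed SD = 15
            n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
            print(f"{iq_points} IQ points (d={d:.2f}): "
                  f"~{n:.0f} subjects per group for 80% power")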

  18. Empirical and Theoretical Evidence for the Role of MgSO4 Contact Ion-Pairs in Thermochemical Sulfate Reduction

    NASA Astrophysics Data System (ADS)

    Ellis, G. S.; Zhang, T.; Ma, Q.; Tang, Y.

    2006-12-01

    While the process of thermochemical sulfate reduction (TSR) has been recognized by geochemists for nearly fifty years, it has proven extremely difficult to simulate in the laboratory under conditions resembling those encountered in nature. Published estimates of the kinetic parameters that describe the rate of the TSR reaction vary widely and are inconsistent with geologic observations. Consequently, the prediction of the hydrogen sulfide (H2S) generation potential of a reservoir prior to drilling remains a major challenge for the oil industry. New experimental and theoretical evidence indicate that magnesium plays a significant role in controlling the rate of TSR in petroleum reservoirs. A novel reaction pathway for TSR is proposed that involves the reduction of sulfate as aqueous MgSO4 contact ion-pairs prior to the H2S-catalyzed TSR mechanism that is generally accepted. Ab initio quantum chemical calculations have been applied to this model in order to locate a potential transition state and to determine the activation energy for the contact ion-pair reaction (56 kcal/mol). Detailed experimental work shows that the rate of TSR increases significantly with increasing concentration of H2S, which may help to explain why previous estimates of TSR activation energies were so divergent. Preliminary experimental evidence indicates that H2S catalysis of TSR is a multi-step process, involving the formation of labile organic sulfur compounds that, in turn, generate sulfur radicals upon thermal decomposition. A new conceptual model for understanding the process of TSR in geologic environments has been developed that involves an H2S-threshold concentration required to sustain rapid sulfate reduction rates. Although this approach appears to be more consistent with field observations than previous mechanisms, validation of this model requires detailed integration with other geologic data in basin models. These findings may explain the common association of H2S-rich petroleum
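
    The quoted activation energy translates into temperature sensitivity through the Arrhenius equation, k = A * exp(-Ea / (R*T)). In the sketch below the pre-exponential factor is a placeholder, so only rate ratios between temperatures are meaningful.

        import numpy as np

        R = 1.987e-3                  # gas constant, kcal/(mol K)
        Ea = 56.0                     # kcal/mol, from the calculations above
        A = 1.0e13                    # 1/s, illustrative placeholder

        def k(T_celsius):
            # Arrhenius rate constant at a given reservoir temperature.
            T = T_celsius + 273.15
            return A * np.exp(-Ea / (R * T))

        # Sensitivity of the TSR rate to reservoir temperature:
        print(k(150) / k(120))        # roughly two orders of magnitude faster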

  19. An Empirical Analysis of Knowledge Based Hypertext Navigation

    PubMed Central

    Snell, J.R.; Boyle, C.

    1990-01-01

    Our purpose is to investigate the effectiveness of knowledge-based navigation in a dermatology hypertext network. The chosen domain is a set of dermatology class notes implemented in Hypercard and SINS. The study measured the time, number of moves, and success rates for subjects to find solutions to ten questions. The subjects were required to navigate within a dermatology hypertext network in order to find the solutions to a question. Our results indicate that knowledge-based navigation can assist the user in finding information of interest in fewer node visits (moves) than with traditional button-based browsing or keyword searching. The time necessary to find an item of interest was lower for the traditional methods. There was no difference in success rates between the two test groups.

  20. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    ERIC Educational Resources Information Center

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…

  1. Empirically Based School Interventions Targeted at Academic and Mental Health Functioning

    ERIC Educational Resources Information Center

    Hoagwood, Kimberly E.; Olin, S. Serene; Kerker, Bonnie D.; Kratochwill, Thomas R.; Crowe, Maura; Saka, Noa

    2007-01-01

    This review examines empirically based studies of school-based mental health interventions. The review identified 64 out of more than 2,000 articles published between 1990 and 2006 that met methodologically rigorous criteria for inclusion. Of these 64 articles, only 24 examined both mental health "and" educational outcomes. The majority of…

  2. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially
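
    A stripped-down version of the combination described above (variance moderation plus label resampling) can be sketched as follows. The moderation formula and prior weight are simplified stand-ins for the published empirical Bayes machinery, and the data are synthetic.

        import numpy as np

        def moderated_t(x, y, d0=4.0):
            # Per-gene t statistics with variances shrunk toward the average
            # variance; d0 plays the role of the prior degrees of freedom.
            nx, ny = x.shape[1], y.shape[1]
            diff = x.mean(1) - y.mean(1)
            s2 = (x.var(1, ddof=1) * (nx - 1) + y.var(1, ddof=1) * (ny - 1)) \
                 / (nx + ny - 2)
            s2_mod = (d0 * s2.mean() + (nx + ny - 2) * s2) / (d0 + nx + ny - 2)
            return diff / np.sqrt(s2_mod * (1 / nx + 1 / ny))

        rng = np.random.default_rng(7)
        x = rng.normal(0, 1, (1000, 5))
        y = rng.normal(0, 1, (1000, 5))
        x[:50] += 1.5                                  # 50 truly changed genes
        t_obs = moderated_t(x, y)

        # Resampling null: permute sample labels and recompute the statistics.
        pooled = np.hstack([x, y])
        null = np.empty((200, pooled.shape[0]))
        for i in range(null.shape[0]):
            idx = rng.permutation(pooled.shape[1])
            null[i] = moderated_t(pooled[:, idx[:5]], pooled[:, idx[5:]])
        p = (np.abs(null) >= np.abs(t_obs)).mean(0)    # permutation p-values
        print((p < 0.05).sum())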

  3. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. The core is turbulent, whereas the liquid film may be laminar or turbulent. The working fluid is dichlorotetrafluoroethane (CClF₂-CClF₂), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single phase flow in a rough pipe. Results indicate that for the range of Reynolds and Froude numbers considered, the liquid film is likely to be turbulent rather than laminar. The study also shows that two-dimensional effects are important, and the flow is never fully developed either in the film or the core. In addition, the new approach for the turbulent film is capable of predicting a local net flow rate that may be upward, downward, stationary, or stalled. An actual steam-water geothermal well is simulated. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114. Results indicate that the theory can be used to predict the pressure gradient in the two-phase region based on laboratory measurements.

  4. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. Emphasis is placed upon the latter case, since the range of experimental measurements of pressure, temperature, and void fraction collected in this study falls in the "slug-churn" to "annular" flow regimes. The core is turbulent, whereas the liquid film may be laminar or turbulent. Turbulent stresses are modeled by using Prandtl's mixing-length theory. The working fluid is dichlorotetrafluoroethane (CClF₂-CClF₂), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. The compressibility is accounted for through the acceleration pressure gradient of the core and not directly through the Mach number. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single phase flow in a rough pipe. Finally, an actual steam-water geothermal well is simulated, based on field data from New Zealand. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114.

  5. A new entropy based on a group-theoretical structure

    NASA Astrophysics Data System (ADS)

    Curado, Evaldo M. F.; Tempesta, Piergiulio; Tsallis, Constantino

    2016-03-01

    A multi-parametric version of the nonadditive entropy S_q is introduced. This new entropic form, denoted by S_{a,b,r}, possesses many interesting statistical properties, and it reduces to the entropy S_q for b = 0, a = r := 1 − q (hence to the Boltzmann-Gibbs entropy S_BG for b = 0, a = r → 0). The construction of the entropy S_{a,b,r} is based on a general group-theoretical approach recently proposed by one of us, Tempesta (2016). Indeed, essentially all the properties of this new entropy are obtained as a consequence of the existence of a rational group law, which expresses the structure of S_{a,b,r} with respect to the composition of statistically independent subsystems. Depending on the choice of the parameters, the entropy S_{a,b,r} can be used to cover a wide range of physical situations, in which the measure of the accessible phase space increases, say, exponentially with the number of particles N of the system, or even stabilizes, as N increases, to a limiting value. This paves the way to the use of this entropy in contexts where the size of the phase space does not increase as fast as the number of its constituting particles (or subsystems) increases.
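
    The abstract states only the limiting cases of S_{a,b,r}, not its explicit form. For orientation, the one-parameter entropy S_q that it generalizes, and its Boltzmann-Gibbs limit, can be written as

```latex
S_q = k \, \frac{1 - \sum_{i} p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = - k \sum_{i} p_i \ln p_i = S_{\mathrm{BG}}.
```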

  6. Toward an Empirically-Based Parametric Explosion Spectral Model

    DTIC Science & Technology

    2011-09-01

    Figure 6. Analysis of the Vp/Vs ratio from the USGS database (Wood, 2007) at Pahute Mesa and Yucca Flat. The ratio is shown as a function of depth; profiles from Leonard and Johnson (1987) and Ferguson (1988) are shown for Pahute Mesa and Yucca Flat, respectively. Based on the distribution, we estimate constant Vp/Vs ratios of 1.671 and 1.871 at Pahute Mesa and Yucca Flat, respectively, in order to obtain the shear modulus and shear

  7. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1996-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.

  8. Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    2005-01-01

    A ground-based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength, and polarity. Determination of the location of the lightning strike uses algorithms based on long-used techniques of triangulation. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general, the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers, much of it covered by rain forests; thus knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.
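
    The report does not specify the propagation law or solution criteria; the sketch below assumes, purely for illustration, 1/r geometric spreading with exponential attenuation and a fixed per-site amplitude threshold, and calls a stroke "detected" when at least three sites see it (the minimum for a triangulated solution). All names and constants are hypothetical.

```python
import numpy as np

def received_amplitude(peak_current_kA, r_km, L_km=1000.0):
    """Hypothetical propagation model: geometric spreading times exponential
    attenuation; the actual functional form would need calibration."""
    return peak_current_kA / r_km * np.exp(-r_km / L_km)

def network_detection_efficiency(site_xy, grid_xy, threshold, strokes_kA):
    """Fraction of simulated strokes seen by >= 3 sites at each grid point."""
    de = np.zeros(len(grid_xy))
    for i, g in enumerate(grid_xy):
        r = np.hypot(*(site_xy - g).T)                 # site distances, km
        amps = received_amplitude(strokes_kA[:, None], r[None, :])
        de[i] = np.mean((amps >= threshold).sum(axis=1) >= 3)
    return de
```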

  9. An Empirical Investigation of a Theoretically Based Measure of Perceived Wellness

    ERIC Educational Resources Information Center

    Harari, Marc J.; Waehler, Charles A.; Rogers, James R.

    2005-01-01

    The Perceived Wellness Survey (PWS; T. Adams, 1995; T. Adams, J. Bezner, & M. Steinhardt, 1997) is a recently developed instrument intended to operationalize the comprehensive Perceived Wellness Model (T. Adams, J. Bezner, & M. Steinhardt, 1997), an innovative model that attempts to include the balance of multiple life activities in its evaluation…

  10. Theoretical and Empirical Bases of Character Development in Adolescence: A View of the Issues.

    PubMed

    Seider, Scott; Jayawickreme, Eranda; Lerner, Richard M

    2017-03-11

    Traditional models of character development have conceptualized character as a set of psychological attributes that motivate or enable individuals to function as competent moral agents. In this special section, we present seven articles, including two commentaries, that seek to make innovative conceptual and methodological contributions to traditional understandings in the extant scholarship of character and character development in youth. In the introduction to this special section, we provide overviews of these contributions, and discuss the implications of these articles both to the current scholarship and to applications aimed at promoting character and positive youth development.

  11. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1995-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from that environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques: PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. Where there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.

  12. Towards an Empirically Based Parametric Explosion Spectral Model

    SciTech Connect

    Ford, S R; Walter, W R; Ruppert, S; Matzel, E; Hauk, T; Gok, R

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.

  13. Toward an Empirically-based Parametric Explosion Spectral Model

    NASA Astrophysics Data System (ADS)

    Ford, S. R.; Walter, W. R.; Ruppert, S.; Matzel, E.; Hauk, T. F.; Gok, R.

    2010-12-01

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases (Pn, Pg, and Lg) that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. These parameters are then correlated with near-source geology and containment conditions. There is a correlation of high gas-porosity (low strength) with increased spectral slope. However, there are trade-offs between the slope and corner-frequency, which we try to independently constrain using Mueller-Murphy relations and coda-ratio techniques. The relationship between the parametric equation and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source, and aid in the prediction of observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing.
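
    A minimal sketch of the three-parameter generalized Brune-type spectrum named in the abstract (long-period level, corner frequency, high-frequency slope), with a log-space least-squares fit; the paper's exact parameterization and fitting procedure are not reproduced here, and the fitting choices below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def generalized_brune(f, omega0, fc, p):
    """|S(f)| = omega0 / (1 + (f/fc)**p): long-period level omega0,
    corner frequency fc, high-frequency falloff ~ f**(-p)."""
    return omega0 / (1.0 + (f / fc) ** p)

def fit_spectrum(f, amp):
    """Least-squares fit in log-amplitude space (a common choice, assumed
    here; the paper's procedure is not described in the abstract)."""
    model = lambda f, lg_o0, lg_fc, p: lg_o0 - np.log10(1.0 + (f / 10**lg_fc) ** p)
    (lg_o0, lg_fc, p), _ = curve_fit(model, f, np.log10(amp),
                                     p0=[np.log10(amp[0]), 0.0, 2.0])
    return 10**lg_o0, 10**lg_fc, p
```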

  14. Empirical wind retrieval model based on SAR spectrum measurements

    NASA Astrophysics Data System (ADS)

    Panfilova, Maria; Karaev, Vladimir; Balandina, Galina; Kanevsky, Mikhail; Portabella, Marcos; Stoffelen, Ad

    ambiguity from polarimetric SAR. A criterion based on the sign of the complex correlation coefficient between the VV and VH signals is applied to select the wind direction. An additional quality control on the wind speed value retrieved with the spectral method is applied. Here, we use the direction obtained with the spectral method and the backscattered signal for the CMOD wind speed estimate. The algorithm described above may be refined by the use of numerous SAR data and wind measurements. In the present preliminary work, the first results of processing SAR images combined with in situ data are presented. Our results are compared to those obtained using the previously developed models CMOD and C-2PO for VH polarization and statistical wind retrieval approaches [1]. Acknowledgments. This work is supported by the Russian Foundation for Basic Research (grant 13-05-00852-a). [1] M. Portabella, A. Stoffelen, J. A. Johannessen, Toward an optimal inversion method for synthetic aperture radar wind retrieval, Journal of Geophysical Research, V. 107, N C8, 2002.

  15. A Rigorous Test of the Fit of the Circumplex Model to Big Five Personality Data: Theoretical and Methodological Issues and Two Large Sample Empirical Tests.

    PubMed

    DeGeest, David Scott; Schmidt, Frank

    2015-01-01

    Our objective was to apply the rigorous test developed by Browne (1992) to determine whether the circumplex model fits Big Five personality data. This test has yet to be applied to personality data. Another objective was to determine whether blended items explained correlations among the Big Five traits. We used two working adult samples, the Eugene-Springfield Community Sample and the Professional Worker Career Experience Survey. Fit to the circumplex was tested via Browne's (1992) procedure. Circumplexes were graphed to identify items with loadings on multiple traits (blended items), and to determine whether removing these items changed five-factor model (FFM) trait intercorrelations. In both samples, the circumplex structure fit the FFM traits well. Each sample had items with dual-factor loadings (8 items in the first sample, 21 in the second). Removing blended items had little effect on construct-level intercorrelations among FFM traits. We conclude that rigorous tests show that the fit of personality data to the circumplex model is good. This finding means the circumplex model is competitive with the factor model in understanding the organization of personality traits. The circumplex structure also provides a theoretically and empirically sound rationale for evaluating intercorrelations among FFM traits. Even after eliminating blended items, FFM personality traits remained correlated.

  16. Task-Based Language Teaching: An Empirical Study of Task Transfer

    ERIC Educational Resources Information Center

    Benson, Susan D.

    2016-01-01

    Since the 1980s, task-based language teaching (TBLT) has enjoyed considerable interest from researchers of second language acquisition (SLA), resulting in a growing body of empirical evidence to support how and to what extent this approach can promote language learning. Although transferability and generalizability are critical assumptions for…

  17. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  18. Implementing Evidence-Based Practice: A Review of the Empirical Research Literature

    ERIC Educational Resources Information Center

    Gray, Mel; Joy, Elyssa; Plath, Debbie; Webb, Stephen A.

    2013-01-01

    The article reports on the findings of a review of empirical studies examining the implementation of evidence-based practice (EBP) in the human services. Eleven studies were located that defined EBP as a research-informed, clinical decision-making process and identified barriers and facilitators to EBP implementation. A thematic analysis of the…

  19. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via a MST model) and non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  20. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    EPA Science Inventory

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  1. Feasibility of an Empirically Based Program for Parents of Preschoolers with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…

  2. Deriving Empirically-Based Design Guidelines for Advanced Learning Technologies that Foster Disciplinary Comprehension

    ERIC Educational Resources Information Center

    Poitras, Eric; Trevors, Gregory

    2012-01-01

    Planning, conducting, and reporting leading-edge research requires professionals who are capable of highly skilled reading. This study reports the development of an empirically informed computer-based learning environment designed to foster the acquisition of reading comprehension strategies that mediate expertise in the social sciences. Empirical…

  3. An Empirically-Based Statewide System for Identifying Quality Pre-Kindergarten Programs

    ERIC Educational Resources Information Center

    Williams, Jeffrey M.; Landry, Susan H.; Anthony, Jason L.; Swank, Paul R.; Crawford, April D.

    2012-01-01

    This study presents an empirically-based statewide system that links information about pre-kindergarten programs with children's school readiness scores to certify pre-kindergarten classrooms as promoting school readiness. Over 8,000 children from 1,255 pre-kindergarten classrooms were followed longitudinally for one year. Pre-kindergarten quality…

  4. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters, such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
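
    For concreteness, a minimal explicit-scheme Perona-Malik diffusion with the exponential edge-stopping function, one isotropic operator of the kind the study benchmarks; kappa, dt, and the border handling are illustrative choices, not the paper's.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion with g(s) = exp(-(s/kappa)**2)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (np.roll gives a periodic
        # border; replicate padding would be a more careful choice)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u,  1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u,  1, 1) - u
        # edge-stopping diffusivity evaluated on each directional gradient
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```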

  5. An Empirical Kaiser Criterion.

    PubMed

    Braeken, Johan; van Assen, Marcel A L M

    2016-03-31

    In exploratory factor analysis (EFA), the most popular methods for dimensionality assessment, such as the scree plot, the Kaiser criterion, or the current gold standard, parallel analysis, are based on eigenvalues of the correlation matrix. To further the understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are desired to be relatively short but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
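
    The closed-form reference eigenvalues of the Empirical Kaiser Criterion are given in the paper and are not reproduced here; for comparison, a minimal Monte Carlo sketch of parallel analysis, the gold-standard baseline the abstract cites:

```python
import numpy as np

def parallel_analysis(data, n_sim=200, quantile=0.95, seed=0):
    """Retain factors whose sample eigenvalues exceed the chosen quantile
    of eigenvalues from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sim, p))
    for s in range(n_sim):
        z = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))[::-1]
    ref = np.quantile(sims, quantile, axis=0)
    keep = obs > ref
    return p if keep.all() else int(np.argmin(keep))  # leading exceedances
```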

  6. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations leads to a claim of general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.

  7. TheoReTS - An information system for theoretical spectra based on variational predictions from molecular potential energy and dipole moment surfaces

    NASA Astrophysics Data System (ADS)

    Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.

    2016-09-01

    Knowledge of intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres, brown dwarfs and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet accessible information system devoted to ab initio based rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six atomic molecules, including phosphine, methane, ethylene, silane, methyl-fluoride, and their isotopic species 13CH4 , 12CH3D , 12CH2D2 , 12CD4 , 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface according to Model-View-Controller architectural tools. The full-featured web application is written in PHP using the Yii framework and C++ software modules. In case of very large high-temperature line lists, data compression is implemented for fast interactive spectra simulations of a quasi-continual absorption due to big line density. Applications for the TheoReTS may

  8. Effectiveness of a Theoretically-Based Judgment and Decision Making Intervention for Adolescents

    PubMed Central

    Knight, Danica K.; Dansereau, Donald F.; Becan, Jennifer E.; Rowan, Grace A.; Flynn, Patrick M.

    2014-01-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37% female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one’s own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors. PMID:24760288

  9. Effectiveness of a theoretically-based judgment and decision making intervention for adolescents.

    PubMed

    Knight, Danica K; Dansereau, Donald F; Becan, Jennifer E; Rowan, Grace A; Flynn, Patrick M

    2015-05-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37 % female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one's own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors.

  10. Self-adaptive image denoising based on bidimensional empirical mode decomposition (BEMD).

    PubMed

    Guo, Song; Luan, Fangjun; Song, Xiaoyu; Li, Changyou

    2014-01-01

    To better analyze images contaminated with Gaussian white noise, it is necessary to remove the noise before image processing. In this paper, we propose a self-adaptive image denoising method based on bidimensional empirical mode decomposition (BEMD). First, a normal probability plot confirms that the 2D-IMFs of Gaussian white noise images decomposed by BEMD follow the normal distribution. Second, an energy estimation equation for the ith 2D-IMF (i = 2, 3, 4, …) is proposed, referencing that of the ith IMF (i = 2, 3, 4, …) obtained by empirical mode decomposition (EMD). Third, the self-adaptive threshold of each 2D-IMF is calculated. Finally, the algorithm of the self-adaptive image denoising method based on BEMD is described. From a practical perspective, this is applied to denoising magnetic resonance images (MRI) of the brain, and the results show it has better denoising performance compared with other methods.
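
    A one-dimensional analogue of the scheme, sketched under stated assumptions: the PyEMD package (an assumed dependency, installable as EMD-signal) supplies the decomposition, per-IMF noise energies follow the standard white-noise energy model of Flandrin and co-workers, and a universal-style soft threshold is applied to each component. The paper's 2D equations differ in detail.

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def emd_denoise(signal, c=0.7):
    """1-D analogue of the BEMD scheme: estimate per-IMF noise energy from
    the first IMF, threshold each IMF, and reconstruct."""
    imfs = EMD()(signal)
    n = len(signal)
    e1 = np.mean(imfs[0] ** 2)
    beta, rho = 0.719, 2.01      # white-noise IMF energy model (Flandrin)
    out = np.zeros(n)
    for k, imf in enumerate(imfs, start=1):
        ek = e1 if k == 1 else (e1 / beta) * rho ** (-k)
        t = c * np.sqrt(2.0 * ek * np.log(n))    # universal-style threshold
        # soft threshold; note the final trend component is thresholded too
        # in this simplified sketch
        out += np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)
    return out
```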

  11. An Empirical Study for Impacts of Measurement Errors on EHR based Association Studies

    PubMed Central

    Duan, Rui; Cao, Ming; Wu, Yonghui; Huang, Jing; Denny, Joshua C; Xu, Hua; Chen, Yong

    2016-01-01

    Over the last decade, Electronic Health Record (EHR) systems have been increasingly implemented at US hospitals. Despite their great potential, the complex and uneven nature of clinical documentation and data quality brings additional challenges for analyzing EHR data. A critical challenge is the information bias due to measurement errors in outcomes and covariates. We conducted empirical studies to quantify the impacts of this information bias on association studies. Specifically, we designed our simulation studies based on the characteristics of the Electronic Medical Records and Genomics (eMERGE) Network. Through simulation studies, we quantified the loss of power due to misclassifications in case ascertainment and measurement errors in covariate status extraction, with respect to different levels of misclassification rates, disease prevalence, and covariate frequencies. These empirical findings can inform investigators for better understanding of the potential power loss due to misclassification and measurement errors under a variety of conditions in EHR-based association studies. PMID:28269935
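
    A minimal Monte Carlo sketch of the power-loss question: simulate a true covariate-outcome association, corrupt the outcome labels with a given sensitivity and specificity, and test the observed 2x2 table. The design and numbers are illustrative, not the eMERGE-based settings of the study.

```python
import numpy as np
from scipy import stats

def power_with_misclassification(n=2000, or_true=1.5, prev=0.2, maf=0.3,
                                 sens=0.9, spec=0.95, n_rep=500, seed=0):
    """Monte Carlo power of a chi-square association test when the
    EHR-derived case label is misclassified."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rep):
        g = rng.binomial(1, maf, n)                  # binary covariate
        b0 = np.log(prev / (1 - prev))               # approximate prevalence
        p = 1 / (1 + np.exp(-(b0 + np.log(or_true) * g)))
        y = rng.binomial(1, p)                       # true outcome
        miss = (y == 1) & (rng.random(n) > sens)     # missed cases
        false = (y == 0) & (rng.random(n) > spec)    # false positives
        y_obs = np.where(miss, 0, np.where(false, 1, y))
        table = [[np.sum((g == 0) & (y_obs == 0)), np.sum((g == 0) & (y_obs == 1))],
                 [np.sum((g == 1) & (y_obs == 0)), np.sum((g == 1) & (y_obs == 1))]]
        if stats.chi2_contingency(table)[1] < 0.05:
            hits += 1
    return hits / n_rep
```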

  12. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection.

    PubMed

    Tchesnokova, Veronika; Avagyan, Hovhannes; Rechkina, Elena; Chan, Diana; Muradova, Mariya; Haile, Helen Ghirmai; Radey, Matthew; Weissman, Scott; Riddell, Kim; Scholes, Delia; Johnson, James R; Sokurenko, Evgeni V

    2017-01-01

    Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using a 20%, 10%, and 30% allowed resistance threshold. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients' urine within 25-35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care.
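
    The decision rule reduces to a table lookup against the allowed-resistance threshold. A toy sketch with hypothetical clonotypes and illustrative resistance rates (none taken from the study):

```python
# Hypothetical data layout: clonotype-specific antibiograms as
# {clonotype: {antibiotic: resistance_rate}}; all values illustrative only.
ANTIBIOGRAMS = {
    "ST131": {"ciprofloxacin": 0.55, "trimethoprim/sulfamethoxazole": 0.35,
              "nitrofurantoin": 0.05},
    "ST73":  {"ciprofloxacin": 0.04, "trimethoprim/sulfamethoxazole": 0.12,
              "nitrofurantoin": 0.02},
}

def acceptable_empiric_agents(clonotype, threshold=0.20):
    """Return agents whose clonotype-specific resistance rate is below the
    allowed-resistance threshold (the study evaluates 10%, 20%, and 30%)."""
    abx = ANTIBIOGRAMS.get(clonotype, {})
    return sorted(a for a, r in abx.items() if r < threshold)

print(acceptable_empiric_agents("ST131", 0.20))  # ['nitrofurantoin']
```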

  13. Bacterial clonal diagnostics as a tool for evidence-based empiric antibiotic selection

    PubMed Central

    Tchesnokova, Veronika; Avagyan, Hovhannes; Rechkina, Elena; Chan, Diana; Muradova, Mariya; Haile, Helen Ghirmai; Radey, Matthew; Weissman, Scott; Riddell, Kim; Scholes, Delia; Johnson, James R.

    2017-01-01

    Despite the known clonal distribution of antibiotic resistance in many bacteria, empiric (pre-culture) antibiotic selection still relies heavily on species-level cumulative antibiograms, resulting in overuse of broad-spectrum agents and excessive antibiotic/pathogen mismatch. Urinary tract infections (UTIs), which account for a large share of antibiotic use, are caused predominantly by Escherichia coli, a highly clonal pathogen. In an observational clinical cohort study of urgent care patients with suspected UTI, we assessed the potential for E. coli clonal-level antibiograms to improve empiric antibiotic selection. A novel PCR-based clonotyping assay was applied to fresh urine samples to rapidly detect E. coli and the urine strain's clonotype. Based on a database of clonotype-specific antibiograms, the acceptability of various antibiotics for empiric therapy was inferred using a 20%, 10%, and 30% allowed resistance threshold. The test's performance characteristics and possible effects on prescribing were assessed. The rapid test identified E. coli clonotypes directly in patients’ urine within 25–35 minutes, with high specificity and sensitivity compared to culture. Antibiotic selection based on a clonotype-specific antibiogram could reduce the relative likelihood of antibiotic/pathogen mismatch by ≥ 60%. Compared to observed prescribing patterns, clonal diagnostics-guided antibiotic selection could safely double the use of trimethoprim/sulfamethoxazole and minimize fluoroquinolone use. In summary, a rapid clonotyping test showed promise for improving empiric antibiotic prescribing for E. coli UTI, including reversing preferential use of fluoroquinolones over trimethoprim/sulfamethoxazole. The clonal diagnostics approach merges epidemiologic surveillance, antimicrobial stewardship, and molecular diagnostics to bring evidence-based medicine directly to the point of care. PMID:28350870

  14. Center for Army Leadership’s Response to Empirically Based Leadership

    DTIC Science & Technology

    2013-02-01

    Douglas MacArthur Military Leadership Writing award for his article, "Empirically Based Leadership: Integrating the Science of Psychology in…experience. The paper states that integrating relevant empiricism into the process is required to construct a more complete model of leadership; however…review was conducted of psychology literature among other bodies of knowledge. The expert review used a Delphi technique to obtain independent

  15. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two-Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  16. Measuring microscopic evolution processes of complex networks based on empirical data

    NASA Astrophysics Data System (ADS)

    Chi, Liping

    2015-04-01

    Aiming at understanding the microscopic mechanism of complex systems in the real world, we perform measurements that characterize the evolution properties of two empirical data sets. In the Autonomous Systems Internet data, the network size keeps growing although the system suffers a high rate of node deletion (r = 0.4) and link deletion (q = 0.81). However, the average degree stays almost unchanged during the whole time range. At each time step the external links attached to a new node are about c = 1.1 and the internal links added between existing nodes are approximately m = 8. The Scientific Collaboration data are a cumulated result of all the authors from 1893 up to the considered year. There is no deletion of nodes and links, r = q = 0. The external and internal links at each time step are c = 1.04 and m = 0, correspondingly. The exponents γ_data of the degree distribution p(k) ∼ k^(−γ) of these two empirical datasets are in good agreement with those obtained theoretically, γ_theory. The results indicate that these evolution quantities may provide an insight into capturing the microscopic dynamical processes that govern the network topology.
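
    A toy simulation of a growth process with these measured quantities; how r and q map onto per-step deletion events is our assumption, not the paper's, and all parameter defaults echo the Internet data only loosely.

```python
import random
import networkx as nx

def evolve_network(steps, c=1, m=8, r=0.4, q=0.81, seed=0):
    """Per time step: one new node with ~c external links, ~m internal
    links, on average r node deletions and q link deletions."""
    rng = random.Random(seed)
    G = nx.complete_graph(5)                         # small seed network
    for _ in range(steps):
        new = max(G.nodes) + 1
        G.add_node(new)
        old = [v for v in G.nodes if v != new]
        for v in rng.sample(old, min(c, len(old))):  # external links
            G.add_edge(new, v)
        for _ in range(m):                           # internal links
            u, v = rng.sample(old, 2)
            G.add_edge(u, v)
        if rng.random() < r and len(G) > c + 2:      # node deletion
            G.remove_node(rng.choice(old))
        for _ in range(int(q + rng.random())):       # link deletion
            if G.number_of_edges():
                G.remove_edge(*rng.choice(list(G.edges)))
    return G
```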

  17. A theoretical drought classification method for the multivariate drought index based on distribution properties of standardized drought indices

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Xia, Youlong; Ouyang, Wei; Shen, Xinyi

    2016-06-01

    Drought indices have been commonly used to characterize different properties of drought, and the need to combine multiple drought indices for accurate drought monitoring has been well recognized. Based on linear combinations of multiple drought indices, a variety of multivariate drought indices have recently been developed for comprehensive drought monitoring to integrate drought information from various sources. For operational drought management, it is generally required to determine thresholds of drought severity for drought classification to trigger a mitigation response during a drought event, to aid stakeholders and policy makers in decision making. Though the classification of drought categories based on univariate drought indices has been well studied, drought classification methods for multivariate drought indices have been less explored, mainly due to the lack of information about their distribution properties. In this study, a theoretical drought classification method is proposed for a multivariate drought index based on a linear combination of multiple indices. Based on the distribution property of the standardized drought index, a theoretical distribution of the linear combined index (LDI) is derived, which can be used for classifying drought with the percentile approach. Application of the proposed method for drought classification of an LDI based on the standardized precipitation index (SPI), standardized soil moisture index (SSI), and standardized runoff index (SRI) is illustrated with climate division data from California, United States. Results from comparison with empirical methods show a satisfactory performance of the proposed method for drought classification.
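
    A sketch of the percentile classification this implies: if each standardized index is approximately N(0,1) with a known correlation matrix Σ, the linear combination w·x is normal with variance wᵀΣw, so percentile thresholds can be applied analytically. The D0-D4 percentile bounds below follow the U.S. Drought Monitor convention and are an assumption, as are the example weights and correlations.

```python
import numpy as np
from scipy.stats import norm

def ldi_classify(indices, weights, corr,
                 pctl_bounds=(0.30, 0.20, 0.10, 0.05, 0.02)):
    """Classify a linear combined drought index via its theoretical
    normal distribution: LDI = w.x ~ N(0, w' Sigma w)."""
    w = np.asarray(weights, float)
    x = np.asarray(indices, float)
    ldi = w @ x
    sigma = np.sqrt(w @ corr @ w)        # theoretical std of the LDI
    p = norm.cdf(ldi / sigma)            # non-exceedance probability
    labels = ["D0 abnormally dry", "D1 moderate", "D2 severe",
              "D3 extreme", "D4 exceptional"]
    category = "no drought"
    for bound, lab in zip(pctl_bounds, labels):
        if p <= bound:
            category = lab
    return ldi, p, category

# Example: equal weights on SPI, SSI, SRI with assumed cross-correlations.
corr = np.array([[1.0, 0.6, 0.5], [0.6, 1.0, 0.7], [0.5, 0.7, 1.0]])
print(ldi_classify([-1.2, -0.8, -1.5], [1/3, 1/3, 1/3], corr))
```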

  18. Theoretic base of Edge Local Mode triggering by vertical displacements

    SciTech Connect

    Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.

    2015-05-15

    Vertical instability is studied with an R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. The equilibrium parallel current density, j∥, at the plasma boundary is a drive of the vertical instability similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides some theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  19. Theoretic base of Edge Local Mode triggering by vertical displacements

    NASA Astrophysics Data System (ADS)

    Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.

    2015-05-01

    Vertical instability is studied with an R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. The equilibrium parallel current density, j∥, at the plasma boundary is a drive of the vertical instability similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides some theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  20. Subsystem-based theoretical spectroscopy of biomolecules and biomolecular assemblies.

    PubMed

    Neugebauer, Johannes

    2009-12-21

    The absorption properties of chromophores in biomolecular systems are subject to several fine-tuning mechanisms. Specific interactions with the surrounding protein environment often lead to significant changes in the excitation energies, but bulk dielectric effects can also play an important role. Moreover, strong excitonic interactions can occur in systems with several chromophores at close distances. For interpretation purposes, it is often desirable to distinguish different types of environmental effects, such as geometrical, electrostatic, polarization, and response (or differential polarization) effects. Methods that can be applied for theoretical analyses of such effects are reviewed herein, ranging from continuum and point-charge models to explicit quantum chemical subsystem methods for environmental effects. Connections to physical model theories are also outlined. Prototypical applications to optical spectra and excited states of fluorescent proteins, biomolecular photoreceptors, and photosynthetic protein complexes are discussed.

  1. Exploring multi/full polarised SAR imagery for understanding surface soil moisture and roughness by using semi-empirical and theoretical models and field experiments

    NASA Astrophysics Data System (ADS)

    Dong, Lu; Marzahn, Philip; Ludwig, Ralf

    2010-05-01

    -range digital photogrammetry for surface roughness retrieval. A semi-empirical model is tested, and the theoretical AIEM model is used for further understanding. Results demonstrate that the semi-empirical soil moisture retrieval algorithm, which was developed in studies under humid climate conditions, must be carefully adapted to the drier Mediterranean environment. Modifying the approach by incorporating regional field data led to a considerable improvement in the algorithm's performance. In addition, it is found that the current representation of soil surface roughness in the AIEM is insufficient to account for the specific heterogeneities at the field scale. The findings of this study indicate the necessity for future research, which should extend to a more integrated combination of current sensors (e.g., ENVISAT/ASAR, ALOS/PALSAR, and Radarsat-2 imagery) and the further development of soil moisture retrieval models for multi-/full-polarised radar imagery.

  2. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  3. The Theoretical Astrophysical Observatory: Cloud-based Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  4. THE THEORETICAL ASTROPHYSICAL OBSERVATORY: CLOUD-BASED MOCK GALAXY CATALOGS

    SciTech Connect

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-15

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO’s features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  5. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    NASA Astrophysics Data System (ADS)

    Osadchy, A. V.; Volotovskiy, S. G.; Obraztsova, E. D.; Savin, V. V.; Golovashkin, D. L.

    2016-08-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to be run on a computing cluster. Application of this method significantly reduces the demands on computing resources compared to traditional approaches based on ab initio techniques and yields adequate, comparable results. The use of cluster computing allows information to be obtained for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars.
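
    A minimal one-dimensional empirical pseudopotential sketch (a plane-wave Hamiltonian diagonalized at each k); the paper treats three-dimensional GaSe structures with fitted form factors, which are not reproduced here, so the example below uses a generic nearly-free-electron potential.

```python
import numpy as np

def bands_1d(k_points, v_fourier, a=1.0, n_g=11):
    """1-D empirical pseudopotential method: diagonalize
    H[G, G'] = 0.5 * (k + G)**2 * delta_GG' + V(G - G') at each k.
    Units: hbar = m = 1; v_fourier maps integer multiples of 2*pi/a to V_G."""
    b = 2 * np.pi / a
    gs = b * (np.arange(n_g) - n_g // 2)       # plane-wave basis vectors
    bands = []
    for k in k_points:
        H = np.diag(0.5 * (k + gs) ** 2)       # kinetic term
        for i, gi in enumerate(gs):
            for j, gj in enumerate(gs):
                H[i, j] += v_fourier.get(round((gi - gj) / b), 0.0)
        bands.append(np.linalg.eigvalsh(H)[:4])  # lowest four bands
    return np.array(bands)

# Nearly-free-electron example: a single weak Fourier component V_{+-1}.
ks = np.linspace(-np.pi, np.pi, 101)
energies = bands_1d(ks, {1: 0.2, -1: 0.2})
```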

  6. An Empirical Study of Plan-Based Representations of Pascal and Fortran Code.

    DTIC Science & Technology

    1987-06-01

    Report No. CCL-0687-0; Scott P. Robertson, Chiung-Chen Yu; Office of Naval Research Contract No. N00014-86-K-0876, Work Unit No. NR 4424203-01; approved for public release, distribution unlimited. Researchers have argued recently that programmers utilize a plan-based representation when composing or comprehending program code. In a series of studies we

  7. Development of Empirically Based Time-to-death Curves for Combat Casualty Deaths in Iraq and Afghanistan

    DTIC Science & Technology

    2015-01-01

    Naval Health Research Center; DOI: 10.1177/1548512914531353; dms.sagepub.com. …casualties with life-threatening injuries. The curves developed from that research were based on a small dataset (n = 160, with 26 deaths and 134

  8. HIRS-AMTS satellite sounding system test - Theoretical and empirical vertical resolving power. [High resolution Infrared Radiation Sounder - Advanced Moisture and Temperature Sounder

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    The present investigation is concerned with the vertical resolving power of satellite-borne temperature sounding instruments. Information is presented on the capabilities of the High Resolution Infrared Radiation Sounder (HIRS) and a proposed sounding instrument called the Advanced Moisture and Temperature Sounder (AMTS). Two quite different methods for assessing the vertical resolving power of satellite sounders are discussed. The first is the theoretical method of Conrath (1972), which was patterned after the work of Backus and Gilbert (1968). The Backus-Gilbert-Conrath (BGC) approach includes a formalism for deriving a retrieval algorithm that optimizes the vertical resolving power. However, a retrieval algorithm constructed in the BGC optimal fashion is not necessarily optimal as far as actual temperature retrievals are concerned. Thus, an independent criterion for vertical resolving power is discussed. The criterion is based on actual retrievals of signal structure in the temperature field.

  9. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-06

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
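
    A sketch of the linear cascade bookkeeping such an evaluation uses: the mean quanta q and noise power spectrum W(f) are propagated through gain, stochastic-blur, and additive-noise stages, then combined into DQE(f) = q0·G²·T²(f)/W(f). The stage formulas follow standard cascaded-systems analysis; the stage parameters below are illustrative, not the MAF-CMOS values.

```python
import numpy as np

def cascade_dqe(f, q0, stages):
    """Propagate mean quanta and NPS through a serial cascade of stages."""
    q, W = q0, np.full_like(f, float(q0))   # Poisson input: W(f) = q0
    T = np.ones_like(f)                     # cumulative system MTF
    for kind, par in stages:
        if kind == "gain":                  # par = (mean gain g, var of gain)
            g, var_g = par
            W = g**2 * W + var_g * q
            q = g * q
        elif kind == "blur":                # par = stage MTF array t(f)
            W = par**2 * (W - q) + q        # stochastic blur
            T = T * par
        elif kind == "noise":               # par = additive NPS array
            W = W + par
    G = q / q0
    return (q0 * G**2 * T**2) / W

f = np.linspace(0.01, 10, 200)              # spatial frequency, cycles/mm
stages = [
    ("gain", (0.8, 0.8 * 0.2)),             # quantum detection (binomial)
    ("gain", (300.0, 300.0)),               # optical gain, Poisson variance
    ("blur", np.exp(-(f / 6.0) ** 2)),      # phosphor MTF (illustrative)
    ("noise", np.full_like(f, 50.0)),       # readout noise NPS (illustrative)
]
dqe = cascade_dqe(f, q0=1000.0, stages=stages)
```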

  10. Fault Diagnosis of Rotating Machinery Based on an Adaptive Ensemble Empirical Mode Decomposition

    PubMed Central

    Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe

    2013-01-01

    The vibration based signal processing technique is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery, but it suffers from mode mixing when decomposing signals. To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed, which is able to reduce mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most studies of EEMD, the parameters were selected artificially and subjectively. To solve this problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. Simulation, experimental, and application results demonstrate that the adaptive EEMD provides improved results compared with the original EEMD in diagnosing rotating machinery. PMID:24351666
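
    For orientation, the sketch below runs stock EEMD on a two-tone test signal using the PyEMD package (assuming the EMD-signal distribution from PyPI). Note that stock EEMD uses a fixed ensemble size and noise amplitude; the paper's contribution is precisely to choose the sifting number and noise amplitude adaptively.

```python
# pip install EMD-signal
import numpy as np
from PyEMD import EEMD

t = np.linspace(0.0, 1.0, 1000)
s = (np.sin(2 * np.pi * 5 * t)           # low-frequency component
     + 0.5 * np.sin(2 * np.pi * 40 * t)  # high-frequency component
     + 0.1 * np.random.randn(t.size))    # measurement noise

eemd = EEMD(trials=100)   # ensemble size (number of noise realizations)
eemd.noise_width = 0.05   # added-noise std relative to the signal std (fixed here)
imfs = eemd.eemd(s, t)    # IMFs, ordered from highest to lowest frequency
print("number of IMFs:", imfs.shape[0])
```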

  11. Fault diagnosis of rotating machinery based on an adaptive ensemble empirical mode decomposition.

    PubMed

    Lei, Yaguo; Li, Naipeng; Lin, Jing; Wang, Sizhe

    2013-12-09

    The vibration based signal processing technique is one of the principal tools for diagnosing faults of rotating machinery. Empirical mode decomposition (EMD), as a time-frequency analysis technique, has been widely used to process vibration signals of rotating machinery, but it suffers from mode mixing when decomposing signals. To overcome this shortcoming, ensemble empirical mode decomposition (EEMD) was proposed, which is able to reduce mode mixing to some extent. The performance of EEMD, however, depends on the parameters adopted in the EEMD algorithms. In most studies of EEMD, the parameters were selected artificially and subjectively. To solve this problem, a new adaptive ensemble empirical mode decomposition method is proposed in this paper. In the method, the sifting number is adaptively selected, and the amplitude of the added noise changes with the signal frequency components during the decomposition process. Simulation, experimental, and application results demonstrate that the adaptive EEMD provides improved results compared with the original EEMD in diagnosing rotating machinery.

  12. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  13. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    SciTech Connect

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
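
    Kinetics compilations of this kind are commonly condensed by regressing rate constants against inverse temperature. The sketch below performs a generic Arrhenius fit on hypothetical scattered data (not the paper's UH3 dataset), with the spread of the log-residuals serving as a crude uncertainty measure.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical scattered rate constants pooled from several "studies"
T = np.array([450.0, 500.0, 550.0, 600.0, 650.0, 700.0, 750.0])  # K
k = 2e6 * np.exp(-120e3 / (R * T)) * np.exp(0.3 * np.random.randn(T.size))

# Linear fit of ln(k) vs 1/T: slope = -Ea/R, intercept = ln(A)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea, A = -slope * R, np.exp(intercept)

resid = np.log(k) - (slope / T + intercept)
print(f"Ea = {Ea / 1e3:.1f} kJ/mol, A = {A:.2e} 1/s, "
      f"log-residual std = {resid.std():.2f}")
```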

  14. A Theoretical Approach to School-based HIV Prevention.

    ERIC Educational Resources Information Center

    DeMuth, Diane; Symons, Cynthia Wolford

    1989-01-01

    Presents examples of appropriate intervention strategies for professionals working with school-based human immunodeficiency virus (HIV) prevention among adolescents. A multidisciplinary approach is advisable because influencing adolescent sexual behavior is a complex matter. Consistent, continuous messages through multiple channels and by multiple…

  15. Why Problem-Based Learning Works: Theoretical Foundations

    ERIC Educational Resources Information Center

    Marra, Rose M.; Jonassen, David H.; Palmer, Betsy; Luft, Steve

    2014-01-01

    Problem-based learning (PBL) is an instructional method where student learning occurs in the context of solving an authentic problem. PBL was initially developed out of an instructional need to help medical school students learn their basic sciences knowledge in a way that would be more lasting while helping to develop clinical skills…

  16. Theoretical Foundations of "Competitive Team-Based Learning"

    ERIC Educational Resources Information Center

    Hosseini, Seyed Mohammad Hassan

    2010-01-01

    This paper serves as a platform to precisely substantiate the success of "Competitive Team-Based Learning" (CTBL) as an effective and rational educational approach. To that end, it brings to the fore part of the (didactic) theories and hypotheses which in one way or another delineate and confirm the mechanisms under which successful…

  17. An ISAR imaging algorithm for the space satellite based on empirical mode decomposition theory

    NASA Astrophysics Data System (ADS)

    Zhao, Tao; Dong, Chun-zhu

    2014-11-01

    Currently, high resolution imaging of space satellites is a popular topic in the field of radar technology. In contrast with regular targets, a satellite target moves along its orbital trajectory while its solar panel substrate simultaneously changes orientation toward the sun to collect energy. To address this imaging problem, a signal separation and imaging approach based on empirical mode decomposition (EMD) theory is proposed. The approach separates the signals of the two parts of the satellite target, the main body and the solar panel substrate, and forms an image of the target. Simulation experiments demonstrate the validity of the proposed method.

  18. Evaluating Process Quality Based on Change Request Data - An Empirical Study of the Eclipse Project

    NASA Astrophysics Data System (ADS)

    Schackmann, Holger; Schaefer, Henning; Lichter, Horst

    The information routinely collected in change request management systems contains valuable information for monitoring process quality. However, these data are currently utilized in a very limited way. This paper presents an empirical study of the process quality in the product portfolio of the Eclipse project. It is based on a systematic approach for the evaluation of process quality characteristics using change request data. Results of the study offer insights into the development process of Eclipse. Moreover, the study allows assessing the applicability and limitations of the proposed approach for the evaluation of process quality.

  19. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plot.
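
    The paper's empirical equations are not reproduced in this record, but the shape of the assignment loop can be illustrated with two textbook relations: the diameter formula d = a·sqrt(n² + nm + m²)/π and the widely quoted empirical RBM relation ω_RBM ≈ 248/d. The sketch below enumerates semiconducting tubes and picks the best RBM match; it is a generic stand-in, not the authors' algorithm.

```python
import numpy as np

A_CC = 0.246  # graphene lattice constant, nm

def diameter_nm(n, m):
    """Tube diameter d = a * sqrt(n^2 + n*m + m^2) / pi, in nm."""
    return A_CC * np.sqrt(n * n + n * m + m * m) / np.pi

def rbm_cm1(d):
    """Common empirical radial-breathing-mode relation, omega ~ 248/d (cm^-1)."""
    return 248.0 / d

def assign_chirality(omega_obs, n_max=20):
    """Best semiconducting (n, m) match to an observed RBM frequency."""
    best, best_err = None, np.inf
    for n in range(1, n_max + 1):
        for m in range(0, n + 1):
            if (n - m) % 3 == 0:          # metallic tubes are excluded
                continue
            err = abs(rbm_cm1(diameter_nm(n, m)) - omega_obs)
            if err < best_err:
                best, best_err = (n, m), err
    return best, best_err

print(assign_chirality(rbm_cm1(diameter_nm(10, 5))))  # recovers (10, 5)
```

    Because the RBM alone is ambiguous between tubes of similar diameter, the paper pairs it with the first or second optical transition energy; the search above would simply gain a second matching term.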

  20. Systematic Review of Empirically Evaluated School-Based Gambling Education Programs.

    PubMed

    Keen, Brittany; Blaszczynski, Alex; Anjoul, Fadi

    2017-03-01

    Adolescent problem gambling prevalence rates are reportedly five times higher than in the adult population. Several school-based gambling education programs have been developed in an attempt to reduce problem gambling among adolescents; however, few have been empirically evaluated. The aim of this review was to report the outcomes of studies empirically evaluating gambling education programs across international jurisdictions. A systematic review was conducted following the guidelines outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement, searching five academic databases: PubMed, Scopus, Medline, PsycINFO, and ERIC. A total of 20 papers and 19 studies were included after screening and exclusion criteria were applied. All studies reported intervention effects on cognitive outcomes such as knowledge, perceptions, and beliefs. Only nine of the studies attempted to measure intervention effects on behavioural outcomes, and only five of those reported significant changes in gambling behaviour. Of these five, methodological inadequacies were commonly found, including brief follow-up periods, lack of control comparison in post hoc analyses, and inconsistencies and misclassifications in the measurement of gambling behaviour, including problem gambling. Based on this review, recommendations are offered for the future development and evaluation of school-based gambling education programs relating to both methodological and content design and delivery considerations.

  1. Theoretically predicted Fox-7 based new high energy density molecules

    NASA Astrophysics Data System (ADS)

    Ghanta, Susanta

    2016-08-01

    CHNO-based high energy density molecules (HEDMs) are computationally designed from the FOX-7 (1,1-dinitro-2,2-diaminoethylene) skeleton. We report the structures, stability, and detonation properties of these new molecules. A systematic analysis is presented of the crystal density, the activation energy for nitro-to-nitrite isomerisation, and the C-NO2 bond dissociation energy of these molecules. Atoms in Molecules (AIM) calculations have been performed to interpret the intramolecular weak H-bonding interactions and the stability of the C-NO2 bonds. The structure optimization, frequency, and bond dissociation energy calculations were performed at the B3LYP level of theory using the G03 quantum chemistry package. Some of the designed molecules are found to be more promising HEDMs than FOX-7 itself and are proposed as candidates for synthesis.

  2. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations that (a) accommodates awareness modeling and updating and (b) supports collaborative search and tracking of space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risks. Fourth, if all explicitly specified requirements are satisfied and sensing resources remain, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.

  3. A theoretically based determination of bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.

  4. Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2016-07-01

    The seislet transform has been demonstrated to have a better compression performance for seismic data compared with other well-known sparsity promoting transforms, thus it can be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on a precise local slope estimation, the seislet transform usually suffers from a low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. In order to remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive; one only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of the proposed approach over traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding and can also be extended to all methods that require slope information.
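
    To make the dip-separation idea concrete, the sketch below splits a 2-D section into dip bands with frequency-domain wedge masks, hard-thresholds the Fourier coefficients of each band, and recombines. The Fourier wedges and Fourier-coefficient thresholding are simple stand-ins for the paper's adaptive-EMD dip filter and seislet thresholding, respectively.

```python
import numpy as np

def dip_band_denoise(d, n_bands=5, keep_perc=95):
    """Separate a 2-D section into dip bands, threshold each band, recombine."""
    F = np.fft.fft2(d)
    kt = np.fft.fftfreq(d.shape[0])[:, None]
    kx = np.fft.fftfreq(d.shape[1])[None, :]
    ang = np.arctan2(kt, kx) % np.pi               # wedge angle, symmetric in +/-k
    edges = np.linspace(0.0, np.pi, n_bands + 1)
    out = np.zeros_like(d)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (ang >= lo) & (ang < hi)
        thr = np.percentile(np.abs(F[band]), keep_perc)
        Fb = np.where(band & (np.abs(F) >= thr), F, 0.0)
        out += np.real(np.fft.ifft2(Fb))           # denoised dip component
    return out

# Toy example: two conflicting dips plus random noise
nt, nx = 128, 128
tt, xx = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
clean = np.sin(0.2 * (tt + 0.8 * xx)) + np.sin(0.2 * (tt - 1.5 * xx))
noisy = clean + 0.5 * np.random.randn(nt, nx)
print(np.linalg.norm(noisy - clean), np.linalg.norm(dip_band_denoise(noisy) - clean))
```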

  5. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  6. A theoretical model of drumlin formation based on observations at Múlajökull, Iceland

    NASA Astrophysics Data System (ADS)

    Iverson, Neal R.; McCracken, Reba; Zoet, Lucas; Benediktsson, Ívar; Schomacker, Anders; Johnson, Mark; Finlayson, Andrew; Phillips, Emrys; Everest, Jeremy

    2016-04-01

    Theoretical models of drumlin formation have generally been developed in isolation from observations in modern drumlin forming environments - a major limitation on the empiricism necessary to confidently formulate models and test them. Observations at a rare modern drumlin field exposed by the recession of the Icelandic surge-type glacier, Múlajökull, allow an empirically-grounded and physically-based model of drumlin formation to be formulated and tested. Till fabrics based on anisotropy of magnetic susceptibility and clast orientations, along with stratigraphic observations and results of ground penetrating radar, indicate that drumlin relief results from basal till deposition on drumlins and erosion between them. These data also indicate that surges cause till deposition both on and between drumlins and provide no evidence of the longitudinally compressive or extensional strain in till that would be expected if flux divergence in a deforming bed were significant. Over 2000 measurements of till density, together with consolidation tests on the till, indicate that effective stresses on the bed were higher between drumlins than within them. This observation agrees with evidence that subglacial water drainage during normal flow of the glacier is through channels in low areas between drumlins and that crevasse swarms, which reduce total normal stresses on the bed, are coincident with drumlins. In the new model slip of ice over a bed with a sinusoidal perturbation, crevasse swarms, and flow of subglacial water toward R-channels that bound the bed undulation during periods of normal flow result in effective stresses that increase toward channels and decrease from the stoss to the lee sides of the undulation. This effective-stress pattern causes till entrainment and erosion by regelation infiltration (Rempel, 2008, JGR, 113) that peaks at the heads of incipient drumlins and near R-channels, while bed shear is inhibited by effective stresses too high to allow

  7. Semi-empirical versus process-based sea-level projections for the twenty-first century

    NASA Astrophysics Data System (ADS)

    Orlić, Mirko; Pasarić, Zoran

    2013-08-01

    Two dynamical methods are presently used to project sea-level changes during the next century. The process-based method relies on coupled atmosphere-ocean models to estimate the effects of thermal expansion and on sea-level models combined with certain empirical relationships to determine the influence of land-ice mass changes. The semi-empirical method uses various physically motivated relationships between temperature and sea level, with parameters determined from the data, to project total sea level. However, semi-empirical projections far exceed process-based projections. Here, we test the robustness of semi-empirical projections to the underlying assumptions about the inertial and equilibrium responses of sea level to temperature forcing and the impacts of groundwater depletion and dam retention during the twentieth century. Our results show that these projections are sensitive to the dynamics considered and the terrestrial-water corrections applied. For B1, which is a moderate climate-change scenario, the lowest semi-empirical projection of sea-level rise over the twenty-first century equals 62 ± 14 cm. The average value is substantially smaller than previously published semi-empirical projections and is therefore closer to the corresponding process-based values. The standard deviation is larger than the uncertainties of process-based estimates.
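
    The generic form of a semi-empirical projection is the integration of a temperature-to-sea-level-rate relationship, e.g. dH/dt = a(T − T0) in the style of Rahmstorf (2007). The sketch below integrates that relation for an illustrative warming path; the parameter values are placeholders, not those fitted in the paper.

```python
import numpy as np

a_sens = 3.4   # sensitivity, mm/yr per K (illustrative, Rahmstorf-like)
T0 = -0.5      # equilibrium temperature anomaly, K (illustrative)

years = np.arange(2000, 2101)
T = 0.02 * (years - 2000)          # hypothetical anomaly path: +2 K by 2100

# Semi-empirical relation dH/dt = a*(T - T0), integrated with 1-yr steps
H = np.cumsum(a_sens * (T - T0))
print(f"Projected rise 2000-2100: {H[-1] / 10:.0f} cm")
```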

  8. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on the empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the low-pass filtering that has conventionally been applied confirmed the effectiveness of the technique in tissue artifact removal.
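
    A minimal version of the decomposition-selection-reconstruction chain is sketched below with PyEMD and scikit-learn. The specific retention rule (mutual information with the raw signal plus a relative-power floor) is a heuristic assumption standing in for the paper's exact criteria.

```python
# pip install EMD-signal scikit-learn
import numpy as np
from PyEMD import EMD
from sklearn.feature_selection import mutual_info_regression

fs = 25.0                                    # sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
resp = np.sin(2 * np.pi * 0.3 * t)           # ~18 breaths/min respiration
artifact = 0.6 * np.sin(2 * np.pi * 2.5 * t) + 0.2 * np.random.randn(t.size)
s = resp + artifact                          # belt signal with tissue artifact

imfs = EMD().emd(s, t)                       # intrinsic mode functions

# Retain IMFs that share information with the raw signal and carry
# non-negligible power; discard the rest as artifact.
mi = mutual_info_regression(imfs.T, s)
power = (imfs ** 2).mean(axis=1)
keep = (mi > 0.5 * mi.max()) & (power > 0.05 * power.sum())
recon = imfs[keep].sum(axis=0)
print("kept IMFs:", np.where(keep)[0],
      "corr with respiration:", round(float(np.corrcoef(recon, resp)[0, 1]), 3))
```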

  9. Polarizable Empirical Force Field for Hexopyranose Monosaccharides Based on the Classical Drude Oscillator

    PubMed Central

    2015-01-01

    A polarizable empirical force field based on the classical Drude oscillator is presented for the hexopyranose form of selected monosaccharides. Parameter optimization targeted quantum mechanical (QM) dipole moments, solute–water interaction energies, vibrational frequencies, and conformational energies. Validation of the model was based on experimental data on crystals, densities of aqueous-sugar solutions, diffusion constants of glucose, and rotational preferences of the exocyclic hydroxymethyl of d-glucose and d-galactose in aqueous solution, as well as additional QM data. Notably, the final model involves a single electrostatic model for all sixteen diastereomers of the monosaccharides, indicating the transferability of the polarizable model. The presented parameters are anticipated to lay the foundation for a comprehensive polarizable force field for saccharides that will be compatible with the polarizable Drude parameters for lipids and proteins, allowing for simulations of glycolipids and glycoproteins. PMID:24564643

  10. Polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator.

    PubMed

    He, Xibing; Lopes, Pedro E M; Mackerell, Alexander D

    2013-10-01

    A polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator is presented. The model is optimized with an emphasis on the transferability of the developed parameters among molecules of different sizes in this series and on the condensed-phase properties validated against experimental data. The importance of the explicit treatment of electronic polarizability in empirical force fields is demonstrated in the case of this series of molecules with vicinal hydroxyl groups that can form cooperative intra- and intermolecular hydrogen bonds. Compared to the CHARMM additive force field, improved treatment of the electrostatic interactions avoids overestimation of the gas-phase dipole moments, resulting in significant improvement in the treatment of the conformational energies, and leads to the correct balance of intra- and intermolecular hydrogen bonding of glycerol, as evidenced by the calculated heat of vaporization being in excellent agreement with experiment. Computed condensed phase data, including crystal lattice parameters and volumes and densities of aqueous solutions, are in better agreement with experimental data than the corresponding additive model. Such improvements are anticipated to significantly improve the treatment of polymers in general, including biological macromolecules.

  11. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  12. Network-based empirical Bayes methods for linear models with applications to genomic data.

    PubMed

    Li, Caiyan; Wei, Zhi; Li, Hongzhe

    2010-03-01

    Empirical Bayes methods are widely used in the analysis of microarray gene expression data in order to identify the differentially expressed genes or genes that are associated with other general phenotypes. Available methods often assume that genes are independent. However, genes are expected to function interactively and to form molecular modules to affect the phenotypes. In order to account for regulatory dependency among genes, we propose in this paper a network-based empirical Bayes method for analyzing genomic data in the framework of linear models, where the dependency of genes is modeled by a discrete Markov random field defined on a predefined biological network. This method provides a statistical framework for integrating the known biological network information into the analysis of genomic data. We present an iterated conditional mode algorithm for parameter estimation and for estimating the posterior probabilities using Gibbs sampling. We demonstrate the application of the proposed methods using simulations and analysis of a human brain aging microarray gene expression data set.
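
    The iterated conditional modes (ICM) step can be illustrated on a toy two-state Markov random field: each gene's indicator is updated to the state that maximizes its data likelihood times an agreement bonus with its network neighbors. The sketch below is a deliberately small stand-in (Gaussian likelihoods, hand-built chain network), not the authors' full linear-model framework.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
edges = [(i, i + 1) for i in range(n - 1)]       # toy "pathway" chain network
true_state = (np.arange(n) >= 15).astype(int)    # one differentially expressed module
z = rng.normal(2.0 * true_state, 1.0)            # gene-level test statistics

gamma = 1.0                                      # MRF coupling strength
nbrs = [[] for _ in range(n)]
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def log_lik(zi, state):
    """N(0,1) under the null (state 0), N(2,1) under DE (state 1)."""
    return -0.5 * (zi - 2.0 * state) ** 2

s = (z > 1.0).astype(int)                        # initial hard assignment
for _ in range(10):                              # ICM sweeps until (near) convergence
    for i in range(n):
        score = [log_lik(z[i], k) + gamma * sum(s[j] == k for j in nbrs[i])
                 for k in (0, 1)]
        s[i] = int(np.argmax(score))
print("misclassified genes:", int((s != true_state).sum()))
```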

  13. An empirical model for the plasma environment along Titan's orbit based on Cassini plasma observations

    NASA Astrophysics Data System (ADS)

    Smith, H. Todd; Rymer, Abigail M.

    2014-07-01

    Prior to Cassini's arrival at Saturn, the nitrogen-rich dense atmosphere of Titan was considered a significant, if not dominant, source of heavy ions in Saturn's magnetosphere. While nitrogen was detected in Saturn's magnetosphere based on Cassini observations, Enceladus rather than Titan appears to be the primary source. However, it is difficult to imagine that Titan's dense atmosphere is not a source of nitrogen. In this paper, we apply the Rymer et al. (2009) categorization of Titan plasma environments to the plasma environment along Titan's orbit when Titan is not present. We next categorize the Titan encounters that have occurred since Rymer et al. (2009). We also produce an empirical model giving the probabilistic occurrence of each plasma environment as a function of Saturn local time (SLT). Finally, we summarize the electron energy spectra in order to allow more accurate electron-impact interaction rates to be calculated for each plasma environment category. The combination of the full categorization versus SLT and the empirical model of the electron spectrum is critical for understanding the magnetospheric plasma and will allow more accurate modeling of the Titan plasma torus.

  14. Empirically based Suggested Insights into the Concept of False-Self Defense: Contributions From a Study on Normalization of Children With Disabilities.

    PubMed

    Eichengreen, Adva; Hoofien, Dan; Bachar, Eytan

    2016-02-01

    The concept of the false self has been used widely in psychoanalytic theory and practice but seldom in empirical research. In this empirically based study, elevated features of false-self defense were hypothetically associated with risk factors attendant on processes of rehabilitation and integration of children with disabilities, processes that encourage adaptation of the child to the able-bodied environment. Self-report questionnaires and in-depth interviews were conducted with 88 deaf and hard-of-hearing students and a comparison group of 88 hearing counterparts. Results demonstrate that despite the important contribution of rehabilitation and integration to the well-being of these children, these efforts may put the child at risk of increased use of the false-self defense. The empirical findings suggest two general theoretical conclusions: (1) The Winnicottian concept of the environment, usually confined to the parent-child relationship, can be understood more broadly as including cultural, social, and rehabilitational variables that both influence the parent-child relationship and operate independently of it. (2) The monolithic conceptualization of the false self may be more accurately unpacked to reveal two distinct subtypes: the compliant and the split false self.

  15. The children of divorce parenting intervention: outcome evaluation of an empirically based program.

    PubMed

    Wolchik, S A; West, S G; Westover, S; Sandler, I N; Martin, A; Lustig, J; Tein, J Y; Fisher, J

    1993-06-01

    Examined efficacy of an empirically based intervention using 70 divorced mothers who participated in a 12-session program or a wait-list condition. The program targeted five putative mediators: quality of the mother-child relationship, discipline, negative divorce events, contact with fathers, and support from nonparental adults. Posttest comparisons showed higher quality mother-child relationships and discipline, fewer negative divorce events, and better mental health outcomes for program participants than controls. More positive program effects occurred for mothers' than children's reports of variables and for families with poorest initial levels of functioning. Analyses indicated that improvement in the mother-child relationship partially mediated the effects of the program on mental health.

  16. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Confidence Interval Estimation for Sensitivity to the Early Diseased Stage Based on Empirical Likelihood.

    PubMed

    Dong, Tuochuan; Tian, Lili

    2015-01-01

    Many disease processes can be divided into three stages: the non-diseased stage, the early diseased stage, and the fully diseased stage. To assess the accuracy of diagnostic tests for such diseases, various summary indexes have been proposed, such as the volume under the surface (VUS), the partial volume under the surface (PVUS), and the sensitivity to the early diseased stage given the specificity and the sensitivity to the fully diseased stage (P2). This paper focuses on confidence interval estimation for P2 based on empirical likelihood. Simulation studies are carried out to assess the performance of the new methods compared to the existing parametric and nonparametric ones. A real dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) is analyzed.

  18. On the pathophysiology of migraine--links for "empirically based treatment" with neurofeedback.

    PubMed

    Kropp, Peter; Siniatchkin, Michael; Gerber, Wolf-Dieter

    2002-09-01

    Psychophysiological data support the concept that migraine is the result of cortical hypersensitivity, hyperactivity, and a lack of habituation. There is evidence that this is a brain-stem-related information processing dysfunction. This cortical activity reflects a periodicity between two migraine attacks, and it may be due to endogenous or exogenous factors. In the few days preceding the next attack, slow cortical potentials are highest and the habituation delay recorded experimentally during contingent negative variation is at a maximum. These striking features of slow cortical potentials are predictors of the next attack. The pronounced negativity can be fed back to the patient. The data support the hypothesis that a change in amplitudes of slow cortical potentials is caused by altered habituation during the recording session. This kind of neurofeedback can be characterized as "empirically based" because it improves habituation and proves to be clinically efficient.

  19. A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhao, Zhidong; Yang, Lei; Chen, Diandian; Luo, Yi

    2013-01-01

    In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization, and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs), and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness. PMID:23698274
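
    The feature-extraction and classification back end of such a system maps naturally onto scipy and scikit-learn, as sketched below on synthetic per-subject signals. The EEMD decomposition and preprocessing stages are elided for brevity; the Welch features, PCA, and K-NN steps mirror the pipeline described above.

```python
# pip install scipy scikit-learn
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs, n_subjects, n_records = 250.0, 10, 20

X, y = [], []
for subj in range(n_subjects):
    f1, f2 = 1.0 + 0.2 * subj, 8.0 + 0.5 * subj   # crude subject-specific spectrum
    for _ in range(n_records):
        t = np.arange(0.0, 4.0, 1.0 / fs)
        sig = (np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
               + 0.3 * rng.standard_normal(t.size))
        _, psd = welch(sig, fs=fs, nperseg=256)   # Welch spectral features
        X.append(psd)
        y.append(subj)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
pca = PCA(n_components=10).fit(Xtr)               # dimensionality reduction
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(Xtr), ytr)
print("identification accuracy:", knn.score(pca.transform(Xte), yte))
```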

  20. Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space

    PubMed Central

    Peters, Judith C.; Reithler, Joel; Goebel, Rainer

    2012-01-01

    Recent advances in Computer Vision and Experimental Neuroscience have provided insights into mechanisms underlying invariant object recognition. However, due to the different research aims in the two fields, models have tended to evolve independently. A tighter integration between computational and empirical work may contribute to cross-fertilized development of (neurobiologically plausible) computational models and computationally defined empirical theories, which can be incrementally merged into a comprehensive brain model. After reviewing theoretical and empirical work on invariant object perception, this article proposes a novel framework in which neural network activity and measured neuroimaging data are interfaced in a common representational space. This enables direct quantitative comparisons between predicted and observed activity patterns within and across multiple stages of object processing, which may help to clarify how high-order invariant representations are created from low-level features. Given the advent of columnar-level imaging with high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence. PMID:22408617

  1. Co-ordinating Co-operation in Complex Information Flows: A Theoretical Analysis and Empirical Description of Competence-determined Leadership. No. 61.

    ERIC Educational Resources Information Center

    Rasmussen, Ole Elstrup

    "Scanator" (a modern, ecological psychophysics encompassing a cohesive set of theories and methods for the study of mental functions) provides the basis for a study of "competence," the capacity for making sense in complex situations. The paper develops a functional model that forms a theoretical expression of the phenomenon of…

  2. Empirical Analysis of Human Capital, Learning Culture, and Knowledge Management as Antecedents to Organizational Performance: Theoretical and Practical Implications for Logistics Readiness Officer Force Development

    DTIC Science & Technology

    2014-03-27

    Thesis presented to the Faculty, Department of Operational Sciences...Command, in partial fulfillment of the requirements for the degree of Master of Science in Logistics Management. Matt J. Cherry, BS

  3. Web-based application for Data INterpolation Empirical Orthogonal Functions (DINEOF) analysis

    NASA Astrophysics Data System (ADS)

    Tomazic, Igor; Alvera-Azcarate, Aida; Barth, Alexander; Beckers, Jean-Marie

    2014-05-01

    DINEOF (Data INterpolating Empirical Orthogonal Functions) is a powerful tool based on EOF decomposition developed at the University of Liege/GHER for the reconstruction of missing data in satellite datasets, as well as for the reduction of noise and detection of outliers. DINEOF is openly available as a series of Fortran routines to be compiled by the user, and as binaries (that can be run directly without any compilation) for both Windows and Linux platforms. In order to facilitate the use of DINEOF and increase the number of interested users, we developed a web-based interface for DINEOF with the necessary parameters available to run a high-quality DINEOF analysis. This includes choosing a variable within a selected dataset, defining a domain, time range, and filtering criteria based on available variables in the dataset (e.g. quality flag, satellite zenith angle …), and defining the necessary DINEOF parameters. Results, including reconstructed data and calculated EOF modes, will be disseminated in NetCDF format using OpenDAP and a WMS server, allowing easy visualisation and analysis. First, we will include several satellite datasets of sea surface temperature and chlorophyll concentration obtained from the MyOcean data centre and already remapped to a regular grid (L3C). Later, based on users' requests, we plan to extend the number of datasets available for reconstruction.
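
    At its core, DINEOF fills gaps by iterating a truncated EOF (SVD) reconstruction until the imputed values converge. The sketch below is a bare-bones numpy version; the operational tool additionally removes a mean field, cross-validates the optimal number of modes, and handles multivariate datasets.

```python
import numpy as np

def dineof(X, n_modes=3, n_iter=100):
    """Fill NaN gaps in a (space x time) matrix by iterated truncated SVD."""
    gaps = np.isnan(X)
    Xf = np.where(gaps, 0.0, X)            # initialize gaps with the (zero) mean
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(Xf, full_matrices=False)
        rec = (U[:, :n_modes] * sv[:n_modes]) @ Vt[:n_modes]
        Xf[gaps] = rec[gaps]               # update only the missing entries
    return Xf

# Toy field: one spatial EOF oscillating in time, 30% of values removed
space = np.sin(np.linspace(0.0, np.pi, 50))[:, None]
time = np.cos(np.linspace(0.0, 6.0 * np.pi, 80))[None, :]
X = space * time + 0.01 * np.random.randn(50, 80)
gaps = np.random.rand(*X.shape) < 0.3
Xg = X.copy()
Xg[gaps] = np.nan
err = np.abs(dineof(Xg)[gaps] - X[gaps]).mean()
print(f"mean reconstruction error at the gaps: {err:.3f}")
```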

  4. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, in respect of participants, process and products, the present empirical study was conducted to find the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interview and the analysis of both the CET-4 teaching and testing…

  5. Genetic-program-based data mining for hybrid decision-theoretic algorithms and theories

    NASA Astrophysics Data System (ADS)

    Smith, James F., III

    2005-03-01

    A genetic program (GP) based data mining (DM) procedure has been developed that automatically creates decision theoretic algorithms. A GP is an algorithm that uses the theory of evolution to automatically evolve other computer programs or mathematical expressions. The output of the GP is a computer program or mathematical expression that is optimal in the sense that it maximizes a fitness function. The decision theoretic algorithms created by the DM algorithm are typically designed for making real-time decisions about the behavior of systems. The database that is mined by the DM typically consists of many scenarios characterized by sensor output and labeled by experts as to the status of the scenario. The DM procedure calls a GP as a data mining function. The GP incorporates the database and the experts' rules into its fitness function to evolve an optimal decision theoretic algorithm. A decision theoretic algorithm created through this process is discussed, as well as validation efforts showing its utility. GP based data mining to determine equations related to scientific theories, and automatic simplification methods based on computer algebra, will also be discussed.
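
    The evolutionary loop itself is compact. The sketch below evolves the weight vector of a fixed-form linear decision rule against a labeled scenario database, with fitness defined as classification accuracy. A real GP evolves expression trees rather than a parameter vector, so treat this as a simplified stand-in for the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Labeled "scenario database": sensor outputs X, expert labels y in {0, 1}
X = rng.normal(size=(500, 4))
w_true = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ w_true + 0.3 * rng.normal(size=500) > 0).astype(int)

def fitness(w):
    """Fraction of scenarios the rule sign(w . x) classifies correctly."""
    return float(((X @ w > 0).astype(int) == y).mean())

pop = rng.normal(size=(40, 4))                       # initial population of rules
for generation in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]          # truncation selection
    children = parents + 0.2 * rng.normal(size=parents.shape)  # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 3))
```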

  6. An Empirical Model of Solar Indices and Hemispheric Power based on DMSP/SSUSI Data

    NASA Astrophysics Data System (ADS)

    Shaikh, D.; Jones, J.

    2014-09-01

    Aurorae are produced by the collision of charged energetic particles, typically electrons, with the Earth's neutral atmosphere, particularly in the high latitude regions. These particles originate predominantly from the solar wind, traverse the Earth's magnetosphere, and precipitate into the Earth's atmosphere, resulting in the emission of radiation in various frequency ranges. During this process, energetic electrons deposit their kinetic energy (tens of keV) in the upper atmosphere. The rate of electron kinetic energy deposited over the northern or southern region is called the electron hemispheric power (Hpe), measured in gigawatts (GW). Since the origin and dynamics of these precipitating charged particles are intimately connected to the kinetic and magnetic activity taking place in our Sun, they can be used as a proxy to determine many physical processes that drive space weather at Earth. In this paper, we examine the correlations that may exist between Hpe and various geomagnetic parameters such as Kp, Ap, solar flux, and sunspot number. For this purpose, we evaluate a year (2012) of data from the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) of the Defense Meteorological Satellite Program (DMSP) Flight 18 satellite. We find substantially strong correlations between Hpe and Kp, Ap, the sunspot number (SSN), and the solar flux density. The practical application of our empirical model is multifold. (i) We can determine/forecast the Kp index directly from the electron flux density and use it to drive a variety of space weather models that rely heavily on Kp input. (ii) The Kp and Ap forecasts from our empirical correlation model could complement traditional ground-based magnetometer data.
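
    An empirical correlation model of this kind reduces to a regression of one index on another. The sketch below fits and scores a linear Hpe-Kp relation on hypothetical daily values; the numbers are illustrative, not SSUSI measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical daily values for one year
kp = np.clip(rng.gamma(2.0, 1.0, 365), 0.0, 9.0)       # Kp-like index
hpe = 5.0 + 12.0 * kp + 8.0 * rng.normal(size=365)     # hemispheric power, GW

r, p = pearsonr(kp, hpe)
slope, intercept = np.polyfit(kp, hpe, 1)
print(f"r = {r:.2f} (p = {p:.1e}); Hpe ~ {slope:.1f}*Kp + {intercept:.1f} GW")

# Inverting the fit yields a Kp proxy from an Hpe measurement
print(f"Hpe = 100 GW -> Kp ~ {(100.0 - intercept) / slope:.1f}")
```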

  7. Written institutional ethics policies on euthanasia: an empirical-based organizational-ethical framework.

    PubMed

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Schotsmans, Paul; Gastmans, Chris

    2014-05-01

    As euthanasia has become a widely debated issue in many Western countries, hospitals and nursing homes especially are increasingly being confronted with this ethically sensitive societal issue. The focus of this paper is how healthcare institutions can deal with euthanasia requests on an organizational level by means of a written institutional ethics policy. The general aim is to make a critical analysis of whether these policies can be considered organizational-ethical instruments that support healthcare institutions in taking institutional responsibility for dealing with euthanasia requests. By means of an interpretative analysis, we conducted a process of reinterpretation of results of former Belgian empirical studies on written institutional ethics policies on euthanasia in dialogue with the existing international literature. The study findings revealed that legal regulations, ethical and care-oriented aspects strongly affected the development, the content, and the impact of written institutional ethics policies on euthanasia. Hence, these three cornerstones (law, care, and ethics) constituted the basis for the empirical-based organizational-ethical framework for written institutional ethics policies on euthanasia that is presented in this paper. However, having a euthanasia policy does not automatically lead to more legal transparency, or to a more professional and ethical care practice. The study findings suggest that the development and implementation of an ethics policy on euthanasia as an organizational-ethical instrument should be considered as a dynamic process. Administrators and ethics committees must take responsibility to actively create an ethical climate supporting care providers who have to deal with ethical dilemmas in their practice.

  8. The Role of Social Network Technologies in Online Health Promotion: A Narrative Review of Theoretical and Empirical Factors Influencing Intervention Effectiveness

    PubMed Central

    Kennedy, Catriona M; Buchan, Iain; Powell, John; Ainsworth, John

    2015-01-01

    Background Social network technologies have become part of health education and wider health promotion—either by design or happenstance. Social support, peer pressure, and information sharing in online communities may affect health behaviors. If there are positive and sustained effects, then social network technologies could increase the effectiveness and efficiency of many public health campaigns. Social media alone, however, may be insufficient to promote health. Furthermore, there may be unintended and potentially harmful consequences of inaccurate or misleading health information. Given these uncertainties, there is a need to understand and synthesize the evidence base for the use of online social networking as part of health promoting interventions to inform future research and practice. Objective Our aim was to review the research on the integration of expert-led health promotion interventions with online social networking in order to determine the extent to which the complementary benefits of each are understood and used. We asked, in particular, (1) How is effectiveness being measured and what are the specific problems in effecting health behavior change?, and (2) To what extent is the designated role of social networking grounded in theory? Methods The narrative synthesis approach to literature review was used to analyze the existing evidence. We searched the indexed scientific literature using keywords associated with health promotion and social networking. The papers included were only those making substantial study of both social networking and health promotion—either reporting the results of the intervention or detailing evidence-based plans. General papers about social networking and health were not included. Results The search identified 162 potentially relevant documents after review of titles and abstracts. Of these, 42 satisfied the inclusion criteria after full-text review. Six studies described randomized controlled trials (RCTs) evaluating

  9. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.

  10. Empirical Evaluation Indicators in Thai Higher Education: Theory-Based Multidimensional Learners' Assessment

    ERIC Educational Resources Information Center

    Sritanyarat, Dawisa; Russ-Eft, Darlene

    2016-01-01

    This study proposed empirical indicators which can be validated and adopted in higher education institutions to evaluate quality of teaching and learning, and to serve as an evaluation criteria for human resource management and development of higher institutions in Thailand. The main purpose of this study was to develop empirical indicators of a…

  11. Impact of Inadequate Empirical Therapy on the Mortality of Patients with Bloodstream Infections: a Propensity Score-Based Analysis

    PubMed Central

    Retamar, Pilar; Portillo, María M.; López-Prieto, María Dolores; Rodríguez-López, Fernando; de Cueto, Marina; García, María V.; Gómez, María J.; del Arco, Alfonso; Muñoz, Angel; Sánchez-Porto, Antonio; Torres-Tortosa, Manuel; Martín-Aspas, Andrés; Arroyo, Ascensión; García-Figueras, Carolina; Acosta, Federico; Corzo, Juan E.; León-Ruiz, Laura; Escobar-Lara, Trinidad

    2012-01-01

    The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort study including all BSI episodes in adult patients was conducted in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented. PMID:22005999
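
    Propensity score-based matching can be sketched in a few lines of scikit-learn: fit a treatment model, match treated to untreated subjects on the logit of the score, and compare outcomes in the matched sample. Everything below is synthetic and illustrative, not the study's data or its exact estimator.

```python
# pip install scikit-learn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 800
X = rng.normal(size=(n, 3))   # covariates (e.g. age, Pitt, Charlson, scaled)
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([0.5, 0.8, 0.3])))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.7 * t + X @ np.array([0.6, 0.9, 0.4])))))

ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
logit = np.log(ps / (1.0 - ps)).reshape(-1, 1)

# 1:1 nearest-neighbour matching of treated to untreated on the logit scale
nn = NearestNeighbors(n_neighbors=1).fit(logit[t == 0])
_, idx = nn.kneighbors(logit[t == 1])
y_treated, y_control = y[t == 1], y[t == 0][idx.ravel()]

# Crude matched odds ratio for mortality
a, b = y_treated.sum(), (1 - y_treated).sum()
c, d = y_control.sum(), (1 - y_control).sum()
print(f"matched OR = {(a * d) / (b * c):.2f}")
```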

  12. Is Project Based Learning More Effective than Direct Instruction in School Science Classrooms? An Analysis of the Empirical Research Evidence

    NASA Astrophysics Data System (ADS)

    Dann, Clifford

    An increasingly loud call by parents, school administrators, teachers, and even business leaders for "authentic learning", emphasizing both group-work and problem solving, has led to growing enthusiasm for inquiry-based learning over the past decade. Although "inquiry" can be defined in many ways, a curriculum called "project-based learning" has recently emerged as the inquiry practice-of-choice with roots in the educational constructivism that emerged in the mid-twentieth century. Often, project-based learning is framed as an alternative instructional strategy to direct instruction for maximizing student content knowledge. This study investigates the empirical evidence for such a comparison while also evaluating the overall quality of the available studies in the light of accepted standards for educational research. Specifically, this thesis investigates what the body of quantitative research says about the efficacy of project-based learning vs. direct instruction when considering student acquisition of content knowledge in science classrooms. Further, existing limitations of the research pertaining to project based learning and secondary school education are explored. The thesis concludes with a discussion of where and how we should focus our empirical efforts in the future. The research revealed that the available empirical research contains flaws in both design and instrumentation. In particular, randomization is poor amongst all the studies considered. The empirical evidence indicates that project-based learning curricula improved student content knowledge but that, while the results were statistically significant, increases in raw test scores were marginal.

  13. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, air pollution mitigation, etc. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications, and thus carry inherent assumptions that may be only rarely applicable to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as a critical yet poorly studied component of urban green space performance prediction, and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small-scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the building's 500 m2 green roof, with a roof membrane, drainage layer, and 10 cm media depth, planted with a variety of Sedum species. Soil moisture sensors and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass-balance approach during periods absent of precipitation and drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have
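
    The mass-balance quantification of AET lends itself to a one-line computation: during intervals with no precipitation and no drainage, every kilogram of mass lost from the weighing lysimeter corresponds to 1 mm of water depth per square metre of footprint. A minimal sketch, with illustrative (not measured) mass readings:

    ```python
    # Mass-balance AET from a weighing lysimeter: 1 kg of water lost over
    # 1 m^2 equals 1 mm of evapotranspired depth. Illustrative numbers only.
    AREA_M2 = 0.6 * 1.2  # lysimeter footprint given in the abstract

    def aet_mm(mass_start_kg: float, mass_end_kg: float, area_m2: float = AREA_M2) -> float:
        """Actual evapotranspiration depth (mm) over a rain- and drainage-free interval."""
        return (mass_start_kg - mass_end_kg) / area_m2

    # Example: 1.8 kg lost over a dry 24 h interval -> 2.5 mm/day of AET.
    print(f"AET = {aet_mm(151.8, 150.0):.2f} mm/day")
    ```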

  14. Theoretical Bases for Teacher- and Peer-Delivered Sexual Health Promotion

    ERIC Educational Resources Information Center

    Wight, Daniel

    2008-01-01

    Purpose: This paper seeks to explore the theoretical bases for teacher-delivered and peer-delivered sexual health promotion and education. Design/methodology/approach: The first section briefly outlines the main theories informing sexual health interventions for young people, and the second discusses their implications for modes of delivery.…

  15. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer theoretical support and inspiration for BE instructors developing BE curricula for business contexts. It discusses how the theory of needs analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  16. Effects of a Theoretically Based Large-Scale Reading Intervention in a Multicultural Urban School District

    ERIC Educational Resources Information Center

    Sadoski, Mark; Willson, Victor L.

    2006-01-01

    In 1997, Lindamood-Bell Learning Processes partnered with Pueblo School District 60 (PSD60), a heavily minority urban district with many Title I schools, to implement a theoretically based initiative designed to improve Colorado Student Assessment Program reading scores. In this study, the authors examined achievement in Grades 3-5 during the…

  17. Problem decomposition and domain-based parallelism via group theoretic principles

    SciTech Connect

    Makai, M.; Orechwa, Y.

    1997-10-01

    A systematic approach based on group theoretic principles is presented for the decomposition of the solution algorithm of boundary value problems specified over symmetric domains, which is amenable to implementation for parallel computation. The principles are applied to the linear transport equation in general, and the decomposition is demonstrated for a square node in particular.

  18. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) is a weak, nonlinear, non-stationary signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high- and low-frequency noise, powerline interference, and baseline wander. Hence, the removal of noise from the ECG signal is a vital step in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, combinations of EMD with other methods, and the ensemble EMD (EEMD) technique. EMD is a promising, though not perfect, method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG signal denoising are outlined.
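
    To make the partial-reconstruction idea concrete, a minimal sketch of EMD-based high-frequency noise removal follows, using the third-party PyEMD package (installed as EMD-signal). The toy waveform stands in for an ECG; a real pipeline would select the noisy IMFs with a statistical criterion rather than simply discarding the first one.

    ```python
    # EMD-based denoising sketch: decompose, then remove the highest-frequency
    # IMF, which carries most of the additive noise, keeping everything else.
    import numpy as np
    from PyEMD import EMD

    fs = 360  # Hz, a common ECG sampling rate
    t = np.arange(0, 4, 1 / fs)
    clean = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 8 * t)  # toy "ECG"
    noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)

    imfs = EMD().emd(noisy)     # IMFs ordered from highest to lowest frequency
    denoised = noisy - imfs[0]  # subtract IMF 1 only

    rmse_before = np.sqrt(np.mean((noisy - clean) ** 2))
    rmse_after = np.sqrt(np.mean((denoised - clean) ** 2))
    print(f"{len(imfs)} IMFs; RMSE {rmse_before:.3f} -> {rmse_after:.3f}")
    ```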

  19. An empirically based steady state friction law and implications for fault stability.

    PubMed

    Spagnuolo, E; Nielsen, S; Violay, M; Di Toro, G

    2016-04-16

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
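
    The abstract does not reproduce the functional form of the proposed MFL, so the sketch below evaluates the classical Dieterich-Ruina steady state that it modifies, mu_ss(V) = mu0 + (a - b) ln(V/V0), together with the critical stiffness k_c = (b - a) sigma_n / Dc that governs stability for a velocity-weakening fault. All parameter values are illustrative laboratory-scale numbers, not the paper's.

    ```python
    # Classical steady-state rate-and-state friction and critical stiffness.
    # Illustrative parameters; the paper's MFL adds extra velocity dependence
    # in the 1-20 cm/s range that is not modeled here.
    import numpy as np

    mu0, V0 = 0.6, 1e-6        # reference friction at V0 = 1 um/s
    a, b = 0.010, 0.015        # rate-and-state constitutive parameters (b > a)
    sigma_n, Dc = 10e6, 1e-5   # normal stress (Pa), critical slip distance (m)

    V = np.logspace(-7, 0.5, 8)            # 0.1 um/s to ~3 m/s, the MFL's range
    mu_ss = mu0 + (a - b) * np.log(V / V0)
    k_c = (b - a) * sigma_n / Dc           # stiffness below which slip is unstable

    for v, m in zip(V, mu_ss):
        print(f"V = {v:9.2e} m/s  ->  mu_ss = {m:.3f}")
    print(f"critical stiffness k_c = {k_c:.2e} Pa/m")
    ```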

  20. Feasibility of an empirically based program for parents of preschoolers with autism spectrum disorder.

    PubMed

    Dababnah, Sarah; Parish, Susan L

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, The Incredible Years, tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the intervention. Quantitative assessments of the program revealed that fidelity was generally maintained, with the exception of program-specific videos. Qualitative data from individual post-intervention interviews indicated that parents benefited most from child emotion regulation strategies, play-based child behavior skills, parent stress management, social support, and visual resources. More work is needed to further refine the program to address parent self-care, partner relationships, and the diverse behavioral and communication challenges of children across the autism spectrum. Furthermore, parent access and retention could potentially be increased by providing in-home childcare vouchers and a range of times and locations in which to offer the program. The findings suggest The Incredible Years is a feasible intervention for parents seeking additional support for child- and family-related challenges and offers guidance to those communities currently using The Incredible Years or other related parenting programs with families of children with autism spectrum disorder.

  1. Predicting Protein Secondary Structure Using Consensus Data Mining (CDM) Based on Empirical Statistics and Evolutionary Information.

    PubMed

    Kandoi, Gaurav; Leelananda, Sumudu P; Jernigan, Robert L; Sen, Taner Z

    2017-01-01

    Predicting the secondary structure of a protein from its sequence still remains a challenging problem. Prediction accuracies remain around 80%, even across very diverse methods. Using evolutionary information, and machine learning algorithms in particular, has had the most impact. In this chapter, we will first define secondary structures, then we will review the Consensus Data Mining (CDM) technique based on the robust GOR algorithm and the Fragment Database Mining (FDM) approach. GOR V is an empirical method utilizing a sliding window approach to model the secondary structural elements of a protein by making use of generalized evolutionary information. FDM uses data mining from experimental structure fragments, and is able to successfully predict the secondary structure of a protein by combining experimentally determined structural fragments based on sequence similarities of the fragments. The CDM method combines predictions from GOR V and FDM in a hierarchical manner to produce consensus predictions for secondary structure: if suitable structural fragments are not available for a sequence, it falls back on GOR V for the prediction. The online server of CDM is available at http://gor.bb.iastate.edu/cdm/.
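
    A minimal sketch of the hierarchical consensus rule described above: prefer the fragment-based (FDM) prediction wherever fragments cover a residue, and fall back on GOR V elsewhere. The per-residue prediction strings are hypothetical inputs; the actual CDM server combines the two methods with trained weighting.

    ```python
    # Hierarchical consensus of two per-residue secondary-structure predictions
    # (H = helix, E = strand, C = coil); '-' marks residues FDM cannot cover.
    def consensus(gor_pred: str, fdm_pred: str) -> str:
        assert len(gor_pred) == len(fdm_pred)
        return "".join(f if f != "-" else g for g, f in zip(gor_pred, fdm_pred))

    gor = "CCHHHHHCCEEEECC"
    fdm = "CC----HCCEEEECC"   # gap: no structural fragment matched this region
    print(consensus(gor, fdm))  # GOR V fills the four uncovered residues
    ```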

  2. An empirically based steady state friction law and implications for fault stability

    PubMed Central

    Nielsen, S.; Violay, M.; Di Toro, G.

    2016-01-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest. PMID:27667875

  3. An empirically based steady state friction law and implications for fault stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.

    2016-04-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces on slip and time. The relevance of the RSFL for earthquake mechanics is that a few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.

  4. Theoretical and Experimental Study on Secondary Piezoelectric Effect Based on PZT-5

    NASA Astrophysics Data System (ADS)

    Zhang, Z. H.; Sun, B. Y.; Shi, L. P.

    2006-10-01

    The purpose of this paper is to confirm the existence of the secondary and multiple piezoelectric effects theoretically and experimentally. Based on the Heckmann model, which shows the relationship among mechanical, electrical, and thermal energy, and on a physical model covering mechanical, electrical, thermal, and magnetic energy, a theoretical analysis of the multiple piezoelectric effect is made through four kinds of piezoelectric equations. Experimental research on the secondary direct piezoelectric effect is conducted using PZT-5 stacks. The results of the experiment indicate that the charge generated by the secondary direct piezoelectric effect, as well as the displacement caused by the first converse piezoelectric effect, remains linear in the applied voltage.

  5. How much does participatory flood management contribute to stakeholders' social capacity building? Empirical findings based on a triangulation of three evaluation approaches

    NASA Astrophysics Data System (ADS)

    Buchecker, M.; Menzel, S.; Home, R.

    2013-06-01

    Recent literature suggests that dialogic forms of risk communication are more effective in building stakeholders' hazard-related social capacities. In spite of the high theoretical expectations, there is a lack of unequivocal empirical evidence on the relevance of these effects. This is mainly due to the methodological limitations of the existing evaluation approaches. In our paper we aim to elicit the contribution of participatory river revitalisation projects to stakeholders' social capacity building by triangulating the findings of three evaluation studies that were based on different approaches: a field-experimental, a qualitative long-term ex-post, and a cross-sectional household survey approach. The results revealed that social learning and avoiding the loss of trust were more relevant benefits of participatory flood management than acceptance building. The results suggest that stakeholder involvement should be more explicitly designed as a tool for long-term social learning.

  6. Computing theoretical rates of part C eligibility based on developmental delays.

    PubMed

    Rosenberg, Steven A; Ellison, Misoo C; Fast, Bruce; Robinson, Cordelia C; Lazar, Radu

    2013-02-01

    Part C early intervention is a nationwide program that serves infants and toddlers who have developmental delays. This article presents a methodology for computing a theoretical estimate of the proportion of children who are likely to be eligible for Part C services based on delays in any of the 5 developmental domains (cognitive, motor, communication, social-emotional and adaptive) that are assessed to determine eligibility. Rates of developmental delays were estimated from a multivariate normal cumulative distribution function. This approach calculates theoretical rates of occurrence for conditions that are defined in terms of standard deviations from the mean on several variables that are approximately normally distributed. Evidence is presented to suggest that the procedures described produce accurate estimates of rates of child developmental delays. The methodology used in this study provides a useful tool for computing theoretical rates of occurrence of developmental delays that make children candidates for early intervention.
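
    A minimal sketch of the computation the article describes, under invented parameters: the theoretical eligibility rate as the probability that a child falls at least 1.5 SD below the mean in at least one of 5 correlated domains, evaluated from the multivariate normal CDF. The cutoff and the common inter-domain correlation of 0.5 are illustrative, not the article's values.

    ```python
    # Theoretical rate of delay in >= 1 of k correlated, standard-normal domains.
    import numpy as np
    from scipy.stats import multivariate_normal

    k, cutoff_sd, rho = 5, -1.5, 0.5
    cov = np.full((k, k), rho)
    np.fill_diagonal(cov, 1.0)

    # By the symmetry of the zero-mean MVN, P(all domains above the cutoff)
    # equals the CDF evaluated at -cutoff in every coordinate.
    p_no_delay = multivariate_normal.cdf(np.full(k, -cutoff_sd),
                                         mean=np.zeros(k), cov=cov)
    print(f"theoretical eligibility rate: {1 - p_no_delay:.1%}")
    ```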

  7. Theoretical analysis of cell separation based on cell surface marker density.

    PubMed

    Chalmers, J J; Zborowski, M; Moore, L; Mandal, S; Fang, B B; Sun, L

    1998-07-05

    A theoretical analysis was performed to determine the number of fractions into which a multidisperse, immunomagnetically labeled cell population can be separated based on the surface marker (antigen) density. A number of assumptions were made in this analysis: that there is a proportionality between the number of surface markers on the cell surface and the number of immunomagnetic labels bound; that this surface marker density is independent of the cell diameter; and that only magnetic and drag forces act on the cell. Due to the normal distribution of cell diameters, a "randomizing" effect enters into the analysis, and an analogy to the "theoretical plate" analysis of distillation, adsorption, and chromatography can be made. Using the experimentally determined normal distribution of cell diameters for human lymphocytes and a breast cancer cell line, together with fluorescence-activated cell sorting data of specific surface marker distributions, examples of theoretical plate calculations were made and discussed.

  8. The Potential for Empirically Based Estimates of Expected Progress for Students with Learning Disabilities: Legal and Conceptual Issues.

    ERIC Educational Resources Information Center

    Stone, C. Addison; Doane, J. Abram

    2001-01-01

    The purpose of this article is to spark discussion regarding the value and feasibility of empirically based procedures for goal setting and evaluation of educational services. Recent legal decisions and policy debates point to the need for clearer criteria in decisions regarding appropriate educational services. Possible roles for school…

  9. Empirically Based Phenotypic Profiles of Children with Pervasive Developmental Disorders: Interpretation in the Light of the DSM-5

    ERIC Educational Resources Information Center

    Greaves-Lord, Kirstin; Eussen, Mart L. J. M.; Verhulst, Frank C.; Minderaa, Ruud B.; Mandy, William; Hudziak, James J.; Steenhuis, Mark Peter; de Nijs, Pieter F.; Hartman, Catharina A.

    2013-01-01

    This study aimed to contribute to the Diagnostic and Statistical Manual (DSM) debates on the conceptualization of autism by investigating (1) whether empirically based distinct phenotypic profiles could be distinguished within a sample of mainly cognitively able children with pervasive developmental disorder (PDD), and (2) how profiles related to…

  10. An Empirical Introduction to the Concept of Chemical Element Based on Van Hiele's Theory of Level Transitions

    ERIC Educational Resources Information Center

    Vogelezang, Michiel; Van Berkel, Berry; Verdonk, Adri

    2015-01-01

    Between 1970 and 1990, the Dutch working group "Empirical Introduction to Chemistry" developed a secondary school chemistry education curriculum based on the educational vision of the mathematicians van Hiele and van Hiele-Geldof. This approach viewed learning as a process in which students must go through discontinuous level transitions…

  11. Empirically Guided Coordination of Multiple Evidence-Based Treatments: An Illustration of Relevance Mapping in Children's Mental Health Services

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Bernstein, Adam; Daleiden, Eric L.

    2011-01-01

    Objective: Despite substantial progress in the development and identification of psychosocial evidence-based treatments (EBTs) in mental health, there is minimal empirical guidance for selecting an optimal "set" of EBTs maximally applicable and generalizable to a chosen service sample. Relevance mapping is a proposed methodology that…

  12. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    NASA Astrophysics Data System (ADS)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performance of these different DIC approaches has been investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin of this behavior and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than those associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

  13. Empirical likelihood-based confidence intervals for length-biased data

    PubMed Central

    Ning, J.; Qin, J.; Asgharian, M.; Shen, Y.

    2013-01-01

    Logistical or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross-sectional prevalent cohort study, for which we recruit prevalent cases, i.e. subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such, they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of error of commonly used summary statistics. We fill this gap using empirical likelihood-based confidence intervals, adapting the method to right-censored length-biased survival data. Both large- and small-sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. PMID:23027662

  14. Polarizable Empirical Force Field for Aromatic Compounds Based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E. M.; Lamoureux, Guillaume; Roux, Benoit; MacKerell, Alexander D.

    2008-01-01

    The polarizable empirical CHARMM force field based on the classical Drude oscillator has been extended to the aromatic compounds benzene and toluene. Parameters were optimized for benzene and then transferred directly to toluene, with parameters for the methyl moiety of toluene taken from the previously published work on the alkanes. Optimization of all parameters was performed against an extensive set of quantum mechanical and experimental data. Ab initio data were used for determination of the electrostatic parameters, the vibrational analysis, and in the optimization of the relative magnitudes of the Lennard-Jones parameters. The absolute values of the Lennard-Jones parameters were determined by comparing computed and experimental heats of vaporization, molecular volumes, free energies of hydration and dielectric constants. The newly developed parameter set was extensively tested against additional experimental data such as vibrational spectra in the condensed phase, diffusion constants, heat capacities at constant pressure and isothermal compressibilities, including data as a function of temperature. Moreover, the structure of liquid benzene, liquid toluene and of solutions of each in water were studied. In the case of benzene, the computed and experimental total distribution functions were compared, with the developed model shown to be in excellent agreement with experiment. PMID:17388420
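
    The core relation behind such polarizable models is compact enough to sketch: a Drude particle of charge q_D tethered to its parent atom by a harmonic spring of stiffness k_D displaces by d = q_D E / k_D in a field E, so the model reproduces an atomic polarizability alpha = q_D^2 / k_D. The SI numbers below are illustrative, not the paper's benzene or toluene parameters.

    ```python
    # Classical Drude oscillator: induced dipole and polarizability.
    E_FIELD = 1.0e9          # external field, V/m (illustrative)
    q_d = -1.6e-19           # Drude charge, C (one electron charge, illustrative)
    k_d = 4.0e2              # harmonic spring constant, N/m (illustrative)

    d = q_d * E_FIELD / k_d  # equilibrium displacement of the Drude particle, m
    mu = q_d * d             # induced dipole moment, C*m
    alpha = q_d ** 2 / k_d   # polarizability, C^2*m/N (equals mu / E_FIELD)
    print(f"alpha = {alpha:.3e} C^2 m/N, induced dipole = {mu:.3e} C m")
    ```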

  15. Interdigitated silver-polymer-based antibacterial surface system activated by oligodynamic iontophoresis - an empirical characterization study.

    PubMed

    Shirwaiker, Rohan A; Wysk, Richard A; Kariyawasam, Subhashinie; Voigt, Robert C; Carrion, Hector; Nembhard, Harriet Black

    2014-02-01

    There is a pressing need to control the occurrence of nosocomial infections due to their detrimental effects on patient well-being and rising treatment costs. To prevent the contact transmission of such infections via health-critical surfaces, a prophylactic surface system that consists of an interdigitated array of oppositely charged silver electrodes with polymer separations and utilizes oligodynamic iontophoresis has been recently developed. This paper presents a systematic study that empirically characterizes the effects of the surface system parameters on its antibacterial efficacy, and validates the system's effectiveness. In the first part of the study, a fractional factorial design of experiments (DOE) was conducted to identify the statistically significant system parameters. The data were used to develop a first-order response surface model to predict the system's antibacterial efficacy based on the input parameters. In the second part of the study, the effectiveness of the surface system was validated by evaluating it against four bacterial species responsible for several nosocomial infections - Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa, and Enterococcus faecalis - alongside non-antibacterial polymer (acrylic) control surfaces. The system demonstrated statistically significant efficacy against all four bacteria. The results indicate that given a constant total effective surface area, a system designed with micro-scale features (minimum feature width: 20 μm) and activated by a 15 μA direct current will provide the most effective antibacterial prophylaxis.

  16. Topological phase transition of single-crystal Bi based on empirical tight-binding calculations

    NASA Astrophysics Data System (ADS)

    Ohtsubo, Yoshiyuki; Kimura, Shin-ichi

    2016-12-01

    The topological order of single-crystal Bi and its surface states on the (111) surface are studied in detail based on empirical tight-binding (TB) calculations. New TB parameters are presented that are used to calculate the surface states of semi-infinite single-crystal Bi(111), which agree with the experimental angle-resolved photoelectron spectroscopy results. The influence of the crystal lattice distortion is surveyed and it is revealed that a topological phase transition is driven by in-plane expansion with topologically non-trivial bulk bands. In contrast with the semi-infinite system, the surface-state dispersions on finite-thickness slabs are non-trivial irrespective of the bulk topological order. The role of the interaction between the top and bottom surfaces in the slab is systematically studied, and it is revealed that a very thick slab is required to properly obtain the bulk topological order of Bi from the (111) surface state: above 150 biatomic layers in this case.

  17. Empirical prediction of Indian summer monsoon rainfall with different lead periods based on global SST anomalies

    NASA Astrophysics Data System (ADS)

    Pai, D. S.; Rajeevan, M.

    2006-02-01

    The main objective of this study was to develop empirical models with different seasonal lead time periods for the long range prediction of seasonal (June to September) Indian summer monsoon rainfall (ISMR). For this purpose, 13 predictors having significant and stable relationships with ISMR were derived by correlation analysis of global grid point seasonal Sea-Surface Temperature (SST) anomalies and the tendency in the SST anomalies. The time lags of the seasonal SST anomalies were varied from 1 season to 4 years behind the reference monsoon season. The basic SST data set used was the monthly NOAA Extended Reconstructed Global SST (ERSST) data at 2° × 2° spatial grid for the period 1951-2003. The time lags of the 13 predictors derived from various areas of all three tropical ocean basins (Indian, Pacific and Atlantic Oceans) varied from 1 season to 3 years. Based on these inter-correlated predictors, three predictor subsets A, B and C were formed with prediction lead time periods of 0, 1 and 2 seasons, respectively, from the beginning of the monsoon season. The selected principal components (PCs) of these predictor sets were used as the input parameters for models A, B and C, respectively. The model development period was 1955-1984. The correct model size was derived using the all-possible-regressions procedure and Mallows' Cp statistic.
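
    A minimal sketch of the principal-component regression scheme the abstract outlines: standardize the SST-derived predictors, extract leading principal components, and regress seasonal rainfall on them. The random arrays stand in for the 1955-1984 development data, and the predictor selection by all-possible regressions with Mallows' Cp is omitted.

    ```python
    # Principal-component regression sketch for seasonal rainfall prediction.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 13))  # 30 development years x 13 SST predictors
    y = rng.normal(size=30)        # seasonal (JJAS) rainfall anomaly, stand-in

    X_std = StandardScaler().fit_transform(X)
    pcs = PCA(n_components=3).fit_transform(X_std)  # leading PCs as model inputs
    model = LinearRegression().fit(pcs, y)
    print(f"in-sample R^2 with 3 PCs: {model.score(pcs, y):.2f}")
    ```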

  18. Ship classification using nonlinear features of radiated sound: an approach based on empirical mode decomposition.

    PubMed

    Bao, Fei; Li, Chen; Wang, Xinlong; Wang, Qingfu; Du, Shuanping

    2010-07-01

    Classification of ship-radiated underwater sound is one of the most important and challenging subjects in underwater acoustical signal processing. An approach to ship classification is proposed in this work based on analysis of ship-radiated acoustical noise in subspaces of intrinsic mode functions attained via ensemble empirical mode decomposition. It is shown that detection and acquisition of stable and reliable nonlinear features become practically feasible by nonlinear analysis of the time series of individual decomposed components, each of which is simple enough and well represents an oscillatory mode of ship dynamics. Surrogate and nonlinear predictability analyses are conducted to probe and measure the nonlinearity and regularity. The results of both methods, which verify each other, substantiate that ship-radiated noises contain components with deterministic nonlinear features that serve well for efficient classification of ships. The approach may open an alternative avenue toward object classification and identification, and it offers a new view of signals as complex as ship-radiated sound.

  19. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    NASA Astrophysics Data System (ADS)

    Le, Chengfeng; Lehrter, John C.; Hu, Chuanmin; Obenour, Daniel R.

    2016-03-01

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L-1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and volume were related to Moderate Resolution Imaging Spectroradiometer-derived monthly estimates of river plume area (km2) and average, inner shelf chlorophyll a concentration (Chl a, mg m-3). River plume area in June was negatively related with midsummer hypoxic area (km2) and volume (km3), while July inner shelf Chl a was positively related to hypoxic area and volume. Multiple regression models using river plume area and Chl a as independent variables accounted for most of the variability in hypoxic area (R2 = 0.92) or volume (R2 = 0.89). These models explain more variation in hypoxic area than models using Mississippi River nutrient loads as independent variables. The results here also support a hypothesis that confinement of the river plume to the inner shelf is an important mechanism controlling hypoxia area and volume in this region.
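
    The model form is a two-predictor linear regression, sketched below with fabricated numbers purely to show its structure: midsummer hypoxic area regressed on June plume area (negative coefficient) and July inner-shelf Chl a (positive coefficient).

    ```python
    # Two-predictor empirical model: hypoxic_area ~ plume_june + chla_july.
    # All values are fabricated stand-ins, not the satellite-derived data.
    import numpy as np

    plume_june = np.array([8.0, 10.5, 12.0, 9.0, 11.0, 13.5])      # 10^3 km^2
    chla_july = np.array([6.1, 4.8, 4.0, 5.9, 4.5, 3.7])           # mg m^-3
    hypoxic_area = np.array([18.0, 14.5, 12.0, 17.0, 13.8, 10.9])  # 10^3 km^2

    X = np.column_stack([np.ones_like(plume_june), plume_june, chla_july])
    beta, *_ = np.linalg.lstsq(X, hypoxic_area, rcond=None)
    resid = hypoxic_area - X @ beta
    r2 = 1 - resid.var() / hypoxic_area.var()
    print(f"intercept {beta[0]:.2f}, plume {beta[1]:.2f}, chla {beta[2]:.2f}, R^2 = {r2:.2f}")
    ```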

  20. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings.

  1. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  2. Empirical mode decomposition of digital mammograms for the statistical based characterization of architectural distortion.

    PubMed

    Zyout, Imad; Togneri, Roberto

    2015-01-01

    Among the common mammographic signs of early-stage breast cancer, architectural distortion is the most difficult to identify. In this paper, we propose a new multiscale statistical texture analysis to characterize the presence of architectural distortion by distinguishing between the textural patterns of architectural distortion and those of normal breast parenchyma. The proposed approach first applies the bidimensional empirical mode decomposition algorithm to decompose each mammographic region of interest into a set of adaptive, data-driven, two-dimensional intrinsic mode function (IMF) layers that capture details or high-frequency oscillations of the input image. Then, a model-based approach is applied to the IMF histograms to acquire first-order statistics. A normalized entropy measure is also computed from each IMF and used as a complementary textural feature for the recognition of architectural distortion patterns. For evaluating the proposed architectural distortion characterization approach, we used a mammographic dataset of 187 true positive regions (i.e. depicting architectural distortion) and 887 true negative (normal parenchyma) regions, extracted from the DDSM database. Using the proposed multiscale textural features and a nonlinear support vector machine classifier, the best classification performance achieved, in terms of the area under the receiver operating characteristic curve (Az value), was 0.88.
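
    The normalized-entropy feature mentioned above is simple to state: the Shannon entropy of an IMF's intensity histogram, divided by log(number of bins) so the value lies in [0, 1]. A minimal sketch, with random noise standing in for a BEMD-decomposed region of interest:

    ```python
    # Normalized histogram entropy of a 2-D IMF layer.
    import numpy as np

    def normalized_entropy(imf: np.ndarray, bins: int = 64) -> float:
        hist, _ = np.histogram(imf.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]  # skip empty bins; they contribute nothing to the entropy
        return float(-(p * np.log(p)).sum() / np.log(bins))

    imf = np.random.default_rng(1).normal(size=(128, 128))  # stand-in IMF layer
    print(f"normalized entropy: {normalized_entropy(imf):.3f}")
    ```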

  3. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2 -norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.

  4. The mature minor: some critical psychological reflections on the empirical bases.

    PubMed

    Partridge, Brian C

    2013-06-01

    Moral and legal notions engaged in clinical ethics should possess not only analytic clarity but also a sound basis in empirical findings. The latter condition brings into question the expansion of the mature minor exception. The mature minor exception in the healthcare law of the United States has served to enable those under the legal age to consent to medical treatment. Although originally developed primarily for minors in emergency or quasi-emergency need for health care, it was expanded especially from the 1970s in order to cover unemancipated minors older than 14 years. This expansion initially appeared plausible, given psychological data that showed the intellectual capacity of minors over 14 to recognize the causal connection between their choices and the consequences of their choices. However, subsequent psychological studies have shown that minors generally fail to have realistic affective and evaluative appreciations of the consequences of their decisions, because they tend to over-emphasize short-term benefits and underestimate long-term risks. Also, unlike those of most decisionmakers over 21, the decisions of minors are more often marked by a lack of adequate impulse control, all of which is reflected in the far higher involvement of adolescents in acts of violence, intentional injury, and serious automobile accidents. These effects are more evident in circumstances that elicit elevated affective responses. The advent of brain imaging has allowed the actual visualization of qualitative differences between how minors and persons over the age of 21 generally assess risks and benefits and make decisions. In the case of most under the age of 21, subcortical systems fail to be adequately checked by the prefrontal systems that are involved in adult executive decisions. The neuroanatomical and psychological model developed by Casey, Jones, and Summerville offers an empirical insight into the qualitative differences in the neuroanatomical and neuropsychological bases

  5. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1-MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).
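
    For reference, the standard Ebers-Moll npn equations on which the conceptual TJC model builds are sketched below; the paper's illuminated-cell version adds photocurrent terms that the abstract does not give, so they are omitted here. Saturation currents and current gains are illustrative.

    ```python
    # Standard (dark) Ebers-Moll npn transistor model.
    import numpy as np

    VT = 0.02585                   # thermal voltage at ~300 K, volts
    I_ES, I_CS = 1e-12, 2e-12      # emitter/collector saturation currents, A
    ALPHA_F, ALPHA_R = 0.99, 0.50  # forward/reverse common-base current gains

    def ebers_moll(v_be: float, v_bc: float):
        """Return (I_E, I_C) for the given junction voltages."""
        f = np.expm1(v_be / VT)    # forward diode term
        r = np.expm1(v_bc / VT)    # reverse diode term
        i_e = I_ES * f - ALPHA_R * I_CS * r
        i_c = ALPHA_F * I_ES * f - I_CS * r
        return i_e, i_c

    i_e, i_c = ebers_moll(0.6, -0.4)  # forward-active bias
    print(f"I_E = {i_e:.3e} A, I_C = {i_c:.3e} A")
    ```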

  6. Pseudopotential-based electron quantum transport: Theoretical formulation and application to nanometer-scale silicon nanowire transistors

    SciTech Connect

    Fang, Jingtian; Vandenberghe, William G.; Fu, Bo; Fischetti, Massimo V.

    2016-01-21

    We present a formalism to treat quantum electronic transport at the nanometer scale based on empirical pseudopotentials. This formalism offers explicit atomistic wavefunctions and an accurate band structure, enabling a detailed study of the characteristics of devices with a nanometer-scale channel and body. Assuming externally applied potentials that change slowly along the electron-transport direction, we invoke the envelope-wavefunction approximation to apply the open boundary conditions and to develop the transport equations. We construct the full-band open boundary conditions (self-energies of device contacts) from the complex band structure of the contacts. We solve the transport equations and present the expressions required to calculate the device characteristics, such as device current and charge density. We apply this formalism to study ballistic transport in a gate-all-around (GAA) silicon nanowire field-effect transistor with a body-size of 0.39 nm, a gate length of 6.52 nm, and an effective oxide thickness of 0.43 nm. Simulation results show that this device exhibits a subthreshold slope (SS) of ∼66 mV/decade and a drain-induced barrier-lowering of ∼2.5 mV/V. Our theoretical calculations predict that low-dimensionality channels in a 3D GAA architecture are able to meet the performance requirements of future devices in terms of SS swing and electrostatic control.

  7. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect

  8. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    PubMed Central

    Henggeler, Scott W.; Sheidow, Ashli J.

    2011-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief strategic family therapy. In addition to summarizing the theoretical and clinical bases of these treatments, their results in efficacy and effectiveness trials are examined, with particular emphasis on any demonstrated capacity to achieve favorable outcomes when implemented by real-world practitioners in community practice settings. Special attention is also devoted to research on purported mechanisms of change as well as the long-term sustainability of outcomes achieved by these treatment models. Importantly, the developers of each model have built quality assurance systems to support treatment fidelity and youth and family outcomes, and have formed purveyor organizations to facilitate the large-scale transport of their respective treatments to community settings nationally and internationally. PMID:22283380

  9. Shape of the self-concept clarity change during group psychotherapy predicts the outcome: an empirical validation of the theoretical model of the self-concept change

    PubMed Central

    Styła, Rafał

    2015-01-01

    Background: Self-Concept Clarity (SCC) describes the extent to which the schemas of the self are internally integrated, well defined, and temporally stable. This article presents a theoretical model that describes how different shapes of SCC change (especially a stable increase and a "V" shape) observed in the course of psychotherapy are related to the therapy outcome. Linking the concept of Jean Piaget and dynamic systems theory, the study postulates that a stable SCC increase is needed for participants with a rather healthy personality structure, while SCC change characterized by a "V" shape or fluctuations is optimal for more disturbed patients. Method: A correlational study in a naturalistic setting with repeated measurements (M = 5.8) was conducted on a sample of 85 patients diagnosed with neurosis and personality disorders receiving intensive eclectic group psychotherapy under routine inpatient conditions. Participants filled in the Self-Concept Clarity Scale (SCCS), Symptoms' Questionnaire KS-II, and Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. The SCCS was also administered every 2 weeks during psychotherapy. Results: As hypothesized, among the relatively healthiest group of patients a stable SCC increase was related to a positive treatment outcome, while more disturbed patients benefited from fluctuations and a "V" shape of SCC change. Conclusions: The findings support the idea that, depending on personality disposition, either a monotonic increase or a transient destabilization of SCC is a sign of a good treatment prognosis. PMID:26579001
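
    A hypothetical sketch of how an individual SCC trajectory could be classified into the shapes discussed above from its repeated SCCS scores; the three-way taxonomy and the simple monotonicity test are invented simplifications, not the study's scoring procedure.

    ```python
    # Classify a repeated-measures trajectory as "stable increase", "V shape", or "other".
    import numpy as np

    def scc_shape(scores) -> str:
        s = np.asarray(scores, dtype=float)
        if np.all(np.diff(s) >= 0) and s[-1] > s[0]:
            return "stable increase"
        i = int(np.argmin(s))
        if 0 < i < len(s) - 1 and s[0] > s[i] and s[-1] > s[i]:
            return "V shape"
        return "other"

    print(scc_shape([30, 32, 33, 35, 38]))  # -> stable increase
    print(scc_shape([34, 31, 27, 33, 37]))  # -> V shape
    ```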

  10. Knowledge-based immunosuppressive therapy for kidney transplant patients--from theoretical model to clinical integration.

    PubMed

    Seeling, Walter; Plischke, Max; de Bruin, Jeroen S; Schuh, Christian

    2015-01-01

    Immunosuppressive therapy is a risky necessity after a patient receives a kidney transplant. To reduce risks, a knowledge-based system was developed that determines the right dosage of the immunosuppressive agent Tacrolimus. A theoretical model, to classify medication blood levels as well as medication adaptations, was created using data from almost 500 patients and over 13,000 examinations. This model was then translated into an Arden Syntax knowledge base and integrated directly into the hospital information system of the Vienna General Hospital. In this paper we give an overview of the construction and integration of such a system.
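
    A hypothetical sketch of the kind of rule such a knowledge base encodes, classifying a trough blood level against a target range and suggesting a dose adaptation. The target range and the advice texts are invented for illustration; the actual system's rules were derived from the patient data and expressed in Arden Syntax.

    ```python
    # Toy dosing rule: classify a Tacrolimus trough level against a target range.
    def tacrolimus_advice(trough_ng_ml: float, target=(5.0, 10.0)) -> str:
        low, high = target
        if trough_ng_ml < low:
            return "below target: consider increasing the dose"
        if trough_ng_ml > high:
            return "above target: consider reducing the dose"
        return "within target: keep current dose"

    print(tacrolimus_advice(12.3))  # -> above target: consider reducing the dose
    ```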

  11. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  12. A game-theoretic framework for landmark-based image segmentation.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernus, Franjo; Vrtovec, Tomaz

    2012-09-01

    A novel game-theoretic framework for landmark-based image segmentation is presented. Landmark detection is formulated as a game, in which landmarks are players, landmark candidate points are strategies, and likelihoods that candidate points represent landmarks are payoffs, determined according to the similarity of image intensities and spatial relationships between the candidate points in the target image and their corresponding landmarks in images from the training set. The solution of the formulated game-theoretic problem is the equilibrium of candidate points that represent landmarks in the target image and is obtained by a novel iterative scheme that solves the segmentation problem in polynomial time. The object boundaries are finally extracted by applying dynamic programming to the optimal path searching problem between the obtained adjacent landmarks. The performance of the proposed framework was evaluated for segmentation of lung fields from chest radiographs and heart ventricles from cardiac magnetic resonance cross sections. The comparison to other landmark-based segmentation techniques shows that the results obtained by the proposed game-theoretic framework are highly accurate and precise in terms of mean boundary distance and area overlap. Moreover, the framework overcomes several shortcomings of the existing techniques, such as sensitivity to initialization and convergence to local optima.

  13. Model Selection for Equating Testlet-Based Tests in the NEAT Design: An Empirical Study

    ERIC Educational Resources Information Center

    He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia

    2012-01-01

    For tests solely composed of testlets, the local item independence assumption tends to be violated. This study, using empirical data from a large-scale state assessment program, investigated the effects of using different models on equating results under the non-equivalent group anchor-test (NEAT) design. Specifically, the…

  14. Comparisons of experiment with cellulose models based on electronic structure and empirical force field theories

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Studies of cellobiose conformations with HF/6-31G* and B3LYP/6-31+G* quantum theory [1] gave a reference for studies with the much faster empirical methods such as MM3, MM4, CHARMM and AMBER. The quantum studies also enable a substantial reduction in the number of exo-cyclic group orientations that...

  15. Monitoring of Qualifications and Employment in Austria: An Empirical Approach Based on the Labour Force Survey

    ERIC Educational Resources Information Center

    Lassnigg, Lorenz; Vogtenhuber, Stefan

    2011-01-01

    The empirical approach referred to in this article describes the relationship between education and training (ET) supply and employment in Austria; the use of the new ISCED (International Standard Classification of Education) fields of study variable makes this approach applicable abroad. The purpose is to explore a system that produces timely…

  16. Understanding Transactional Distance in Web-Based Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Huang, Xiaoxia; Chandra, Aruna; DePaolo, Concetta A.; Simmons, Lakisha L.

    2016-01-01

    Transactional distance is an important pedagogical theory in distance education that calls for more empirical support. The purpose of this study was to verify the theory by operationalizing and examining the relationship of (1) dialogue, structure and learner autonomy to transactional distance, and (2) environmental factors and learner demographic…

  17. Asynchronous cellular automaton-based neuron: theoretical analysis and on-FPGA learning.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2013-05-01

    A generalized asynchronous cellular automaton-based neuron model is a special kind of cellular automaton that is designed to mimic the nonlinear dynamics of neurons. The model can be implemented as an asynchronous sequential logic circuit and its control parameter is the pattern of wires among the circuit elements that is adjustable after implementation in a field-programmable gate array (FPGA) device. In this paper, a novel theoretical analysis method for the model is presented. Using this method, stabilities of neuron-like orbits and occurrence mechanisms of neuron-like bifurcations of the model are clarified theoretically. Also, a novel learning algorithm for the model is presented. An equivalent experiment shows that an FPGA-implemented learning algorithm enables an FPGA-implemented model to automatically reproduce typical nonlinear responses and occurrence mechanisms observed in biological and model neurons.

  18. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has documented 29 failure scenarios. The strategy for the game was developed by analyzing five representative electric sector failure scenarios contained in the AMI functional domain. We characterized these five scenarios into three specific threat categories affecting confidentiality, integrity, and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
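
    The abstract does not give its game rules, but the game-theoretic core of an attacker-defender analysis can be sketched as a zero-sum matrix game solved by linear programming. The payoff matrix below is invented purely for illustration (rows: defender actions, columns: attacker actions across three CIA-style threats); it is not from the NESCOR scenarios.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical defender payoffs; all numbers are illustrative only.
      A = np.array([[ 2.0, -1.0,  0.5],
                    [-0.5,  1.5, -1.0],
                    [ 0.0, -0.5,  1.0]])

      m, n = A.shape
      # Variables: defender mixed strategy x (m values) and game value v.
      # Maximise v subject to (A^T x)_j >= v for all j, sum(x) = 1, x >= 0.
      c = np.zeros(m + 1); c[-1] = -1.0                      # minimise -v
      A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (A^T x)_j <= 0
      b_ub = np.zeros(n)
      A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
      b_eq = np.array([1.0])
      bounds = [(0, None)] * m + [(None, None)]              # v is free

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      x, v = res.x[:m], res.x[-1]
      print("defender mixed strategy:", np.round(x, 3), " game value:", round(v, 3))

    An agent-based simulation like the one described can then be checked against such closed-form game values before scaling to multi-agent scenarios.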

  19. Numerical simulation of bubble departure in subcooled pool boiling based on non-empirical boiling and condensation model

    NASA Astrophysics Data System (ADS)

    Ose, Y.; Kunugi, T.

    2013-07-01

    In this study, in order to clarify the heat transfer characteristics of subcooled boiling phenomena and to discuss their mechanism, a non-empirical boiling and condensation model was adopted for numerical simulation. This model consists of an improved phase-change model and a consideration of a relaxation time based on the quasi-thermal equilibrium hypothesis. Transient three-dimensional numerical simulations based on the MARS (Multi-interface Advection and Reconstruction Solver) with the non-empirical boiling and condensation model were conducted for the behavior of an isolated boiling bubble in a subcooled pool. The subcooled bubble behaviors, such as the growth process of the nucleate bubble on the heating surface, the condensation process, and the extinction behavior after departure from the heating surface, were investigated. In this paper, the bubble departure behavior from the heating surface is discussed in detail. The overall numerical results were in very good agreement with the experimental results.

  20. Estimates for ELF effects: noise-based thresholds and the number of experimental conditions required for empirical searches.

    PubMed

    Weaver, J C; Astumian, R D

    1992-01-01

    Interactions between physical fields and biological systems present difficult conceptual problems. Complete biological systems, even isolated cells, are exceedingly complex. This argues against the pursuit of theoretical models, with the possible consequence that only experimental studies should be considered. In contrast, electromagnetic fields are well understood. Further, some subsystems of cells (viz. cell membranes) can be reasonably represented by physical models. This argues for the pursuit of theoretical models which quantitatively describe interactions of electromagnetic fields with that subsystem. Here we consider the hypothesis that electric fields, not magnetic fields, are the source of interactions. From this it follows that the cell membrane is a relevant subsystem, as the membrane is much more resistive than the intra- or extracellular regions. A general class of interactions is considered: electroconformational changes associated with the membrane. Expected results of such an approach include the dependence of the interaction on key parameters (e.g., cell size, field magnitude, frequency, and exposure time), constraints on threshold exposure conditions, and insight into how experiments might be designed. Further, because it is well established that strong and moderate electric fields interact significantly with cells, estimates of the extrapolated interaction for weaker fields can be sought. By employing signal-to-noise (S/N) ratio criteria, theoretical models can also be used to estimate threshold magnitudes. These estimates are particularly relevant to in vitro conditions, for which most biologically generated background fields are absent. Finally, we argue that if theoretical model predictions are unavailable to guide the selection of experimental conditions, an overwhelmingly large number of different conditions will be needed to find, establish, and characterize bioelectromagnetic effects in an empirical search. This is contrasted with well
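
    A minimal order-of-magnitude sketch of the S/N reasoning the abstract invokes, using the standard steady-state result that a spherical cell in a field E develops an induced transmembrane voltage of about 1.5·E·R at its poles, compared against thermal (kT/C) voltage noise across the whole-cell membrane capacitance. All parameter values are assumptions chosen for illustration, not values from the paper.

      import numpy as np

      k_B = 1.380649e-23      # J/K, Boltzmann constant
      T = 310.0               # K, physiological temperature
      R = 10e-6               # m, assumed cell radius
      c_m = 1e-2              # F/m^2, typical specific membrane capacitance (~1 uF/cm^2)
      E = 1e-1                # V/m, assumed weak ELF field in tissue (illustrative)

      # Steady-state induced transmembrane voltage at the pole of a spherical cell.
      dV = 1.5 * E * R

      # Thermal voltage noise across the whole-cell membrane capacitance.
      C = c_m * 4.0 * np.pi * R**2
      v_noise = np.sqrt(k_B * T / C)

      print(f"induced dV = {dV:.2e} V")
      print(f"kT/C noise = {v_noise:.2e} V")
      print(f"S/N ratio  = {dV / v_noise:.2e}")

    A broadband kT/C estimate is pessimistic; narrowband detection over a long exposure time can raise the effective S/N, which is one reason exposure time appears among the key parameters above.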

  1. Upscaling Empirically Based Conceptualisations to Model Tropical Dominant Hydrological Processes for Historical Land Use Change

    NASA Astrophysics Data System (ADS)

    Toohey, R.; Boll, J.; Brooks, E.; Jones, J.

    2009-12-01

    Surface runoff and percolation to ground water are two hydrological processes of concern to the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m²), and the field scale (1-6 ha) to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically-based distributed hydrological model at the field, watershed (130 km²), and regional (1500 km²) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. However, at the field scale for sugar cane and pasture sites, saturation-excess runoff increased as

  2. The estimation of convective rainfall by area integrals. I - The theoretical and empirical basis. II - The height-area rainfall threshold (HART) method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Daniel; Short, David A.; Atlas, David

    1990-01-01

    A theory is developed which establishes the basis for the use of rainfall areas within preset thresholds as a measure of either the instantaneous areawide rain rate of convective storms or the total volume of rain from an individual storm over its lifetime. The method is based upon the existence of a well-behaved pdf of rain rate either from the many storms at one instant or from a single storm during its life. The generality of the instantaneous areawide method was examined by applying it to quantitative radar data sets from the GARP Atlantic Tropical Experiment (GATE), South Africa, Texas, and Darwin (Australia). It is shown that the pdfs developed for each of these areas are consistent with the theory.
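
    The core of the threshold-area idea can be sketched as follows: because the rain-rate pdf is well behaved, the area-mean rain rate is proportional to the fractional area exceeding a threshold τ, with a climatological calibration factor S(τ) = E[R] / P(R > τ). The lognormal pdf and its parameters below are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assume a climatological lognormal rain-rate pdf for raining pixels.
      mu, sigma = 1.0, 1.0
      tau = 5.0                               # mm/h threshold

      # Calibration from a large climatological sample: S(tau) = E[R] / P(R > tau).
      clim = rng.lognormal(mu, sigma, 200_000)
      S_tau = clim.mean() / (clim > tau).mean()

      # A "new" radar scene drawn from the same pdf: estimate its area-mean rain
      # rate from the fractional area above the threshold alone.
      scene = rng.lognormal(mu, sigma, 10_000)
      frac_above = (scene > tau).mean()
      estimate = S_tau * frac_above
      print(f"true mean: {scene.mean():.2f} mm/h, threshold estimate: {estimate:.2f} mm/h")

    The estimate is accurate exactly to the extent that the scene's pdf matches the climatological one, which is the consistency the paper tests across GATE, South Africa, Texas, and Darwin.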

  3. Estimation of daily global solar radiation using wavelet regression, ANN, GEP and empirical models: A comparative study of selected temperature-based approaches

    NASA Astrophysics Data System (ADS)

    Sharifi, Sayed Saber; Rezaverdinejad, Vahid; Nourani, Vahid

    2016-11-01

    Although sunshine-based models generally perform better than temperature-based models for estimating solar radiation, the limited availability of sunshine duration records makes the development of temperature-based methods inevitable. This paper presents a comparative study between Artificial Neural Networks (ANNs), Gene Expression Programming (GEP), Wavelet Regression (WR) and 5 selected temperature-based empirical models for estimating daily global solar radiation. A new combination of inputs comprising four readily accessible parameters was employed: daily mean clearness index (KT), temperature range (ΔT), theoretical sunshine duration (N) and extraterrestrial radiation (Ra). Ten statistical indicators, combined into a Global Performance Indicator (GPI), were used to ascertain the suitability of the models. The performance of the selected models across the range of solar radiation values was depicted by quantile-quantile (Q-Q) plots; comparing these plots makes it evident that ANNs can cover a broader range of solar radiation values. The results indicate that the performance of the ANN model was clearly superior to the other models. The findings also demonstrate that the WR model performed well and presented high accuracy in estimating daily global solar radiation.
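
    A minimal sketch of the ANN setup implied by the abstract, using the four named inputs (KT, ΔT, N, Ra) with scikit-learn. The data here are synthetic stand-ins with an invented target relationship; the paper's network architecture and training details are not specified and everything below is illustrative.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(42)
      n = 1000

      # Synthetic stand-ins for the four inputs named in the abstract:
      KT = rng.uniform(0.2, 0.8, n)        # daily mean clearness index
      dT = rng.uniform(2.0, 18.0, n)       # daily temperature range (degC)
      N  = rng.uniform(9.0, 15.0, n)       # theoretical sunshine duration (h)
      Ra = rng.uniform(20.0, 42.0, n)      # extraterrestrial radiation (MJ m-2 day-1)
      X = np.column_stack([KT, dT, N, Ra])

      # Synthetic target loosely mimicking Rs = KT * Ra plus noise (assumption).
      y = KT * Ra + 0.05 * dT + rng.normal(0, 0.8, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 8),
                                         max_iter=2000, random_state=0))
      model.fit(X_tr, y_tr)
      print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))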

  4. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    NASA Astrophysics Data System (ADS)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.

  5. Feasibility of theoretical formulas on the anisotropy of shale based on laboratory measurement and error analysis

    NASA Astrophysics Data System (ADS)

    Xie, Jianyong; Di, Bangrang; Wei, Jianxin; Luan, Xinyuan; Ding, Pinbo

    2015-04-01

    This paper designs a full-angle ultrasonic test method to measure the P-wave velocities (vp) and the vertically and horizontally polarized shear wave velocities (vsv and vsh) at all angles to the bedding plane on different kinds of strongly anisotropic shale. The observations were compared with theoretical curves calculated from several vertical transversely isotropic (TI) medium theories, in order to assess how well each theory characterizes the dynamic behavior of a TI medium and to identify, among the various theoretical formulas, the most accurate and precise theory, together with its range of validity, for characterizing the strong anisotropy of shale. At low phase angles (θ < 10°), the three theoretical curves are consistent with the observations, and then tend to diverge as the phase angle increases, especially the Thomsen curves, which deviate seriously, while the Berryman expressions provide a much better agreement with the measured vp and vsv on shale. All three theories also deviate more in the approximation of vsv than of vp and vsh. Furthermore, we created idealized synthetic physical models (from coarse bakelite, cambric bakelite, and paper bakelite) as supplements to natural shale, representing shale with different anisotropy, in order to study how the anisotropy parameters affect the applicability of the preferred TI theories, especially for vsv. We found that when the P-wave anisotropy ε and the S-wave anisotropy γ exceed 0.25, the Berryman curve is the best fit for vp and vsv on shale.
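
    For reference, the Thomsen weak-anisotropy phase-velocity approximations against which such measurements are commonly compared can be written in a few lines; their breakdown at large ε, γ is consistent with the deviations noted above. The shale-like parameter values in the demo are assumptions, not values from the paper.

      import numpy as np

      def thomsen_velocities(theta_deg, vp0, vs0, eps, delta, gamma):
          """Thomsen (1986) weak-anisotropy phase velocities in a VTI medium.

          theta_deg: phase angle from the symmetry (bedding-normal) axis, degrees.
          vp0, vs0:  vertical P- and S-wave velocities.
          eps, delta, gamma: Thomsen anisotropy parameters.
          """
          th = np.radians(theta_deg)
          s2, c2 = np.sin(th) ** 2, np.cos(th) ** 2
          vp  = vp0 * (1.0 + delta * s2 * c2 + eps * s2 ** 2)
          vsv = vs0 * (1.0 + (vp0 / vs0) ** 2 * (eps - delta) * s2 * c2)
          vsh = vs0 * (1.0 + gamma * s2)
          return vp, vsv, vsh

      # Illustrative shale-like parameters (assumed).
      angles = np.linspace(0, 90, 7)
      vp, vsv, vsh = thomsen_velocities(angles, vp0=3500.0, vs0=2000.0,
                                        eps=0.25, delta=0.10, gamma=0.30)
      for a, p, sv, sh in zip(angles, vp, vsv, vsh):
          print(f"theta={a:4.0f}  vp={p:7.1f}  vsv={sv:7.1f}  vsh={sh:7.1f}")

    Note that vsv carries the (vp0/vs0)²(ε − δ) factor, which amplifies errors in the weak-anisotropy expansion and helps explain why all three theories approximate vsv worst.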

  6. A theoretical and empirical study of the response of the high latitude thermosphere to the sense of the 'Y' component of the interplanetary magnetic field

    NASA Technical Reports Server (NTRS)

    Rees, D.; Fuller-Rowell, T. J.; Gordon, R.; Smith, M. F.; Maynard, N. C.; Heppner, J. P.; Spencer, N. W.; Wharton, L.

    1986-01-01

    Patterns of magnetospheric energetic plasma precipitation as a function of the Y component of the Interplanetary Magnetic Field (IMF) are studied. The development of a three-dimensional, time-dependent global thermospheric model using a polar convection electric field with a dependence on the Y component of the IMF to evaluate thermospheric wind circulation is examined. Thermospheric wind data from the ISEE-3 satellite, the Dynamics Explorer-2 satellite, and a ground-based Fabry-Perot interferometer in Kiruna, Sweden, collected on December 1, 2, 6, and 25, 1981 and February 12 and 13, 1982, are described. The observed data and simulations of polar thermospheric winds are compared. In the Northern Hemisphere, a strong antisunward ion flow on the dawn side of the geomagnetic polar cap is observed when BY is positive, and the flow is detected on the dusk side when BY is negative. It is concluded that the strength and direction of the IMF directly control the transfer of solar wind momentum and energy to the high latitude thermosphere.

  7. Empirical ethics as dialogical practice.

    PubMed

    Widdershoven, Guy; Abma, Tineke; Molewijk, Bert

    2009-05-01

    In this article, we present a dialogical approach to empirical ethics, based upon hermeneutic ethics and responsive evaluation. Hermeneutic ethics regards experience as the concrete source of moral wisdom. In order to gain a good understanding of moral issues, concrete detailed experiences and perspectives need to be exchanged. Within hermeneutic ethics dialogue is seen as a vehicle for moral learning and developing normative conclusions. Dialogue stands for a specific view on moral epistemology and methodological criteria for moral inquiry. Responsive evaluation involves a structured way of setting up dialogical learning processes, by eliciting stories of participants, exchanging experiences in (homogeneous and heterogeneous) groups and drawing normative conclusions for practice. By combining these traditions we develop both a theoretical and a practical approach to empirical ethics, in which ethical issues are addressed and shaped together with stakeholders in practice. Stakeholders' experiences are not only used as a source for reflection by the ethicist; stakeholders are involved in the process of reflection and analysis, which takes place in a dialogue between participants in practice, facilitated by the ethicist. This dialogical approach to empirical ethics may give rise to questions such as: What contribution does the ethicist make? What role does ethical theory play? What is the relationship between empirical research and ethical theory in the dialogical process? In this article, these questions will be addressed by reflecting upon a project in empirical ethics that was set up in a dialogical way. The aim of this project was to develop and implement normative guidelines with and within practice, in order to improve the practice concerning coercion and compulsion in psychiatry.

  8. The processing of rotor startup signals based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Gai, Guanghong

    2006-01-01

    In this paper, we applied the empirical mode decomposition method to analyse rotor startup signals, which are non-stationary and contain a lot of additional information beyond that in stationary running signals. The methodology developed in this paper decomposes the original startup signals into intrinsic oscillation modes, or intrinsic mode functions (IMFs). Then, we obtained the rotating-frequency components for Bode diagram plots from the corresponding IMFs, according to the characteristics of the rotor system. The method can obtain a precise critical speed without complex hardware support. The low-frequency components were extracted from these IMFs in the vertical and horizontal directions. Utilising these components, we constructed a drift locus of the rotor revolution centre, which provides significant information for fault diagnosis of rotating machinery. We also showed that the empirical mode decomposition method is more precise than a Fourier filter for extracting low-frequency components.
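
    A minimal sketch of the sifting procedure at the heart of empirical mode decomposition: repeatedly subtract the mean of the cubic-spline envelopes through the local extrema until the residue qualifies as an IMF, then peel IMFs off one by one. This simplified version ignores end effects and uses crude stopping criteria; it is illustrative, not the paper's implementation.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.signal import argrelextrema

      def sift_imf(x, t, max_sift=20):
          """Extract one intrinsic mode function (IMF) by cubic-spline sifting."""
          h = x.copy()
          for _ in range(max_sift):
              maxima = argrelextrema(h, np.greater)[0]
              minima = argrelextrema(h, np.less)[0]
              if len(maxima) < 3 or len(minima) < 3:
                  break                                  # too few extrema to sift
              upper = CubicSpline(t[maxima], h[maxima])(t)
              lower = CubicSpline(t[minima], h[minima])(t)
              mean = 0.5 * (upper + lower)
              if np.mean(mean ** 2) < 1e-8 * np.mean(h ** 2):
                  break                                  # envelope mean ~ 0: IMF found
              h = h - mean
          return h

      def emd(x, t, n_imfs=4):
          """Decompose x into IMFs plus a residue (no end-effect handling)."""
          imfs, residue = [], x.copy()
          for _ in range(n_imfs):
              imf = sift_imf(residue, t)
              imfs.append(imf)
              residue = residue - imf
              if len(argrelextrema(residue, np.greater)[0]) < 3:
                  break                                  # residue is nearly monotonic
          return imfs, residue

      # Demo: a fast oscillation riding on a slow drift, like a startup vibration.
      t = np.linspace(0, 1, 2000)
      x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
      imfs, res = emd(x, t)
      print(f"extracted {len(imfs)} IMFs")

    The first IMF captures the fastest oscillation (here the 40 Hz component); the slower components extracted later are the ones used to trace the drift locus of the rotor centre.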

  9. Mismatch between electrophysiologically defined and ventriculography based theoretical targets for posteroventral pallidotomy in Parkinson's disease

    PubMed Central

    Merello, M; Cammarota, A; Cerquetti, D; Leiguarda, R

    2000-01-01

    OBJECTIVES—Over the past few years many reports have shown that posteroventral pallidotomy is an effective method for treating advanced cases of Parkinson's disease. The main differences with earlier descriptions were the use of standardised evaluation with new high resolution MRI studies and of single cell microrecording which can electrophysiologically define the sensorimotor portion of the internal globus pallidus (GPi). The present study was performed on a consecutive series of 40 patients with Parkinson's disease who underwent posteroventral pallidotomy to determine localisation discrepancies between the ventriculography based theoretical and the electrophysiologically defined target for posteroventral pallidotomy.
METHODS—The tentative location of the posteroventral GPi portion was defined according to the proportional Talairach system. Single cell recording was performed in all patients. The definitive target was chosen according to the feasibility of recording single cells with GPi cell features, including the presence of motor drive and correct identification of the internal capsule and of the optic tract by activity recording and microstimulation.
RESULTS—In all 40 patients the electrophysiologically defined sensorimotor portion of the GPi was lesioned, with significantly improved cardinal Parkinson's disease symptoms as well as levodopa induced dyskinesias, without damage to the internal capsule or optic tract. Significant differences between the localisation of the ventriculography based theoretical versus electrophysiological target were found in depth (p<0.0008) and posteriority (p<0.04). No significant differences were found in laterality between both approaches. Difference ranges were 8 mm for laterality, 6.5 mm for depth, and 10 mm for posteriority.
CONCLUSIONS—Electrophysiologically defined lesion of GPi for posteroventral pallidotomy, shown to be effective for treating Parkinson's disease, is located at a significantly different

  10. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data on 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results are as follows: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and interactions among strategic flexibility, explorative learning, and exploitative learning play significant roles in both radical and incremental business model innovation.
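
    A mediation claim of this kind typically rests on a three-regression check (total effect, effect on the mediator, and direct effect controlling for the mediator). The sketch below illustrates that logic on entirely synthetic data with invented effect sizes; the study's actual measurement model and estimates are not reproduced here.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 237  # same sample size as the study; the data are synthetic

      learning = rng.normal(size=n)                       # organizational learning capability
      flexibility = 0.6 * learning + rng.normal(size=n)   # strategic flexibility (mediator)
      innovation = 0.3 * learning + 0.5 * flexibility + rng.normal(size=n)

      def ols(y, *xs):
          X = sm.add_constant(np.column_stack(xs))
          return sm.OLS(y, X).fit()

      total  = ols(innovation, learning)                  # path c (total effect)
      a_path = ols(flexibility, learning)                 # path a
      direct = ols(innovation, learning, flexibility)     # paths c' and b

      print("total effect c :", round(total.params[1], 3))
      print("path a         :", round(a_path.params[1], 3))
      print("direct c'      :", round(direct.params[1], 3))
      print("path b         :", round(direct.params[2], 3))
      print("indirect a*b   :", round(a_path.params[1] * direct.params[2], 3))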

  11. Epicentral Location of Regional Seismic Events Based on Empirical Green’s Functions from Ambient Noise

    DTIC Science & Technology

    2010-09-01

    ...located and characterized by the University of Utah Seismic Stations (UUSS) and by the Department of Earth and Atmospheric Sciences at Saint Louis... of sources at different depths, e.g., earthquakes within Earth's crust, volcanic explosions, meteoritic impacts, explosions, mine collapses, or... ● It does not require knowledge of Earth structure. ● It works for weak events where the detection of body wave phases may be problematic. ● The empirical...

  12. A Reliability Test of a Complex System Based on Empirical Likelihood

    PubMed Central

    Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic for the complex system and derive the limit distribution of the test statistic. We can therefore obtain a confidence interval for the reliability and make statistical inferences. Simulation studies corroborate the theoretical results. PMID:27760130

  13. Empirical model of the thermospheric mass density based on CHAMP satellite observations

    NASA Astrophysics Data System (ADS)

    Liu, Huixin; Hirano, Takashi; Watanabe, Shigeto

    2013-02-01

    The decadal observations from CHAMP satellite have provided ample information on the Earth's upper thermosphere, reshaping our understandings of the vertical coupling in the atmosphere and near-Earth space. An empirical model of the thermospheric mass density is constructed from these high-resolution observations using the multivariable least-squares fitting method. It describes the density variation with latitude, longitude, height, local time, season, and solar and geomagnetic activity levels within the altitude range of 350-420 km. It represents well prominent thermosphere structures like the equatorial mass density anomaly (EMA) and the wave-4 longitudinal pattern. Furthermore, the empirical model reveals two distinct features. First, the EMA is found to have a clear altitude dependence, with its crests moving equatorward with increasing altitude. Second, the equinoctial asymmetry is found to strongly depend on solar cycle, with its magnitude and phase being strongly regulated by solar activity levels. The equinoctial density maxima occur significantly after the actual equinox dates toward solar minimum, which may signal growing influence from the lower atmosphere forcing. This empirical model provides an instructive tool in exploring thermospheric density structures and dynamics. It can also be easily incorporated into other models to have a more accurate description of the background thermosphere, for both scientific and practical purposes.
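
    The "multivariable least-squares fitting" step can be sketched as an ordinary least-squares fit of log-density on a design matrix of harmonics in local time and season plus linear solar-activity and height terms. The drivers, ranges, and basis choice below are assumptions for illustration; the paper's actual parameterization is richer.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 5000

      # Synthetic stand-ins for the model drivers (illustrative ranges).
      lt   = rng.uniform(0, 24, n)            # local time (h)
      doy  = rng.uniform(0, 365, n)           # day of year
      lat  = rng.uniform(-90, 90, n)          # latitude (deg)
      f107 = rng.uniform(70, 200, n)          # solar flux proxy
      h    = rng.uniform(350, 420, n)         # altitude (km)

      def design_matrix(lt, doy, lat, f107, h):
          """Harmonics in local time and season plus linear flux and height terms."""
          w_lt, w_an = 2 * np.pi * lt / 24.0, 2 * np.pi * doy / 365.25
          latr = np.radians(lat)
          return np.column_stack([
              np.ones_like(lt),
              np.cos(w_lt), np.sin(w_lt),          # diurnal
              np.cos(2 * w_lt), np.sin(2 * w_lt),  # semidiurnal
              np.cos(w_an), np.sin(w_an),          # annual
              np.cos(2 * w_an), np.sin(2 * w_an),  # semiannual
              np.sin(latr), np.sin(latr) ** 2,     # latitude structure
              f107, (h - 400.0),                   # solar activity, height
          ])

      X = design_matrix(lt, doy, lat, f107, h)
      true_coef = rng.normal(0, 0.2, X.shape[1])
      log_rho = X @ true_coef + rng.normal(0, 0.05, n)   # synthetic log-density "data"

      coef, *_ = np.linalg.lstsq(X, log_rho, rcond=None)
      print("recovered first 5 coefficients:", np.round(coef[:5], 3))

    Cross terms between these basis functions are what let such a model capture features like the altitude dependence of the EMA crests or the solar-cycle modulation of the equinoctial asymmetry.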

  14. Deep in Data: Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository: Preprint

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    An opportunity is available for using home energy consumption and building description data to develop a standardized accuracy test for residential energy analysis tools, that is, to test the ability of uncalibrated simulations to match real utility bills. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This may facilitate modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data. This paper describes progress toward, and issues related to, developing a usable, standardized, empirical-data-based software accuracy test suite.

  15. Halfway Houses for Alcohol Dependents: From Theoretical Bases to Implications for the Organization of Facilities

    PubMed Central

    Reis, Alessandra Diehl; Laranjeira, Ronaldo

    2008-01-01

    The purpose of this paper is to provide a narrative review of the concepts, history, functions, methods, development, and theoretical bases for the use of halfway houses for patients with mental disorders, and their relevance to constructing a network of services for chemical dependence. This theme, in spite of its relevance, is still infrequently explored in the national literature. The authors report international and national uses of this model and discuss its applicability to the continuity of services for alcohol dependents. The results suggest that this area needs more attention and interest in future research. PMID:19061008

  16. The neural mediators of kindness-based meditation: a theoretical model.

    PubMed

    Mascaro, Jennifer S; Darcher, Alana; Negi, Lobsang T; Raison, Charles L

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another's affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.

  17. Theoretical analysis and experimental evaluation of a CsI(Tl)-based electronic portal imaging system.

    PubMed

    Sawant, Amit; Zeman, Herbert; Samant, Sanjiv; Lovhoiden, Gunnar; Weinberg, Brent; DiBianca, Frank

    2002-06-01

    This article discusses the design and analysis of a portal imaging system based on a thick transparent scintillator. A theoretical analysis using Monte Carlo simulation was performed to calculate the x-ray quantum detection efficiency (QDE), signal to noise ratio (SNR) and the zero frequency detective quantum efficiency [DQE(0)] of the system. A prototype electronic portal imaging device (EPID) was built, using a 12.7 mm thick, 20.32 cm diameter, CsI(Tl) scintillator, coupled to a liquid nitrogen cooled CCD TV camera. The system geometry of the prototype EPID was optimized to achieve high spatial resolution. The experimental evaluation of the prototype EPID involved the determination of contrast resolution, depth of focus, light scatter and mirror glare. Images of humanoid and contrast detail phantoms were acquired using the prototype EPID and were compared with those obtained using conventional and high contrast portal film and a commercial EPID. A theoretical analysis was also carried out for a proposed full field of view system using a large area, thinned CCD camera and a 12.7 mm thick CsI(Tl) crystal. Results indicate that this proposed design could achieve DQE(0) levels up to 11%, due to its order of magnitude higher QDE compared to phosphor screen-metal plate based EPID designs, as well as significantly higher light collection compared to conventional TV camera based systems.
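
    For a quantum-noise-limited scintillator detector, DQE(0) is often written as the product of the QDE η and the Swank information factor computed from moments of the light-gain (pulse-height) distribution. The sketch below uses an invented gamma-distributed gain and an assumed η purely to illustrate the calculation; the values are not from the paper.

      import numpy as np

      def swank_factor(gain_samples):
          """Swank information factor A_S = m1^2 / (m0 * m2) from gain moments."""
          g = np.asarray(gain_samples, dtype=float)
          m0, m1, m2 = 1.0, g.mean(), (g ** 2).mean()
          return m1 ** 2 / (m0 * m2)

      rng = np.random.default_rng(3)

      # Illustrative optical gain distribution for absorbed x rays (depth-dependent
      # light collection typically makes it broader than Poisson).
      gains = rng.gamma(shape=8.0, scale=250.0, size=100_000)

      eta = 0.8                 # assumed x-ray quantum detection efficiency (QDE)
      A_S = swank_factor(gains)
      print(f"Swank factor A_S = {A_S:.3f}")
      print(f"DQE(0) ~ eta * A_S = {eta * A_S:.3f}")

    This zero-frequency accounting ignores additive (readout) noise, which is why light collection efficiency, mirror glare, and camera noise still matter in the full system analysis.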

  18. The neural mediators of kindness-based meditation: a theoretical model

    PubMed Central

    Mascaro, Jennifer S.; Darcher, Alana; Negi, Lobsang T.; Raison, Charles L.

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work. PMID:25729374

  19. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    PubMed Central

    Lüdecke, Daniel

    2014-01-01

    Introduction Health care providers seek to improve patient-centred care. Due to fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms which ensure sufficient consideration of patient centredness. Theory and methods Seventeen qualitative interviews were conducted in hospitals in metropolitan areas of northern Germany. The documentary method, embedded in a systems theoretical framework, was used to describe and analyse the data and to provide insight into the specific perception of organisational behaviour in integrated care. Results The findings suggest that integrated care partnerships rely on networks based on professional autonomy in a context of reliability. The relationships of network partners are heavily based on informality. This accords with a systems theoretical conception of organisations, which are assumed to be autonomous in their decision-making. Conclusion and discussion Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to the missing self-regulation of the network. In conclusion, less regimentation of integrated care partnerships is recommended. PMID:25411573

  20. Theoretical calculations of base-base interactions in nucleic acids: II. Stacking interactions in polynucleotides.

    PubMed Central

    Gupta, G; Sasisekharan, V

    1978-01-01

    Base-base interactions were computed for single- and double-stranded polynucleotides, for all possible base sequences. In each case, both right and left stacking arrangements are energetically possible. The preference of one over the other depends upon the base sequence and the orientation of the bases with respect to the helix axis. An inverted stacking arrangement is also energetically possible for both single- and double-stranded polynucleotides. Finally, the interaction energies of a regular duplex and the alternative structures were compared. It was found that the type II model is energetically more favourable than the rest. PMID:662698

  1. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second

  2. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelling of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasting, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been in use since 2010 on more than 30 watersheds in France. This forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where substantial human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (the empirical approach) is based on statistical modelling of the empirical errors of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a clear improvement in the reliability, skill, and sharpness of ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
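
    A minimal sketch of the empirical dressing idea: stratify past errors of "perfect" hindcasts by streamflow quantile class, then dress a new raw forecast with errors resampled from its class (the paper additionally stratifies by lead time, omitted here). The error model and all numbers below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(5)

      # Archive of "perfect" hindcasts (observed meteorology) and matching observations.
      hindcast = rng.gamma(2.0, 50.0, 5000)
      obs = hindcast * rng.lognormal(0.0, 0.25, 5000)     # assumed multiplicative error

      # Empirical approach: stratify relative errors by streamflow quantile class.
      n_classes = 5
      edges = np.quantile(hindcast, np.linspace(0, 1, n_classes + 1))
      cls = np.clip(np.searchsorted(edges, hindcast, side="right") - 1,
                    0, n_classes - 1)

      def dress(raw_forecast, n_members=50):
          """Dress a raw forecast with errors resampled from its quantile class."""
          c = np.clip(np.searchsorted(edges, raw_forecast, side="right") - 1,
                      0, n_classes - 1)
          errors = (obs / hindcast)[cls == c]
          return raw_forecast * rng.choice(errors, size=n_members)

      members = dress(120.0)
      print(f"dressed ensemble: median={np.median(members):.1f}, "
            f"90% interval=({np.quantile(members, 0.05):.1f}, "
            f"{np.quantile(members, 0.95):.1f})")

    The dynamical approach replaces the quantile-class-only stratification with one that also conditions on the recent streamflow variation, which is what lets it capture state-dependent error behaviour.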

  3. "vocd": A Theoretical and Empirical Evaluation

    ERIC Educational Resources Information Center

    McCarthy, Philip M.; Jarvis, Scott

    2007-01-01

    A reliable index of lexical diversity (LD) has remained stubbornly elusive for over 60 years. Meanwhile, researchers in fields as varied as "stylistics," "neuropathology," "language acquisition," and even "forensics" continue to use flawed LD indices--often ignorant that their results are questionable and in…

  4. Measuring Modernism: Theoretical and Empirical Explorations

    ERIC Educational Resources Information Center

    Schnaiberg, Allan

    1970-01-01

    Using data from married Turkish women in Ankara city and four villages, it appears that each of the (6) measures of modernism represents a distinct behavioral sphere. A common denominator appears to lie in an "emancipation" complex. (Author)

  5. An Empirical Agent-Based Model to Simulate the Adoption of Water Reuse Using the Social Amplification of Risk Framework.

    PubMed

    Kandiah, Venu; Binder, Andrew R; Berglund, Emily Z

    2017-01-11

    Water reuse can serve as a sustainable alternative water source for urban areas. However, the successful implementation of large-scale water reuse projects depends on community acceptance. Because of the negative perceptions that are traditionally associated with reclaimed water, water reuse is often not considered in the development of urban water management plans. This study develops a simulation model for understanding community opinion dynamics surrounding the issue of water reuse, and how individual perceptions evolve within that context, which can help in the planning and decision-making process. Based on the social amplification of risk framework, our agent-based model simulates consumer perceptions, discussion patterns, and their adoption or rejection of water reuse. The model is based on the "risk publics" model, an empirical approach that uses the concept of belief clusters to explain the adoption of new technology. Each household is represented as an agent, and parameters that define their behavior and attributes are defined from survey data. Community-level parameters, including social groups, relationships, and communication variables (also from survey data), are encoded to simulate the social processes that influence community opinion. The model demonstrates its capabilities to simulate opinion dynamics and consumer adoption of water reuse. In addition, based on empirical data, the model is applied to investigate water reuse behavior in different regions of the United States. Importantly, our results reveal that public opinion dynamics emerge differently based on membership in opinion clusters, frequency of discussion, and the structure of social networks.
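
    A stripped-down sketch of such an agent-based opinion model: agents hold a risk perception, discussion nudges it toward an amplified or attenuated peer signal, and adoption follows from a threshold. The network, parameters, and update rule below are invented simplifications; the paper's belief-cluster ("risk publics") structure is not modeled.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 500                                   # households
      k = 8                                     # contacts per household

      # Each agent holds a risk perception in [0, 1]; above the threshold it rejects reuse.
      risk = rng.beta(2, 2, n)
      threshold = 0.6
      amplification = rng.normal(1.0, 0.1, n)   # agent-specific (de)amplification of peer risk

      # Random social network: each agent talks to k random others.
      contacts = np.array([rng.choice(n, k, replace=False) for _ in range(n)])

      for step in range(50):
          talkers = rng.random(n) < 0.3                    # only some discuss each step
          peer_risk = risk[contacts].mean(axis=1)
          # Discussion nudges perception toward the (amplified) peer signal.
          risk = np.where(talkers,
                          0.9 * risk + 0.1 * np.clip(amplification * peer_risk, 0, 1),
                          risk)

      adopters = (risk < threshold).mean()
      print(f"fraction adopting water reuse after 50 steps: {adopters:.2%}")

    In the full model, the discussion probability, network structure, and amplification parameters would come from the survey data rather than being drawn at random.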

  6. Modeling child-based theoretical reading constructs with struggling adult readers.

    PubMed

    Nanda, Alice O; Greenberg, Daphne; Morris, Robin

    2010-01-01

    This study examined whether measurement constructs behind reading-related tests for struggling adult readers are similar to what is known about measurement constructs for children. The sample included 371 adults reading between the third-and fifth-grade levels, including 127 men and 153 English speakers of other languages. Using measures of skills and subskills, confirmatory factor analyses were conducted to test child-based theoretical measurement models of reading: an achievement model of reading skills, a core deficit model of reading subskills, and an integrated model containing achievement and deficit variables. Although the findings present the best measurement models, the contribution of this article is the description of the difficulties encountered when applying child-based assumptions to developing measurement models for struggling adult readers.

  7. A theoretical derivation of the dilatancy equation for brittle rocks based on Maxwell model

    NASA Astrophysics Data System (ADS)

    Li, Jie; Huang, Houxu; Wang, Mingyang

    2017-01-01

    In this paper, the micro-cracks in brittle rocks are assumed to be penny-shaped and evenly distributed; the damage and dilatancy of brittle rocks are attributed to the growth and expansion of numerous micro-cracks under local tensile stress. A single crack's behaviour under local tensile stress is generalized to all cracks based on distributed damage mechanics. The relationship between the local tensile stress and the external loading is derived based on the Maxwell model. The damage factor corresponding to the external loading is represented using the p-alpha (p-α) model. A dilatancy equation is established that links the external loading to the rock dilatancy. A dilatancy test of a brittle rock under triaxial compression was conducted; the comparison between the experimental results and our theoretical results shows good consistency.

  8. Theoretical investigation of kinetics of a Cu2S-based gap-type atomic switch

    NASA Astrophysics Data System (ADS)

    Nayak, Alpana; Tsuruoka, Tohru; Terabe, Kazuya; Hasegawa, Tsuyoshi; Aono, Masakazu

    2011-06-01

    An atomic switch, which operates by forming and dissolving a metal protrusion in a nanogap, shows an exponentially large bias dependence and faster switching with increasing temperature and decreasing off-resistance. These major characteristics are explained with a simple model in which the electrochemical potential at the subsurface of the solid-electrolyte electrode determines the precipitation rate of metal atoms and the electric field in the nanogap strongly affects the formation of the metal protrusion. The theoretically calculated switching time based on this model reproduced well the measured properties of a Cu2S-based atomic switch as a function of bias, temperature, and off-resistance, providing significant physical insight into the mechanism.

  9. Experimental and theoretical performance analysis for a CMOS-based high resolution image detector

    PubMed Central

    Jain, Amit; Bednarek, Daniel R.; Rudin, Stephen

    2014-01-01

    Increasing complexity of endovascular interventional procedures requires superior x-ray imaging quality. Present state-of-the-art x-ray imaging detectors may not be adequate due to their inherent noise and resolution limitations. With recent developments, CMOS based detectors are presenting an option to fulfill the need for better image quality. For this work, a new CMOS detector has been analyzed experimentally and theoretically in terms of sensitivity, MTF and DQE. The detector (Dexela Model 1207, Perkin-Elmer Co., London, UK) features 14-bit image acquisition, a CsI phosphor, 75 µm pixels and an active area of 12 cm × 7 cm with over 30 fps frame rate. This detector has two modes of operations with two different full-well capacities: high and low sensitivity. The sensitivity and instrumentation noise equivalent exposure (INEE) were calculated for both modes. The detector modulation-transfer function (MTF), noise-power spectra (NPS) and detective quantum efficiency (DQE) were measured using an RQA5 spectrum. For the theoretical performance evaluation, a linear cascade model with an added aliasing stage was used. The detector showed excellent linearity in both modes. The sensitivity and the INEE of the detector were found to be 31.55 DN/µR and 0.55 µR in high sensitivity mode, while they were 9.87 DN/µR and 2.77 µR in low sensitivity mode. The theoretical and experimental values for the MTF and DQE showed close agreement with good DQE even at fluoroscopic exposure levels. In summary, the Dexela detector's imaging performance in terms of sensitivity, linear system metrics, and INEE demonstrates that it can overcome the noise and resolution limitations of present state-of-the-art x-ray detectors. PMID:25300571

  10. Experimental and theoretical performance analysis for a CMOS-based high resolution image detector

    NASA Astrophysics Data System (ADS)

    Jain, Amit; Bednarek, Daniel R.; Rudin, Stephen

    2014-03-01

    Increasing complexity of endovascular interventional procedures requires superior x-ray imaging quality. Present state-of-the-art x-ray imaging detectors may not be adequate due to their inherent noise and resolution limitations. With recent developments, CMOS based detectors are presenting an option to fulfill the need for better image quality. For this work, a new CMOS detector has been analyzed experimentally and theoretically in terms of sensitivity, MTF and DQE. The detector (Dexela Model 1207, Perkin-Elmer Co., London, UK) features 14-bit image acquisition, a CsI phosphor, 75 μm pixels and an active area of 12 cm × 7 cm with over 30 fps frame rate. This detector has two modes of operations with two different full-well capacities: high and low sensitivity. The sensitivity and instrumentation noise equivalent exposure (INEE) were calculated for both modes. The detector modulation-transfer function (MTF), noise-power spectra (NPS) and detective quantum efficiency (DQE) were measured using an RQA5 spectrum. For the theoretical performance evaluation, a linear cascade model with an added aliasing stage was used. The detector showed excellent linearity in both modes. The sensitivity and the INEE of the detector were found to be 31.55 DN/μR and 0.55 μR in high sensitivity mode, while they were 9.87 DN/μR and 2.77 μR in low sensitivity mode. The theoretical and experimental values for the MTF and DQE showed close agreement with good DQE even at fluoroscopic exposure levels. In summary, the Dexela detector's imaging performance in terms of sensitivity, linear system metrics, and INEE demonstrates that it can overcome the noise and resolution limitations of present state-of-the-art x-ray detectors.

  11. Experimental and theoretical performance analysis for a CMOS-based high resolution image detector.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2014-03-19

    Increasing complexity of endovascular interventional procedures requires superior x-ray imaging quality. Present state-of-the-art x-ray imaging detectors may not be adequate due to their inherent noise and resolution limitations. With recent developments, CMOS based detectors are presenting an option to fulfill the need for better image quality. For this work, a new CMOS detector has been analyzed experimentally and theoretically in terms of sensitivity, MTF and DQE. The detector (Dexela Model 1207, Perkin-Elmer Co., London, UK) features 14-bit image acquisition, a CsI phosphor, 75 µm pixels and an active area of 12 cm × 7 cm with over 30 fps frame rate. This detector has two modes of operations with two different full-well capacities: high and low sensitivity. The sensitivity and instrumentation noise equivalent exposure (INEE) were calculated for both modes. The detector modulation-transfer function (MTF), noise-power spectra (NPS) and detective quantum efficiency (DQE) were measured using an RQA5 spectrum. For the theoretical performance evaluation, a linear cascade model with an added aliasing stage was used. The detector showed excellent linearity in both modes. The sensitivity and the INEE of the detector were found to be 31.55 DN/µR and 0.55 µR in high sensitivity mode, while they were 9.87 DN/µR and 2.77 µR in low sensitivity mode. The theoretical and experimental values for the MTF and DQE showed close agreement with good DQE even at fluoroscopic exposure levels. In summary, the Dexela detector's imaging performance in terms of sensitivity, linear system metrics, and INEE demonstrates that it can overcome the noise and resolution limitations of present state-of-the-art x-ray detectors.

  12. What 'empirical turn in bioethics'?

    PubMed

    Hurst, Samia

    2010-10-01

    Uncertainty as to how we should articulate empirical data and normative reasoning seems to underlie most difficulties regarding the 'empirical turn' in bioethics. This article examines three different ways in which we could understand 'empirical turn'. Using real facts in normative reasoning is trivial and would not represent a 'turn'. Becoming an empirical discipline through a shift to the social and neurosciences would be a turn away from normative thinking, which we should not take. Conducting empirical research to inform normative reasoning is the usual meaning given to the term 'empirical turn'. In this sense, however, the turn is incomplete. Bioethics has imported methodological tools from empirical disciplines, but too often it has not imported the standards to which researchers in these disciplines are held. Integrating empirical and normative approaches also represents true added difficulties. Addressing these issues from the standpoint of debates on the fact-value distinction can cloud very real methodological concerns by displacing the debate to a level of abstraction where they need not be apparent. Ideally, empirical research in bioethics should meet standards for empirical and normative validity similar to those used in the source disciplines for these methods, and articulate these aspects clearly and appropriately. More modestly, criteria to ensure that none of these standards are completely left aside would improve the quality of empirical bioethics research and partly clear the air of critiques addressing its theoretical justification, when its rigour in the particularly difficult context of interdisciplinarity is what should be at stake.

  13. Empirical estimation of genome-wide significance thresholds based on the 1000 Genomes Project data set

    PubMed Central

    Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori

    2016-01-01

    To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P=5.0 × 10−8, the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were Psig=3.24 × 10−8 (AFR), 9.26 × 10−8 (EUR), 1.83 × 10−7 (AMR), 1.61 × 10−7 (EAS) and 9.46 × 10−8 (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded Psig=3.25 × 10−8 (ALL) and 4.20 × 10−8 (ΔAFR). Our results indicate that the current threshold (P=5.0 × 10−8) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples. PMID:27305981

  14. Empirical estimation of genome-wide significance thresholds based on the 1000 Genomes Project data set.

    PubMed

    Kanai, Masahiro; Tanaka, Toshihiro; Okada, Yukinori

    2016-10-01

    To assess the statistical significance of associations between variants and traits, genome-wide association studies (GWAS) should employ an appropriate threshold that accounts for the massive burden of multiple testing in the study. Although most studies in the current literature commonly set a genome-wide significance threshold at the level of P=5.0 × 10−8, the adequacy of this value for respective populations has not been fully investigated. To empirically estimate thresholds for different ancestral populations, we conducted GWAS simulations using the 1000 Genomes Phase 3 data set for Africans (AFR), Europeans (EUR), Admixed Americans (AMR), East Asians (EAS) and South Asians (SAS). The estimated empirical genome-wide significance thresholds were Psig=3.24 × 10−8 (AFR), 9.26 × 10−8 (EUR), 1.83 × 10−7 (AMR), 1.61 × 10−7 (EAS) and 9.46 × 10−8 (SAS). We additionally conducted trans-ethnic meta-analyses across all populations (ALL) and all populations except for AFR (ΔAFR), which yielded Psig=3.25 × 10−8 (ALL) and 4.20 × 10−8 (ΔAFR). Our results indicate that the current threshold (P=5.0 × 10−8) is overly stringent for all ancestral populations except for Africans; however, we should employ a more stringent threshold when conducting a meta-analysis, regardless of the presence of African samples.
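
    The logic of an empirical genome-wide threshold can be sketched by simulating many null GWAS and taking the 5th percentile of the per-study minimum p-value. The toy below assumes independent variants, so it essentially recovers the Bonferroni bound; the population differences reported above arise precisely because real genomes have linkage disequilibrium, which this sketch does not model.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(2024)

      m = 100_000      # number of variants, assumed independent (real genomes have LD)
      n_sims = 1000    # replicated null GWAS

      min_p = np.empty(n_sims)
      for i in range(n_sims):
          z = rng.standard_normal(m)               # null association z-scores
          min_p[i] = (2.0 * norm.sf(np.abs(z))).min()

      # Family-wise 5% threshold: 5th percentile of the null minimum p-value.
      p_sig = np.quantile(min_p, 0.05)
      print(f"empirical threshold: {p_sig:.2e}  (Bonferroni: {0.05 / m:.2e})")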

  15. Theoretical investigation of conductivity sensitivities of SiC-based bio-chemical acoustic wave sensors

    NASA Astrophysics Data System (ADS)

    Fan, Li; Chen, Zhe; Zhang, Shu-yi; Zhang, Hui

    2014-02-01

    The phase velocities, electromechanical coupling coefficients, conductivity sensitivities, insertion losses, and minimum detectable masses of Rayleigh and Lamb wave sensors based on silicon carbide (SiC) substrates are theoretically studied. The results are compared with the performance of sensors based on conventional silicon substrates. It is found that the sensors using SiC substrates have higher electromechanical coupling coefficients and conductivity sensitivities than conventional silicon-based sensors, by virtue of the piezoelectricity of SiC. Moreover, the higher phase velocities in SiC substrates can reduce the insertion losses and minimum detectable masses of the sensors. Thus, in detecting a gas of tiny mass such as hydrogen, for which the conductivity sensitivity is more important than the mass sensitivity, a sensor based on a SiC substrate has higher sensitivity and exhibits the potential to detect gas concentrations below the ppm level. According to the results, the performance of Rayleigh and Lamb wave sensors using SiC substrates can be optimized by properly selecting the piezoelectric films, structural parameters, and operating wavelengths.

  16. [Synthesis and theoretical study on the fluorescence property of a 4-(2-hydroxybenzylideneamino)phenyl ethanone Schiff base].

    PubMed

    Liang, Xiao-Rui; Wang, Gang; Jiang, Yan-Lan; Qu, Cheng-Li; Wang, Xiu-Juan; Zhao, Bo

    2013-12-01

    Using salicylaldehyde and 4-aminophenyl ethanone as raw materials, a Schiff base derivative, 4-(2-hydroxybenzylideneamino)phenyl ethanone, was synthesized by a solid phase reaction at room temperature. The structure of the product was characterized by elemental analysis and ¹H NMR. The UV spectra, fluorescence emission spectra and fluorescence quantum yield of the title Schiff base derivative were investigated. The results showed that this Schiff base displays a superior fluorescence property. The ground state configuration of the title Schiff base was optimized by the density functional theory (DFT) method at the B3LYP/6-311G level. Vibrational analysis found no imaginary frequency, which indicates that the structure is stable. The excited state configuration was then optimized by the configuration interaction singles (CIS) method. Based on the optimized ground and excited state structures, time-dependent density functional theory (TD-DFT) calculations were carried out at the B3LYP/6-31G level to predict the absorption and fluorescence spectra. The computed spectra were comparable with the experimental spectra. The relationship between the molecular structure and the fluorescence property of 4-(2-hydroxybenzylideneamino)phenyl ethanone is also discussed. The results may provide some theoretical guidance for the design of new fluorescent compounds.

  17. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system: a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Whereas previous studies in DASH have focused on H.264/AVC, we present an empirical evaluation of the HEVC-DASH system on a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an OpenHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions, such as packet loss, bandwidth and delay, on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.

  18. Metamaterial-based theoretical description of light scattering by metallic nano-hole array structures

    SciTech Connect

    Singh, Mahi R.; Najiminaini, Mohamadreza; Carson, Jeffrey J. L.; Balakrishnan, Shankar

    2015-05-14

    We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in the second quantized form. Working with the Hamiltonian, a theory of scattering cross section was developed based on the quantum scattering theory and Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.

  19. (E)-2-[(2-hydroxybenzylidene)amino]phenylarsonic acid Schiff base: Synthesis, characterization and theoretical studies

    NASA Astrophysics Data System (ADS)

    Judith Percino, M.; Cerón, Margarita; Castro, María Eugenia; Ramírez, Ricardo; Soriano, Guillermo; Chapela, Víctor M.

    2015-02-01

    The structure of the Schiff base (E)-2-[(2-hydroxybenzylidene)amino]phenylarsonic acid [(E)-HBAPhAA], synthesized from salicylaldehyde and o-aminophenylarsonic acid in the presence of HCl, was characterized by FTIR, ¹H NMR, EI-MS, UV-Vis spectroscopy, and X-ray crystallography. The crystal belonged to the monoclinic space group P21/c. Two molecules formed a dimer via intermolecular interactions due to the attachment of H atoms to O1, O3 and O4, with O-H bond distances within reasonable ranges, ca. 0.84(3) Å. The structure also showed two intramolecular N-H⋯O hydrogen-bond interactions of 2.634(2) and 3.053(2) Å, which caused the structures to be almost planar. We performed a theoretical analysis using DFT at the B3LYP/6-31+G(d,p) level to determine the stability of the E and Z conformers. The geometry analysis of the E- and Z-isomers revealed an interconversion energy barrier between the E/Z isomers of 22.72 kcal mol⁻¹. We also theoretically analyzed the keto form of the E-isomer and observed a small tautomerization energy barrier of 6.17 kcal mol⁻¹.

  20. EEG-fMRI based information theoretic characterization of the human perceptual decision system.

    PubMed

    Ostwald, Dirk; Porcaro, Camillo; Mayhew, Stephen D; Bagshaw, Andrew P

    2012-01-01

    The modern metaphor of the brain is that of a dynamic information processing device. In the current study we investigate how a core cognitive network of the human brain, the perceptual decision system, can be characterized with regard to its spatiotemporal representation of task-relevant information. We capitalize on a recently developed information theoretic framework for the analysis of simultaneously acquired electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data (Ostwald et al. (2010), NeuroImage 49: 498-516). We show how this framework naturally extends from previous validations in the sensory domain to the cognitive domain and how it enables an economical description of neural spatiotemporal information encoding. Specifically, based on simultaneous EEG-fMRI data features from n = 13 observers performing a visual perceptual decision task, we demonstrate how the information theoretic framework reproduces earlier findings on the neurobiological underpinnings of perceptual decisions from the response signal features' marginal distributions. Furthermore, using the joint EEG-fMRI feature distribution, we provide novel evidence for a highly distributed and dynamic encoding of task-relevant information in the human brain.
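
    As a toy illustration of the kind of quantity involved, the sketch below computes a plug-in (histogram) estimate of the mutual information between a response feature and the trial condition, for a marginal EEG feature, a marginal fMRI feature, and a crude joint feature. The data are synthetic and the estimator is a generic one, not the specific estimator of the cited framework.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=8):
        """Plug-in (histogram) estimate of I(X; Y) in bits."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(7)
    condition = rng.integers(0, 2, size=400)             # two task conditions
    eeg_feat = condition + 0.8 * rng.normal(size=400)    # informative EEG feature
    fmri_feat = 0.5 * condition + rng.normal(size=400)   # weaker fMRI feature
    joint = eeg_feat + fmri_feat                         # crude joint feature

    for name, f in [("EEG", eeg_feat), ("fMRI", fmri_feat), ("joint", joint)]:
        print(f"I({name}; condition) ~ {mutual_information(f, condition):.3f} bits")
    ```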

  1. Metamaterial-based theoretical description of light scattering by metallic nano-hole array structures

    NASA Astrophysics Data System (ADS)

    Singh, Mahi R.; Najiminaini, Mohamadreza; Balakrishnan, Shankar; Carson, Jeffrey J. L.

    2015-05-01

    We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in the second quantized form. Working with the Hamiltonian, a theory of scattering cross section was developed based on the quantum scattering theory and Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.

  2. An automatic electroencephalography blinking artefact detection and removal method based on template matching and ensemble empirical mode decomposition.

    PubMed

    Bizopoulos, Paschalis A; Al-Ani, Tarik; Tsalikakis, Dimitrios G; Tzallas, Alexandros T; Koutsouris, Dimitrios D; Fotiadis, Dimitrios I

    2013-01-01

    Electrooculographic (EOG) artefacts are one of the most common causes of electroencephalogram (EEG) distortion. In this paper, we propose a method for detecting and removing EOG Blinking Artefacts (BAs) from EEG. The Normalized Correlation Coefficient (NCC), based on a predetermined BA template library, was used for detecting the BA. Ensemble Empirical Mode Decomposition (EEMD) was applied to the contaminated region, and a statistical algorithm determined which Intrinsic Mode Functions (IMFs) correspond to the BA. The proposed method was applied to simulated EEG signals contaminated with artificially created EOG BAs, increasing the Signal-to-Error Ratio (SER) of the EEG Contaminated Region (CR) by 35 dB on average.
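
    The detection step can be sketched as follows: slide a blink template across the recording and flag windows whose normalized correlation coefficient exceeds a threshold. The template shape, sampling rate and the 0.8 threshold below are illustrative placeholders, not values from the paper; only NumPy is assumed.

    ```python
    import numpy as np

    def ncc_scan(signal, template):
        """Normalized correlation coefficient (Pearson r) between the template
        and every sliding window of the signal."""
        m = len(template)
        t = (template - template.mean()) / template.std()
        scores = np.zeros(len(signal) - m + 1)
        for i in range(len(scores)):
            w = signal[i:i + m]
            s = w.std()
            if s > 0:
                scores[i] = np.dot((w - w.mean()) / s, t) / m
        return scores

    fs = 256                                    # sampling rate (placeholder)
    template = np.exp(-0.5 * ((np.arange(fs) - fs / 2) / (fs / 10)) ** 2)
    eeg = np.random.default_rng(1).normal(size=5 * fs)
    eeg[400:400 + fs] += 5 * template           # inject one synthetic blink
    hits = np.where(ncc_scan(eeg, template) > 0.8)[0]
    print("candidate blink onsets:", hits[:5])
    ```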

  3. An Empirical Model-based MOE for Friction Reduction by Slot-Ejected Polymer Solutions in an Aqueous Environment

    DTIC Science & Technology

    2007-12-21

    Technical report prepared by Dr. John G. Pierce under Contract No. N00014-06-C-0535 and submitted to the Office of Naval Research, 21 December 2007. Approved for public release; distribution unlimited.

  4. Empirical rainfall thresholds and copula based IDF curves for shallow landslides and flash floods

    NASA Astrophysics Data System (ADS)

    Bezak, Nejc; Šraj, Mojca; Brilly, Mitja; Mikoš, Matjaž

    2015-04-01

    Large mass movements, such as deep-seated landslides or large debris flows, and flash floods can endanger human lives and cause huge environmental and economic damage in hazard areas. The main objective of the study was to investigate the characteristics of selected extreme rainfall events that triggered landslides and caused flash floods in Slovenia in the last 25 years. Seven extreme events that occurred in Slovenia (Europe) in the last 25 years (1990-2014) and caused 17 casualties and about 500 million Euros of economic loss were analysed in this study. Post-event analyses showed that the rainfall characteristics triggering flash floods and landslides differ: landslides were triggered by longer-duration rainfall events (up to one or a few weeks), whereas flash floods were triggered by short-duration rainfall events (a few hours to one or two days). The sensitivity analysis results indicate that the inter-event time variable, defined as the minimum duration of the period without rain between two consecutive rainfall events, and the sample definition methodology can significantly influence the position of rainfall events in the intensity-duration space, the constructed intensity-duration-frequency (IDF) curves, and the relationship between the empirical rainfall threshold curves and the IDF curves constructed using the copula approach. The empirical rainfall threshold curves (ID curves) were also evaluated for the selected extreme events. The results indicate that a combination of several empirical rainfall thresholds with an appropriately dense rainfall measuring network can be used as part of an early warning system for the initiation of landslides and debris flows. However, different rainfall threshold curves should be used for lowland and mountainous areas in Slovenia. Furthermore, the IDF relationship was constructed using Frank copula functions for 16 pluviographic meteorological stations in Slovenia.

  5. Generalised Linear Models Incorporating Population Level Information: An Empirical Likelihood Based Approach

    PubMed Central

    Chaudhuri, Sanjay; Handcock, Mark S.; Rendall, Michael S.

    2011-01-01

    In many situations information from a sample of individuals can be supplemented by population level information on the relationship between a dependent variable and explanatory variables. Inclusion of the population level information can reduce bias and increase the efficiency of the parameter estimates. Population level information can be incorporated via constraints on functions of the model parameters. In general, the constraints are nonlinear, making the task of maximum likelihood estimation harder. In this paper we develop an alternative approach exploiting the notion of an empirical likelihood. It is shown that within the framework of generalised linear models, the population level information corresponds to linear constraints, which are comparatively easy to handle. We provide a two-step algorithm that produces parameter estimates using only unconstrained estimation. We also provide computable expressions for the standard errors. We give an application to demographic hazard modelling by combining panel survey data with birth registration data to estimate annual birth probabilities by parity. PMID:22740776

  6. Ab initio based empirical potential used to study the mechanical properties of molybdenum

    NASA Astrophysics Data System (ADS)

    Park, Hyoungki; Fellinger, Michael R.; Lenosky, Thomas J.; Tipton, William W.; Trinkle, Dallas R.; Rudin, Sven P.; Woodward, Christopher; Wilkins, John W.; Hennig, Richard G.

    2012-06-01

    Density-functional theory energies, forces, and elastic constants determine the parametrization of an empirical, modified embedded-atom method potential for molybdenum. The accuracy and transferability of the potential are verified by comparison to experimental and density-functional data for point defects, phonons, thermal expansion, surface and stacking fault energies, and ideal shear strength. Searching the energy landscape predicted by the potential using a genetic algorithm verifies that it reproduces not only the correct bcc ground state of molybdenum but also all low-energy metastable phases. The potential is also applicable to the study of plastic deformation and used to compute energies, core structures, and Peierls stresses of screw and edge dislocations.

  7. Stability evaluation of short-circuiting gas metal arc welding based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Wang, Kehong; Zhou, Zhilan; Zhou, Xiaoxiao; Fang, Jimi

    2017-03-01

    The arc of gas metal arc welding (GMAW) contains abundant information about its stability and droplet transition, which can be effectively characterized by extracting the arc electrical signals. In this study, ensemble empirical mode decomposition (EEMD) was used to evaluate the stability of electrical current signals. The welding electrical signals were first decomposed by EEMD, and then transformed to a Hilbert–Huang spectrum and a marginal spectrum. The marginal spectrum is an approximate distribution of signal amplitude with frequency, and can be described by a marginal index. Analysis of various welding process parameters showed that the marginal index of current signals increased when the welding process was more stable, and vice versa. Thus EEMD combined with the marginal index can effectively uncover the stability and droplet transition of GMAW.
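
    A minimal sketch of this signal path (EEMD, Hilbert transform, marginal spectrum, scalar index) is given below. It assumes the PyEMD package (pip install EMD-signal) and SciPy, uses a synthetic current signal, and the scalar summary at the end is only in the spirit of the paper's marginal index, whose exact definition is not given in the abstract.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EEMD            # assumes the EMD-signal package is installed

    fs = 10_000                        # sampling rate in Hz (placeholder)
    t = np.arange(0, 0.5, 1 / fs)
    rng = np.random.default_rng(2)
    current = np.sin(2 * np.pi * 90 * t) + 0.3 * rng.normal(size=t.size)

    imfs = EEMD().eemd(current)        # ensemble empirical mode decomposition

    # Hilbert spectrum: instantaneous amplitude and frequency per IMF.
    analytic = hilbert(imfs, axis=1)
    amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic), axis=1), axis=1) * fs / (2 * np.pi)

    # Marginal spectrum: accumulate amplitude over time within frequency bins.
    bins = np.linspace(0, fs / 2, 200)
    marginal = np.zeros(bins.size - 1)
    for a, f in zip(amp[:, 1:], inst_freq):
        idx = np.clip(np.digitize(f, bins) - 1, 0, bins.size - 2)
        np.add.at(marginal, idx, a)

    marginal_index = marginal.max() / marginal.sum()   # simple scalar summary
    print(f"marginal index (sketch): {marginal_index:.3f}")
    ```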

  8. Bearing fault detection based on hybrid ensemble detector and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Georgoulas, George; Loutas, Theodore; Stylios, Chrysostomos D.; Kostopoulos, Vassilis

    2013-12-01

    Aiming at more efficient fault diagnosis, this research work presents an integrated anomaly detection approach for seeded bearing faults. Vibration signals from normal bearings and bearings with three different fault locations, as well as different fault sizes and loading conditions are examined. The Empirical Mode Decomposition and the Hilbert Huang transform are employed for the extraction of a compact feature set. Then, a hybrid ensemble detector is trained using data coming only from the normal bearings and it is successfully applied for the detection of any deviation from the normal condition. The results prove the potential use of the proposed scheme as a first stage of an alarm signalling system for the detection of bearing faults irrespective of their loading condition.

  9. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received growing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology that combines the two methods and applies it to gold price analysis. It includes three steps: first, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD; second, the IMFs are further processed, with unimportant ones re-grouped, to reconstruct a new set of data called Virtual Intrinsic Mode Functions (VIMFs); finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail according to their trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. Regression analysis was also conducted to verify the analysis. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.

  10. EVA: A new theoretically based molecular descriptor for use in QSAR/QSPR analysis

    NASA Astrophysics Data System (ADS)

    Ferguson, A. M.; Heritage, T.; Jonathon, P.; Pack, S. E.; Phillips, L.; Rogan, J.; Snaith, P. J.

    1997-03-01

    A new descriptor of molecular structure, EVA, for use in the derivation of robustly predictive QSAR relationships is described. It is based on theoretically derived normal coordinate frequencies, and has been used extensively and successfully in proprietary chemical discovery programmes within Shell Research. As a result of informal dissemination of the methodology, it is now being used successfully in related areas such as pharmaceutical drug discovery. Much of the experimental data used in development remain proprietary, and are not available for publication. This paper describes the method and illustrates its application to the calculation of nonproprietary data, log Pow, in both explanatory and predictive modes. It will be followed by other publications illustrating its application to a range of data derived from biological systems.

  11. Theoretical investigation of hydrogen transfer mechanism in the adenine thymine base pair

    NASA Astrophysics Data System (ADS)

    Villani, Giovanni

    2005-09-01

    We have studied the quantum dynamics of the hydrogen bonds in the adenine-thymine base pair. Depending on the positions of the hydrogen atoms, different tautomers are possible: the stable Watson-Crick A-T, the imino-enol A*-T*, and the zwitterionic (charge-separated) A⁺-T⁻ and A⁻-T⁺ structures. The common idea in the literature is that only A-T exists, either because the energy difference between this tautomer and the others is large or because the other structures are transformed quickly into A-T. Here, we present a detailed theoretical study that suggests the following conclusions: A-T is the most stable tautomer, a partially charged system is important, and a small amount of the imino-enol A*-T* tautomer is present at any time. The mechanism of passage from the A-T tautomer to the others has also been investigated.

  12. Boron based two-dimensional crystals: theoretical design, realization proposal and applications

    NASA Astrophysics Data System (ADS)

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-11-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest in two-dimensional (2D) layered materials. Besides graphene, many other 2D materials have been successfully produced by experiment, such as silicene, monolayer MoS2 and few-layer black phosphorus. As a neighbor of carbon in the periodic table, boron is an interesting element, and many researchers have contributed their efforts to realizing boron-related 2D structures. These structures may be significant both in fundamental science and in future technical applications in nanoelectronics and nanodevices. In this review, we summarize the recent developments in 2D boron-based materials. The theoretical design, possible experimental realization strategies and potential technical applications are presented and discussed. The current challenges and prospects of this area are also discussed.

  13. Nanoscale deflection detection of a cantilever-based biosensor using MOSFET structure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Paryavi, Mohsen; Montazeri, Abbas; Tekieh, Tahereh; Sasanpour, Pezhman

    2016-10-01

    A novel method for the detection of biological species based on the measurement of cantilever deflection is proposed and numerically evaluated. With the cantilever employed as the moving gate of a MOSFET structure, its deflection can be analyzed through the current characteristics of the MOSFET. When the cantilever acts as a suspended gate of a MOSFET on a substrate, the distance between the cantilever and the oxide layer changes the carrier concentration. This results in different current-voltage characteristics of the device, which can be easily measured using simple apparatus. To verify the proposed method, the performance of the system was theoretically analyzed using the COMSOL platform. The simulation results confirm the performance and sensitivity of the proposed method.

  14. A new theoretical framework for modeling respiratory protection based on the beta distribution.

    PubMed

    Klausner, Ziv; Fattal, Eyal

    2014-08-01

    The problem of modeling respiratory protection is well known and has been dealt with extensively in the literature. Often the efficiency of respiratory protection is quantified in terms of penetration, defined as the proportion of an ambient contaminant concentration that penetrates the respiratory protection equipment. Typically, the penetration modeling framework in the literature is based on the assumption that penetration measurements follow the lognormal distribution. However, the analysis in this study leads to the conclusion that the lognormal assumption is not always valid, making it less adequate for analyzing respiratory protection measurements. This work presents a formulation of the problem from first principles, leading to a stochastic differential equation whose solution is the probability density function of the beta distribution. The data of respiratory protection experiments were reexamined, and indeed the beta distribution was found to provide the data a better fit than the lognormal. We conclude with a suggestion for a new theoretical framework for modeling respiratory protection.
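
    The model-comparison step can be sketched with SciPy: fit both candidate distributions to penetration data by maximum likelihood and compare information criteria. The data here are synthetic stand-ins for real penetration measurements, and the AIC comparison is a generic choice, not necessarily the paper's procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    pen = rng.beta(a=2.0, b=30.0, size=500)   # synthetic penetration values in (0, 1)

    # Beta fit with the support fixed to (0, 1), leaving two free shape parameters.
    a, b, loc, scale = stats.beta.fit(pen, floc=0, fscale=1)
    ll_beta = stats.beta.logpdf(pen, a, b, loc, scale).sum()

    # Lognormal fit with location fixed at 0, also leaving two free parameters.
    s, loc2, scale2 = stats.lognorm.fit(pen, floc=0)
    ll_logn = stats.lognorm.logpdf(pen, s, loc2, scale2).sum()

    # AIC = 2k - 2 ln L with k = 2 for both models; lower is better.
    print(f"AIC beta:      {4 - 2 * ll_beta:.1f}")
    print(f"AIC lognormal: {4 - 2 * ll_logn:.1f}")
    ```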

  15. Boron based two-dimensional crystals: theoretical design, realization proposal and applications.

    PubMed

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-12-07

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest in two-dimensional (2D) layered materials. Besides graphene, many other 2D materials have been successfully produced by experiment, such as silicene, monolayer MoS2 and few-layer black phosphorus. As a neighbor of carbon in the periodic table, boron is an interesting element, and many researchers have contributed their efforts to realizing boron-related 2D structures. These structures may be significant both in fundamental science and in future technical applications in nanoelectronics and nanodevices. In this review, we summarize the recent developments in 2D boron-based materials. The theoretical design, possible experimental realization strategies and potential technical applications are presented and discussed. The current challenges and prospects of this area are also discussed.

  16. Results of experimental and theoretical investigations in charge transfer transitions, scintillators and Eu²⁺ based phosphors

    NASA Astrophysics Data System (ADS)

    Srivastava, Alok M.

    2009-11-01

    A brief overview of recent results obtained in scintillators and phosphors is presented. Four topics that are at the center of considerable research, and which are important from both fundamental and practical points of view, are chosen. The identification and behavior of ligand-to-RE³⁺ (RE³⁺ = rare earth) charge transfer transitions when the ligand ions are halides and N³⁻ are reviewed. The reasons for the high light yield of the LuI₃:Ce³⁺ scintillator are investigated theoretically, and a new channel of energy transfer to excitons and directly to the Ce³⁺ ion is identified. The prospect of increasing the light yield of Ce³⁺ based scintillators by means of the Pr³⁺ ion is discussed. Finally, the remarkable luminescence of the octahedrally coordinated Eu²⁺ ion in Cs₂M²⁺P₂O₇ (M²⁺ = Ca, Sr) is discussed.

  17. New anthracene-based Schiff bases: Theoretical and experimental investigations of photophysical and electrochemical properties

    NASA Astrophysics Data System (ADS)

    Sek, Danuta; Siwy, Mariola; Grucela, Marzena; Małecki, Grzegorz; Nowak, Elżbieta M.; Lewinska, Gabriela; Santera, Jerzy; Laba, Katarzyna; Lapkowski, Mieczyslaw; Kotowicz, Sonia; Schab-Balcerzak, Ewa

    2017-03-01

    New Schiff bases bearing an anthracene unit were synthesized from 2-aminoanthracene and various aldehydes: benzaldehyde, 4-(diphenylamino)benzaldehyde, 9-phenanthrenecarboxaldehyde, 9-anthracenecarboxaldehyde, biphenyl-4-carboxaldehyde and 2-naphthaldehyde. The resulting azomethines were characterized by IR, NMR (¹H and ¹³C), elemental analysis and UV-vis spectroscopy. The imine consisting of anthracene and biphenyl moieties exhibited liquid crystal properties, and its nematic phase showed a Schlieren texture. Photoluminescence measurements carried out in solution and in the solid state as a blend with PMMA revealed the ability of the imines to emit blue light, with quantum yield efficiencies in the range of 2.18-6.03% in blend. Based on the electrochemical experiments, their energy gap (Eg) values were in the range of 2.5-2.7 eV. Additionally, density functional theory (DFT) was applied to calculate both the electronic structure and the spectroscopic properties of the synthesized Schiff bases. Moreover, the results of preliminary tests of the azomethines in organic photovoltaic (OPV) devices confirmed their electron acceptor character.

  18. New anthracene-based Schiff bases: Theoretical and experimental investigations of photophysical and electrochemical properties.

    PubMed

    Sek, Danuta; Siwy, Mariola; Grucela, Marzena; Małecki, Grzegorz; Nowak, Elżbieta M; Lewinska, Gabriela; Santera, Jerzy; Laba, Katarzyna; Lapkowski, Mieczyslaw; Kotowicz, Sonia; Schab-Balcerzak, Ewa

    2017-03-15

    New Schiff bases bearing an anthracene unit were synthesized from 2-aminoanthracene and various aldehydes: benzaldehyde, 4-(diphenylamino)benzaldehyde, 9-phenanthrenecarboxaldehyde, 9-anthracenecarboxaldehyde, biphenyl-4-carboxaldehyde and 2-naphthaldehyde. The resulting azomethines were characterized by IR, NMR (¹H and ¹³C), elemental analysis and UV-vis spectroscopy. The imine consisting of anthracene and biphenyl moieties exhibited liquid crystal properties, and its nematic phase showed a Schlieren texture. Photoluminescence measurements carried out in solution and in the solid state as a blend with PMMA revealed the ability of the imines to emit blue light, with quantum yield efficiencies in the range of 2.18-6.03% in blend. Based on the electrochemical experiments, their energy gap (Eg) values were in the range of 2.5-2.7 eV. Additionally, density functional theory (DFT) was applied to calculate both the electronic structure and the spectroscopic properties of the synthesized Schiff bases. Moreover, the results of preliminary tests of the azomethines in organic photovoltaic (OPV) devices confirmed their electron acceptor character.

  19. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries

    PubMed Central

    Dash, Ranjan; Pannala, Sreekanth

    2016-01-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Experimental reports of up to 40% increase in energy density of Si anode based LIBs (Si-LIBs) have been reported in literature. However, this increase in energy density is achieved when the Si-LIB is allowed to swell (volumetrically expand) more than graphite based LIB (graphite-LIB) and beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds of Si composition in a Si–carbon composite (SCC) based anode to maximize the volumetric energy density of a LIB by constraining the external dimensions of the anode during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anode (SCC-LIBs) and the potential improvement over graphite-LIBs. The level of improvement in volumetric and gravimetric energy density of SCC-LIBs with constrained volume is predicted to be less than 10% to ensure the battery has similar power characteristics of graphite-LIBs. PMID:27311811
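
    The volume-constrained accounting behind this result can be illustrated with a back-of-envelope calculation: pick a Si mass fraction, size the pre-lithiation porosity so the swollen solids exactly fill the fixed external volume, and compute the resulting capacity per unit electrode volume. The constants below are textbook values (3579 mAh/g for Li15Si4-limited silicon, 372 mAh/g for graphite, roughly 280% and 10% lithiation swelling), and the packing model is a deliberate simplification that omits the power and design constraints behind the paper's sub-10% conclusion.

    ```python
    CAP_SI, CAP_GR = 3579.0, 372.0   # theoretical gravimetric capacities, mAh/g
    RHO_SI, RHO_GR = 2.33, 2.26      # densities of Si and graphite, g/cm^3
    SWELL_SI, SWELL_GR = 2.8, 0.1    # fractional volume expansion on lithiation

    def volumetric_capacity(w_si):
        """mAh per cm^3 of electrode for Si mass fraction w_si, with porosity
        chosen so the lithiated solids exactly fill the fixed external volume."""
        w_gr = 1.0 - w_si
        v_lithiated = (w_si / RHO_SI) * (1 + SWELL_SI) + (w_gr / RHO_GR) * (1 + SWELL_GR)
        cap_per_gram = w_si * CAP_SI + w_gr * CAP_GR
        return cap_per_gram / v_lithiated

    for w in (0.00, 0.05, 0.10, 0.20):
        print(f"Si fraction {w:.2f}: {volumetric_capacity(w):7.1f} mAh/cm^3")
    ```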

  20. Theoretical investigation of all-metal-based mushroom plasmonic metamaterial absorbers at infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Ogawa, Shinpei; Fujisawa, Daisuke; Kimata, Masafumi

    2015-12-01

    High-performance wavelength-selective infrared (IR) sensors require small pixel structures, a low-thermal mass, and operation in the middle-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) regions for multicolor IR imaging. All-metal-based mushroom plasmonic metamaterial absorbers (MPMAs) were investigated theoretically and were designed to enhance the performance of wavelength-selective uncooled IR sensors. All components of the MPMAs are based on thin layers of metals such as Au without oxide insulators for increased absorption. The absorption properties of the MPMAs were investigated by rigorous coupled-wave analysis. Strong wavelength-selective absorption is realized over a wide range of MWIR and LWIR wavelengths by the plasmonic resonance of the micropatch and the narrow-gap resonance, without disturbance from the intrinsic absorption of oxide insulators. The absorption wavelength is defined mainly by the micropatch size and is longer than its period. The metal post width has less impact on the absorption properties and can maintain single-mode operation. Through-holes can be formed on the plate area to reduce the thermal mass. A small pixel size with reduced thermal mass and wideband single-mode operation can be realized using all-metal-based MPMAs.

  1. Theoretical Analysis on Mechanical Deformation of Membrane-Based Photomask Blanks

    NASA Astrophysics Data System (ADS)

    Marumoto, Kenji; Aya, Sunao; Yabe, Hedeki; Okada, Tatsunori; Sumitani, Hiroaki

    2012-04-01

    Membrane-based photomasks are used in proximity X-ray lithography, including the LIGA (Lithographie, Galvanoformung und Abformung) process, and in near-field photolithography. In this article, the out-of-plane deformation (OPD) and in-plane displacement (IPD) of membrane-based photomask blanks are theoretically analyzed to obtain mask blanks with a flat front surface and a low-stress absorber film. First, we derived equations for the OPD and IPD for the processing steps of a membrane-based photomask, such as film deposition, back-etching and bonding, using the theory of symmetrical bending of circular plates with a coaxial circular hole and that of the deformation of a cylinder under hydrostatic pressure. The validity of the equations was proved by comparing the calculated results with experimental ones. Using these equations, we investigated the general relation between the geometry of the mask blanks and the distortions, and gave a criterion for attaining a flat front surface. Moreover, the absorber stress-bias required to obtain zero stress on finished mask blanks was also calculated, and it was found that only a small stress-bias is required for an adequate support-plate hole size.

  2. Theoretical study of carbon-based tips for scanning tunnelling microscopy.

    PubMed

    González, C; Abad, E; Dappe, Y J; Cuevas, J C

    2016-03-11

    Motivated by recent experiments, we present here a detailed theoretical analysis of the use of carbon-based conductive tips in scanning tunnelling microscopy. In particular, we employ ab initio methods based on density functional theory to explore a graphitic, an amorphous carbon and two diamond-like tips for imaging with a scanning tunnelling microscope (STM), and we compare them with standard metallic tips made of gold and tungsten. We investigate the performance of these tips in terms of the corrugation of the STM images acquired when scanning a single graphene sheet. Moreover, we analyse the impact of the tip-sample distance and show that it plays a fundamental role in the resolution and symmetry of the STM images. We also explore in depth how the adsorption of single atoms and molecules in the tip apexes modifies the STM images and demonstrate that, in general, it leads to an improved image resolution. The ensemble of our results provides strong evidence that carbon-based tips can significantly improve the resolution of STM images, as compared to more standard metallic tips, which may open a new line of research in scanning tunnelling microscopy.

  3. Synthesis and characterization of three novel Schiff base compounds: Experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Taslı, P. T.; Bayrakdar, A.; Karakus, O. O.; Kart, H. H.; Koc, Y.

    2015-09-01

    In this study, three novel Schiff base compounds, N-(4-nitrobenzyl)-4-methyl bromoaniline (1a), N-(2,4-dimethoxybenzyl)-4-methyl bromoaniline (2a) and N-((1H-indol-3-yl)methylene)-4-methyl bromoaniline (3a), were synthesized and characterized using UV, IR and ¹H-NMR spectroscopy. The molecular geometry and spectroscopic properties of the synthesized compounds were also analyzed using ab initio calculation methods based on density functional theory (DFT) in the ground state. Extensive theoretical and experimental FT-IR and UV-vis spectrometry studies of the synthesized compounds were performed. The optimized molecular structure and harmonic vibrational frequencies were studied using the B3LYP/6-311++G(d,p) method. Moreover, the electronic structures were investigated using time-dependent density functional theory (TD-DFT), while the energy changes of the parent compounds were examined in a solvent medium using the polarizable continuum model (PCM). Additionally, frontier molecular orbital analysis was performed for the Schiff base compounds. The electronic properties of each compound, such as chemical hardness, chemical softness, ionization potential, electron affinity, electronegativity and chemical potential, were investigated utilizing the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies.

  4. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier information systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from a view toward the solution of a problem to one of the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the task-uncertainty of organizational-information-processing relationships from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.

  5. Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators

    SciTech Connect

    Joshi, Chan; Mori, W.

    2013-10-21

    This is the final report on DOE grant number DE-FG02-92ER40727, titled "Experimental, Theoretical and Computational Studies of Plasma-Based Concepts for Future High Energy Accelerators." During this grant period the UCLA program on Advanced Plasma Based Accelerators, headed by Professor C. Joshi, made many key scientific advances and trained a generation of students, many of whom have stayed in this research field and even started research programs of their own. In this final report, however, we focus on the last three years of the grant and report on the scientific progress made in each of the four tasks listed under the grant: plasma wakefield accelerator research at FACET, SLAC National Accelerator Laboratory; in-house research at UCLA's Neptune and 20 TW laser laboratories; laser-wakefield acceleration (LWFA) in the self-guided regime, with experiments at the Callisto laser at LLNL; and theory and simulations. Major scientific results have been obtained in each of the four tasks described in this report. These have led to publications in prestigious scientific journals and to the graduation and continued training of high-quality Ph.D. students, and have kept the U.S. at the forefront of the plasma-based accelerator research field.

  6. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries.

    PubMed

    Dash, Ranjan; Pannala, Sreekanth

    2016-06-17

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Experimental reports of up to 40% increase in energy density of Si anode based LIBs (Si-LIBs) have been reported in literature. However, this increase in energy density is achieved when the Si-LIB is allowed to swell (volumetrically expand) more than graphite based LIB (graphite-LIB) and beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds of Si composition in a Si-carbon composite (SCC) based anode to maximize the volumetric energy density of a LIB by constraining the external dimensions of the anode during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anode (SCC-LIBs) and the potential improvement over graphite-LIBs. The level of improvement in volumetric and gravimetric energy density of SCC-LIBs with constrained volume is predicted to be less than 10% to ensure the battery has similar power characteristics of graphite-LIBs.

  7. Theoretical investigation on thermoelectric properties of Cu-based chalcopyrite compounds

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Xiang, Hongjun; Nakayama, Tsuneyoshi; Zhou, Jun; Li, Baowen

    2017-01-01

    Cu-based materials are potential candidates for commercial thermoelectric materials due to their abundance, nontoxicity, and high performance. We incorporate the multiband Boltzmann transport equations with first-principles calculations to theoretically investigate the thermoelectric properties of Cu-based chalcopyrite compounds. As a demonstration of our method, the thermoelectric properties of the quaternary compounds Cu2ZnSnX4 (X = S, Se) and the ternary compounds CuBTe2 (B = Ga, In) are studied. We systematically calculate the electrical conductivity, the Seebeck coefficient, and the power factor of the four materials above, based on parameters obtained from first-principles calculations and several other fitting parameters. For the quaternary compounds, our results reveal that Cu2ZnSnSe4 is better than Cu2ZnSnS4, and its optimal hole concentration is around 5 × 10¹⁹ cm⁻³ with a peak power factor of 4.7 μW cm⁻¹ K⁻² at 600 K. For the ternary compounds, we find that the optimal hole concentrations are around 1 × 10²⁰ cm⁻³ with peak power factors over 26 μW cm⁻¹ K⁻² at 800 K.

  8. The future scalability of pH-based genome sequencers: A theoretical perspective

    NASA Astrophysics Data System (ADS)

    Go, Jonghyun; Alam, Muhammad A.

    2013-10-01

    Sequencing of the human genome is an essential prerequisite for personalized medicine and the early prognosis of various genetic diseases. The state-of-the-art, high-throughput genome sequencing technologies provide improved sequencing; however, their reliance on relatively expensive optical detection schemes has prevented widespread adoption of the technology in routine care. In contrast, the recently announced pH-based electronic genome sequencers achieve fast sequencing at low cost because of their compatibility with current microelectronics technology. While progress in technology development has been rapid, the physics of the sequencing chips and the potential for future scaling (and therefore cost reduction) remain unexplored. In this article, we develop a theoretical framework and a scaling theory to explain the principle of operation of pH-based sequencing chips, and we use the framework to explore various perceived scaling limits of the technology related to signal-to-noise ratio, well-to-well crosstalk, and sequencing accuracy. We also address several limitations inherent to the key steps of pH-based genome sequencing, which are widely shared by many other sequencing platforms on the market but have so far remained unexplained.

  9. A compound fault diagnosis method for rolling bearings based on blind source separation and ensemble empirical mode decomposition.

    PubMed

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confusing noise, which makes it difficult to separate weak fault signals using conventional methods, such as FFT-based envelope detection, wavelet transform or empirical mode decomposition, individually. In order to improve the diagnosis of compound faults in rolling bearings via signal separation, this paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix for ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in separating compound faults, which works not only for the outer race defect, but also for the roller defect and the unbalance fault of the experimental system.
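
    The EEMD-then-ICA pipeline can be sketched in a few lines. The sketch below assumes the PyEMD package (pip install EMD-signal) and scikit-learn, uses a synthetic mixture in place of measured vibration data, and the 0.1 correlation cut-off is a placeholder for the paper's cross-correlation criterion.

    ```python
    import numpy as np
    from PyEMD import EEMD                      # assumes EMD-signal is installed
    from sklearn.decomposition import FastICA   # assumes scikit-learn is installed

    fs = 12_000
    t = np.arange(0, 0.3, 1 / fs)
    rng = np.random.default_rng(4)
    # Synthetic "compound fault": an impulsive train plus a low-frequency tone and noise.
    mix = (np.sin(2 * np.pi * 157 * t) * (np.sin(2 * np.pi * 7 * t) > 0.9)
           + 0.5 * np.sin(2 * np.pi * 43 * t)
           + 0.2 * rng.normal(size=t.size))

    imfs = EEMD().eemd(mix)

    # Keep the IMFs that correlate with the raw signal (correlation criterion).
    keep = [imf for imf in imfs if abs(np.corrcoef(imf, mix)[0, 1]) > 0.1]
    X = np.array(keep).T                        # samples x selected IMFs

    # Separate statistically independent components from the selected IMFs.
    ica = FastICA(n_components=min(2, X.shape[1]), random_state=0)
    components = ica.fit_transform(X)
    print("recovered independent components:", components.shape)
    ```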

  10. Development of the Knowledge-based & Empirical Combined Scoring Algorithm (KECSA) to Score Protein-Ligand Interactions

    PubMed Central

    Zheng, Zheng

    2013-01-01

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the PDB. Forty-nine types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training set and the enthalpy/entropy of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets to compare the reproducibility of KECSA with respect to two empirical scoring functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. The validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the RMSE. LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed that would make KECSA more sensitive to subtle changes in ligand structure. PMID:23560465

  11. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Duan, Yabo; Song, Chengtian

    2016-12-01

    Empirical mode decomposition (EMD) is a recently proposed nonlinear and nonstationary laser signal denoising method. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and then a robust thresholding process based on the Spearman correlation coefficient is used for relevant mode selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
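
    The relevant-mode selection step can be sketched as follows: decompose the noisy signal with EMD, score each IMF by its absolute Spearman correlation with the input, and partially reconstruct from the modes above a cut-off. The 0.2 threshold is a placeholder (the paper derives its own rule), the per-IMF interval-thresholding stage is omitted, and PyEMD (the EMD-signal package) plus SciPy are assumed.

    ```python
    import numpy as np
    from scipy.stats import spearmanr
    from PyEMD import EMD                   # assumes EMD-signal is installed

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 2000)
    clean = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    noisy = clean + 0.4 * rng.normal(size=t.size)

    imfs = EMD().emd(noisy)

    # Score each IMF by |Spearman rho| against the noisy input and keep the
    # modes above a placeholder threshold.
    rho = np.array([abs(spearmanr(imf, noisy)[0]) for imf in imfs])
    relevant = imfs[rho > 0.2]
    denoised = relevant.sum(axis=0)
    print(f"kept {len(relevant)} of {len(imfs)} IMFs; "
          f"residual RMS: {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
    ```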

  12. Providing a contextual base and a theoretical structure to guide the teaching of science from early years to senior years

    NASA Astrophysics Data System (ADS)

    Stinner, Arthur

    1996-07-01

    This paper addresses the need for and the problem of organizing a science curriculum around contextual settings and science stories that serve to involve and motivate students to develop a scientific understanding of the world (with emphasis on physical science). A program of activities placed around contextual settings, science stories and contemporary issues of interest is recommended in an attempt to move toward a slow and secure abolition of the gulf between scientific knowledge and ‘common sense’ beliefs. A conceptual development is described to guide the connection between theory and evidence at a level appropriate for children, from the early years to the senior years. For the senior years it is also important to connect the activity of teaching to a sound theoretical structure. The theoretical structure must illuminate the status of theory in science, establish what counts as evidence, clarify the relationship between experiment and explanation, and make connections to the history of science. The paper concludes with a proposed program of activities in terms of a sequence of theoretical and empirical activities that involve contextual settings, science stories, large context problems, thematic teaching, and popular science literature teaching.

  13. Empirical application of empathy enhancing program based on movement concept for married couples in conflict

    PubMed Central

    Kim, Soo-Yeon; Kang, Hye-Won; Chung, Yong-Chul; Park, Seungha

    2013-01-01

    In the field of marital therapy, it is known that couple movement programs help married couples faced with conflict situations to rebuild their relationship and maintain family homeostasis. The purpose of this study was to configure and apply a kinesthetic empathy program and to assess its effectiveness for married couples in conflict. To achieve the research aims, a qualitative research method was conducted with three couples (six people) participating in an expressive movement program. The study used the focus group interview method for collecting data, combining semi-structured and unstructured questionnaires. The results were as follows. First, through the kinesthetic empathy enhancing program, participants could develop self-awareness and emotional attunement. Second, the results showed a relationship between intention and empathy: "knowing the spouse's hidden intention" is a significant factor in understanding others. Third, the kinesthetic empathy program could complement general marriage counseling programs. The results of this study provide empirical evidence that a movement program functions as an empathy enhancer through the process of perceiving, feeling, thinking, and interacting with others. PMID:24278896

  14. Spectral analysis of Hall-effect thruster plasma oscillations based on the empirical mode decomposition

    SciTech Connect

    Kurzyna, J.; Mazouffre, S.; Lazurenko, A.; Albarede, L.; Bonhomme, G.; Makowski, K.; Dudeck, M.; Peradzynski, Z.

    2005-12-15

    Hall-effect thruster plasma oscillations recorded by means of probes located at the channel exit are analyzed using the empirical mode decomposition (EMD) method. This self-adaptive technique permits the decomposition of a nonstationary signal into a set of intrinsic modes, and acts as a very efficient filter allowing the contributions of different underlying physical mechanisms to be separated. Applying the Hilbert transform to the whole set of modes allows peculiar events to be identified and assigned a range of instantaneous frequency and power. In addition to 25 kHz breathing-type oscillations, which are unambiguously identified, the EMD approach confirms the existence of oscillations with instantaneous frequencies in the range of 100-500 kHz, typical of ion transit-time oscillations. Modeling of the high-frequency modes (ν ≈ 10 MHz) resulting from EMD of the measured wave forms supports the idea that high-frequency plasma oscillations originate from electron-density perturbations propagating azimuthally with the electron drift velocity.

  15. Cardiopulmonary Resuscitation Pattern Evaluation Based on Ensemble Empirical Mode Decomposition Filter via Nonlinear Approaches

    PubMed Central

    Ma, Matthew Huei-Ming

    2016-01-01

    Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMFs) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and detrended fluctuation analysis (DFA) are collated, and statistical analysis is performed using ANOVA. The primary outcome measure assessed is the patient survival rate after two hours. The CPR pattern of 951 asystole patients was analyzed for the quality of CPR delivered. No significant difference was observed in the peak-to-peak interval analysis of the CPR-related IMFs for patients younger or older than 60 years of age, and similarly for the amplitude-difference evaluation with SE and DFA. However, a difference was noted for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, with high complexity of the CPR-IMF amplitude differences. PMID:27529068

  16. Deconvolution-Based CT and MR Brain Perfusion Measurement: Theoretical Model Revisited and Practical Implementation Details.

    PubMed

    Fieselmann, Andreas; Kowarschik, Markus; Ganguly, Arundhuti; Hornegger, Joachim; Fahrig, Rebecca

    2011-01-01

    Deconvolution-based analysis of CT and MR brain perfusion data is widely used in clinical practice and it is still a topic of ongoing research activities. In this paper, we present a comprehensive derivation and explanation of the underlying physiological model for intravascular tracer systems. We also discuss practical details that are needed to properly implement algorithms for perfusion analysis. Our description of the practical computer implementation is focused on the most frequently employed algebraic deconvolution methods based on the singular value decomposition. In particular, we further discuss the need for regularization in order to obtain physiologically reasonable results. We include an overview of relevant preprocessing steps and provide numerous references to the literature. We cover both CT and MR brain perfusion imaging in this paper because they share many common aspects. The combination of both the theoretical as well as the practical aspects of perfusion analysis explicitly emphasizes the simplifications to the underlying physiological model that are necessary in order to apply it to measured data acquired with current CT and MR scanners.
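
    The core deconvolution step discussed above can be sketched with a truncated singular value decomposition: build a lower-triangular Toeplitz convolution matrix from the arterial input function, invert it with the small singular values zeroed, and read the flow-scaled residue function off the solution. The curves, noise level and 20% truncation fraction below are illustrative assumptions, not clinical settings.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(6)
    dt = 1.0                                    # sampling interval, s
    t = np.arange(0, 60, dt)

    # Synthetic arterial input function (gamma-variate) and tissue curve
    # (AIF convolved with an exponential residue function, plus noise).
    aif = np.where(t > 5, ((t - 5) ** 3) * np.exp(-(t - 5) / 1.5), 0.0)
    aif /= aif.max()
    residue_true = np.exp(-t / 4.0)             # CBF * R(t) with CBF = 1
    tissue = dt * np.convolve(aif, residue_true)[: t.size]
    tissue += 0.01 * rng.normal(size=t.size)

    # Discrete convolution expressed as a lower-triangular Toeplitz matrix.
    A = dt * np.tril(toeplitz(aif))

    # Truncated-SVD deconvolution: zero singular values below a fixed
    # fraction of the largest one (a common regularization heuristic).
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)
    residue_est = Vt.T @ (s_inv * (U.T @ tissue))
    print(f"estimated CBF ~ peak of recovered residue: {residue_est.max():.2f}")
    ```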

  17. Theoretical analysis of transcranial Hall-effect stimulation based on passive cable model

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Li, Xiao-Li

    2015-12-01

    Transcranial Hall-effect stimulation (THS) is a new stimulation method in which an ultrasonic wave in a static magnetic field generates an electric field in an area of interest, such as the brain, to modulate neuronal activities. However, the biophysical basis of stimulating the neurons remains unknown. To address this problem, we perform a theoretical analysis based on a passive cable model to investigate the THS mechanism of neurons. Nerve tissues are conductive; an ultrasonic wave can move ions embedded in the tissue in a static magnetic field to generate an electric field (due to the Lorentz force). In this study, a simulation model for an ultrasonically induced electric field in a static magnetic field is derived. Then, based on the passive cable model, the analytical solution for the voltage distribution in a nerve tissue is determined. The simulation results show that THS can generate a voltage capable of stimulating neurons. Because the THS method possesses a higher spatial resolution and a deeper penetration depth, it shows promise as a tool for treating or rehabilitating neuropsychiatric disorders. Project supported by the National Natural Science Foundation of China (Grant Nos. 61273063 and 61503321), the China Postdoctoral Science Foundation (Grant No. 2013M540215), the Natural Science Foundation of Hebei Province, China (Grant No. F2014203161), and the Youth Research Program of Yanshan University, China (Grant No. 02000134).

  18. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields.

    PubMed

    Whitford, Paul C; Noel, Jeffrey K; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y; Onuchic, José N

    2009-05-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Go) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase, and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a C(alpha) structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature, (2) folding mechanisms are robust to variations of the energetic parameters, (3) protein folding free-energy barriers can be manipulated through parametric modifications, (4) the global folding mechanisms in a C(alpha) model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model, and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Because this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function.

  19. An All-atom Structure-Based Potential for Proteins: Bridging Minimal Models with All-atom Empirical Forcefields

    PubMed Central

    Whitford, Paul C.; Noel, Jeffrey K.; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y.; Onuchic, José N.

    2012-01-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Gō) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a Cα structure-based model and an all-atom empirical forcefield. Key findings include 1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature 2) folding mechanisms are robust to variations of the energetic parameters 3) protein folding free energy barriers can be manipulated through parametric modifications 4) the global folding mechanisms in a Cα model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model 5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Since this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function. PMID:18837035

  20. An Empirical Study of Neural Network-Based Audience Response Technology in a Human Anatomy Course for Pharmacy Students.

    PubMed

    Fernández-Alemán, José Luis; López-González, Laura; González-Sequeros, Ofelia; Jayne, Chrisina; López-Jiménez, Juan José; Carrillo-de-Gea, Juan Manuel; Toval, Ambrosio

    2016-04-01

    This paper presents an empirical study of a formative neural network-based assessment approach by using mobile technology to provide pharmacy students with intelligent diagnostic feedback. An unsupervised learning algorithm was integrated with an audience response system called SIDRA in order to generate states that collect some commonality in responses to questions and add diagnostic feedback for guided learning. A total of 89 pharmacy students enrolled on a Human Anatomy course were taught using two different teaching methods. Forty-four students employed intelligent SIDRA (i-SIDRA), whereas 45 students received the same training but without using i-SIDRA. A statistically significant difference was found between the experimental group (i-SIDRA) and the control group (traditional learning methodology), with T (87) = 6.598, p < 0.001. In four MCQ tests, the difference between the number of correct answers in the first attempt and in the last attempt was also studied. A global effect size of 0.644 was achieved in the meta-analysis carried out. The students expressed satisfaction with the content provided by i-SIDRA and the methodology used during the process of learning anatomy (M = 4.59). The new empirical contribution presented in this paper allows instructors to perform post hoc analyses of each particular student's progress to ensure appropriate training.

  1. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of a backstepping-based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. The controller computes commands for three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for steering the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such a consensus-based control problem can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown. This research also investigates the cases where the communication
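
    A minimal numpy sketch of the graph-Laplacian consensus update that such cooperative control builds on; the three-UAV line topology, the scalar coordination state, and the step size are illustrative assumptions, and the dissertation's backstepping controller is not reproduced.

        import numpy as np

        def consensus_step(x, A, eps=0.1):
            """One discrete-time consensus update x_i += eps * sum_j
            a_ij (x_j - x_i), written with the graph Laplacian L."""
            L = np.diag(A.sum(axis=1)) - A
            return x - eps * (L @ x)

        A = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]])       # three UAVs in a line topology
        x = np.array([0.0, 5.0, 9.0])   # initial estimates of a target state
        for _ in range(100):
            x = consensus_step(x, A)
        print(x)                        # all entries approach agreement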

  2. Towards a critical evaluation of an empirical and volume-based solvation function for ligand docking

    PubMed Central

    Muniz, Heloisa S.

    2017-01-01

    Molecular docking is an important tool for the discovery of new biologically active molecules, given that the receptor structure is known. An excellent environment for the development of new methods and improvement of current methods is being provided by the rapid growth in the number of proteins with known structure. The evaluation of solvation energies stands out among the challenges for the modeling of receptor-ligand interactions, especially in the context of molecular docking, where a fast, though accurate, evaluation ought to be achieved. Here we evaluated a variation of the desolvation energy model proposed by Stouten (Stouten P.F.W. et al, Molecular Simulation, 1993, 10: 97–120), or SV model. The SV model showed a linear correlation with experimentally determined solvation energies, as available in the database FreeSolv. However, when used in retrospective docking simulations using the benchmarks DUD, charge-matched DUD and DUD-Enhanced, the SV model resulted in poorer enrichments when compared to a pure force field model with no correction for solvation effects. The data provided here are consistent with other empirical solvation models employed in the context of molecular docking and indicate that a good model to account for solvent effects is still a goal to achieve. On the other hand, despite the inability to improve the enrichment of retrospective simulations, the SV solvation model showed an interesting ability to reduce the number of molecules with net charge -2 and -3 e among the top-scored molecules in a prospective test. PMID:28323889

  3. Changing Healthcare Providers’ Behavior during Pediatric Inductions with an Empirically-based Intervention

    PubMed Central

    Martin, Sarah R.; Chorney, Jill MacLaren; Tan, Edwin T.; Fortier, Michelle A.; Blount, Ronald L.; Wald, Samuel H.; Shapiro, Nina L.; Strom, Suzanne L.; Patel, Swati; Kain, Zeev N.

    2011-01-01

    Background Each year over 4 million children experience significant levels of preoperative anxiety, which has been linked to poor recovery outcomes. Healthcare providers (HCP) and parents represent key resources for children to help them manage their preoperative anxiety. The present study reports on the development and preliminary feasibility testing of a new intervention designed to change HCP and parent perioperative behaviors that have been previously reported to be associated with children’s coping and stress behaviors before surgery. Methods An empirically-derived intervention, Provider-Tailored Intervention for Perioperative Stress, was developed to train HCPs to increase behaviors that promote children’s coping and decrease behaviors that may exacerbate children’s distress. Rates of HCP behaviors were coded and compared between pre-intervention and post-intervention. Additionally, rates of parents’ behaviors were compared between those who interacted with HCPs before training and those interacting with HCPs post-intervention. Results Effect sizes indicated that HCPs who underwent training demonstrated increases in rates of desired behaviors (range: 0.22 to 1.49) and decreases in rates of undesired behaviors (range: 0.15 to 2.15). Additionally, parents, who were indirectly trained, also demonstrated changes to their rates of desired (range: 0.30 to 0.60) and undesired behaviors (range: 0.16 to 0.61). Conclusions The intervention successfully modified HCP and parent behaviors. It represents a potentially new clinical way to decrease anxiety in children. A multi-site randomized controlled trial, recently funded by the National Institute of Child Health and Human Development, is about to start and will examine the efficacy of this intervention in reducing children’s preoperative anxiety and improving children’s postoperative recovery. PMID:21606826

  4. Combining Empirical Relationships with Data Based Mechanistic Modeling to Inform Solute Tracer Investigations across Stream Orders

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.; Covino, T. P.; Mortensen, J.

    2015-12-01

    Solute transport studies in streams and rivers often begin with the introduction of conservative and reactive tracers into the water column. Information on the transport of these substances is then captured within tracer breakthrough curves (BTCs) and used to estimate, for instance, travel times and dissolved nutrient and carbon dynamics. Traditionally, these investigations have been limited to systems with small discharges (< 200 L/s) and with small reach lengths (< 500 m), partly due to the need for a priori information about the reach's hydraulic characteristics (e.g., channel geometry, resistance and dispersion coefficients) to predict arrival times, times to peak concentrations of the solute and mean travel times. Current techniques to acquire these channel characteristics through preliminary tracer injections become cost prohibitive at higher stream orders, and the use of semi-continuous water quality sensors for collecting real-time information may be affected by erroneous readings caused by high turbidity (e.g., nitrate signals with SUNA instruments or fluorescence measures) and/or high total dissolved solids (e.g., making the use of salt tracers such as NaCl prohibitively expensive) in larger systems. Additionally, a successful time-of-travel study is valuable for only a single discharge and river stage. We have developed a method to predict tracer BTCs to inform sampling frequencies at small and large stream orders using empirical relationships developed from multiple tracer injections spanning several orders of magnitude in discharge and reach length. This method was successfully tested in 1st to 8th order systems along the Middle Rio Grande River Basin in New Mexico, USA.

  5. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    PubMed

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and application of a signal minimisation algorithm on selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in a 50 mm thick coarse grain austenitic stainless steel specimen. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB extra enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm with the EEMD-processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. This methodology was further employed for successful imaging of defects in a B-scan.
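
    A minimal sketch of the decompose-select-reconstruct step in Python, assuming the third-party PyEMD package for EEMD; the synthetic A-scan, sampling rate, noise level, and retained IMF range are all illustrative assumptions, and the paper's signal minimisation algorithm is not reproduced.

        import numpy as np
        from PyEMD import EEMD   # third-party package, assumed installed

        # Synthetic A-scan: a 5 MHz Gaussian-windowed echo buried in
        # broadband noise standing in for coarse-grain backscatter.
        fs = 100e6
        t = np.arange(2048) / fs
        rng = np.random.default_rng(0)
        echo = np.exp(-((t - 8e-6) / 0.3e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
        ascan = echo + 0.8 * rng.standard_normal(t.size)

        # Decompose into IMFs, then rebuild from a mid-band IMF range
        # (chosen here by inspection; the paper fixes the range per probe
        # frequency and sampling rate).
        imfs = EEMD(trials=50).eemd(ascan, t)
        denoised = imfs[1:4].sum(axis=0)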

  6. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    SciTech Connect

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-09-15

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  7. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
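
    A toy sketch of the two-mode-to-one-mode construction in Python with networkx: companies and shareholders form the affiliation network, and projecting onto one node set yields the derivative one-mode network. All names are hypothetical, and the weighting is the simple co-membership count, not the paper's shareholding-weighted scheme.

        import networkx as nx
        from networkx.algorithms import bipartite

        B = nx.Graph()
        companies = ["CompanyA", "CompanyB", "CompanyC"]   # hypothetical
        holders = ["Fund1", "Fund2", "Fund3"]              # hypothetical
        B.add_nodes_from(companies, bipartite=0)
        B.add_nodes_from(holders, bipartite=1)
        B.add_edges_from([("Fund1", "CompanyA"), ("Fund1", "CompanyB"),
                          ("Fund2", "CompanyB"), ("Fund2", "CompanyC"),
                          ("Fund3", "CompanyA")])

        # One-mode projection onto the shareholder set: edge weights count
        # the companies a pair of holders have in common.
        G = bipartite.weighted_projected_graph(B, holders)
        print(G.edges(data=True))   # Fund1-Fund2 share CompanyB, etc.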

  8. Data-based empirical model reduction as an approach to data mining

    NASA Astrophysics Data System (ADS)

    Ghil, M.

    2012-12-01

    Science is very much about finding order in chaos, patterns in oodles of data, signal in noise, and so on. One can see any scientific description as a model of the data, whether verbal, statistical or dynamical. In this talk, I will provide an approach to such descriptions that relies on constructing nonlinear, stochastically forced models, via empirical model reduction (EMR). EMR constructs a low-order nonlinear system of prognostic equations driven by stochastic forcing; it estimates both the dynamical operator and the properties of the driving noise directly from observations or from a high-order model's simulation. The multi-level EMR structure for modeling the stochastic forcing allows one to capture feedback between high- and low-frequency components of the variability, thus parameterizing the "fast scales," often referred to as the "noise," in terms of the memory of the "slow" scales, referred to as the "signal." EMR models have been shown to capture quite well features of the high-dimensional data sets involved, in the frequency domain as well as in the spatial domain. Illustrative examples will involve capturing correctly patterns in data sets that are either purely observational or generated by high-end models. They will be selected from intraseasonal variability of the mid-latitude atmosphere, seasonal-to-interannual variability of the sea surface temperature field, and air-sea interaction in the Southern Ocean. The work described in this talk is joint with M.D. Chekroun, D. Kondrashov, S. Kravtsov, and A.W. Robertson. Recent results on using a modified and improved form of EMR modeling for predictive purposes will be provided in a separate talk by D. Kondrashov, M. Chekroun and M. Ghil on "Data-Driven Model Reduction and Climate Prediction: Nonlinear Stochastic, Energy-Conserving Models With Memory Effects."
    [Figure caption: Detailed budget of mean phase-space tendencies for the plane spanned by EOFs 1 and 4 of an intermediate-complexity model of mid-latitude flow.]

  9. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sitting in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors.
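
    A minimal numpy sketch of the core idea, estimating breaths per minute from a chest-distance time series via its dominant spectral peak; the frame rate, the frequency band, and the synthetic signal are illustrative assumptions, not the authors' processing chain.

        import numpy as np

        def respiratory_rate(chest_depth, fs):
            """Estimate breaths/min from a chest-distance series by taking
            the dominant spectral peak in the 0.1-0.7 Hz band."""
            x = chest_depth - np.mean(chest_depth)    # drop static distance
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            spectrum = np.abs(np.fft.rfft(x))
            band = (freqs >= 0.1) & (freqs <= 0.7)    # ~6 to 42 breaths/min
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        # 30 s of synthetic depth data at 30 fps: 15 breaths/min plus noise.
        fs = 30.0
        t = np.arange(0, 30, 1.0 / fs)
        rng = np.random.default_rng(0)
        depth = 900 + 5 * np.sin(2 * np.pi * 0.25 * t) + rng.standard_normal(t.size)
        print(respiratory_rate(depth, fs))            # close to 15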

  10. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sitting in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383

  11. Theoretical Evaluation of Electroactive Polymer Based Micropump Diaphragm for Air Flow Control

    NASA Technical Reports Server (NTRS)

    Xu, Tian-Bing; Su, Ji; Zhang, Qiming

    2004-01-01

    An electroactive polymer (EAP), high energy electron irradiated poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] copolymer, based actuation micropump diaphragm (PAMPD) has been developed for air flow control. The displacement strokes and profiles have been characterized as a function of the amplitude and frequency of the electric field. The volume stroke rates (volume rate) as a function of electric field and driving frequency have also been theoretically evaluated. The PAMPD exhibits a high volume rate, which is easily tuned by varying either the amplitude or the frequency of the applied electric field. In addition, the performance of the diaphragms was modeled, and the agreement between the modeling results and experimental data confirms that the response of the diaphragms follows the design parameters. The results demonstrate that the diaphragm can fit some future aerospace applications, replacing traditional complex mechanical systems, increasing control capability and reducing the weight of future air dynamic control systems. Keywords: electroactive polymer (EAP), micropump, diaphragm, actuation, displacement, volume rate, pumping speed, clamping ratio.

  12. Experimental and theoretical spectroscopic study and structural determination of nickel(II) tridentate Schiff base complexes.

    PubMed

    Kianfar, Ali Hossein; Farrokhpour, Hossein; Dehghani, Parin; Khavasi, Hamid Reza

    2015-11-05

    Some new complexes of [NiL(PR3)] (where L = (E)-1-[(2-amino-5-nitrophenyl)iminio-methyl]naphthalene-2-olate (L(1)) or (E)-1-[(2-hydroxyphenyl)iminio-methyl]naphthalene-2-olate (L(2)), and R = Bu or Ph) containing tridentate ONN and ONO Schiff bases were synthesized and characterized by IR, UV-Vis and (1)H-NMR spectroscopy and elemental analysis. The geometry of the [NiL(1)(PBu3)] and [NiL(2)(PBu3)] complexes was determined by X-ray crystallography, which indicated that the complexes are four-coordinate with a square-planar structure in the solid state. Theoretical calculations were also performed to optimize the structures of the ligands and complexes in the gas phase and in ethanol solvent, separately, to confirm the structures proposed by X-ray crystallography. In addition, UV-Visible and IR spectra of the complexes were calculated and compared with the corresponding experimental spectra to complete the experimental structural identification.

  13. Margins of freedom: a field-theoretic approach to class-based health dispositions and practices.

    PubMed

    Burnett, Patrick John; Veenstra, Gerry

    2017-03-23

    Pierre Bourdieu's theory of practice situates social practices in the relational interplay between experiential mental phenomena (habitus), resources (capitals) and objective social structures (fields). When applied to class-based practices in particular, the overarching field of power within which social classes are potentially made manifest is the primary field of interest. Applying relational statistical techniques to original survey data from Toronto and Vancouver, Canada, we investigated whether smoking, engaging in physical activity and consuming fruit and vegetables are dispersed in a three-dimensional field of power shaped by economic and cultural capitals and cultural dispositions and practices. We find that aesthetic dispositions and flexibility of developing and established dispositions are associated with positioning in the Canadian field of power and embedded in the logics of the health practices dispersed in the field. From this field-theoretic perspective, behavioural change requires the disruption of existing relations of harmony between the habitus of agents, the fields within which the practices are enacted and the capitals that inform and enforce the mores and regularities of the fields. The three-dimensional model can be explored at: http://relational-health.ca/margins-freedom.

  14. Theoretical investigation of hydrogen transfer mechanism in the guanine cytosine base pair

    NASA Astrophysics Data System (ADS)

    Villani, Giovanni

    2006-05-01

    We have studied the quantum dynamics of the hydrogen bonds in the guanine-cytosine base pair. Depending on the position of the hydrogen atoms, different tautomers are possible: the stable Watson-Crick G-C, the imino-enol G*-C*, the imino-enol-imino-enol G#-C# and some zwitterionic structures. The common idea in the literature is that only the G-C and G*-C* tautomers are stable, with the G-C → G*-C* transition probability estimated at 10^-6 to 10^-9 with the help of Boltzmann statistics. Here we show a detailed quantum theoretical study that suggests the following conclusion: G-C is the most stable tautomer, some partially charged systems (due to the movement of only one hydrogen atom) are important, and a large amount of the imino-enol G*-C* tautomer (and less of the imino-enol-imino-enol G#-C# structure) is present at any time. The corresponding transition probabilities between the different tautomers are not due to thermal passage; they are a pure quantum phenomenon. These large probabilities definitively disprove the idea of these tautomers as mutation points. The mechanisms of passage from the G-C tautomer to the others have also been investigated.

  15. A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems.

    PubMed

    Merrick, Kathryn E; Shafi, Kamran

    2013-01-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots.
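
    One purely illustrative reading of the "optimally motivating incentive" idea, as a numpy sketch: a player re-scores each payoff by its closeness to a preferred incentive level I, perceiving u'(p) = -|p - I|, so different I values turn the same mixed-motive game into different perceived games. The transformation, the payoff matrix, and the interpretation here are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        # Row player's payoffs in a prisoner's-dilemma-like game
        # (rows: cooperate/defect; columns: opponent cooperates/defects).
        payoffs = np.array([[3, 0],
                            [5, 1]])

        def perceived(payoffs, incentive):
            """Re-score payoffs by closeness to the preferred incentive."""
            return -np.abs(payoffs - incentive)

        print(perceived(payoffs, incentive=5))   # high-incentive view
        print(perceived(payoffs, incentive=3))   # moderate-incentive view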

  16. [Nursing practice based on theoretical models: a qualitative study of nurses' perception].

    PubMed

    Amaducci, Giovanna; Iemmi, Marina; Prandi, Marzia; Saffioti, Angelina; Carpanoni, Marika; Mecugni, Daniela

    2013-01-01

    Many faculty argue that theory and theorizing are closely related to clinical practice, that disciplinary knowledge grows most relevantly from the specific care context in which it takes place, and, moreover, that knowledge does not proceed only by applying the general principles of grand theories to specific cases. Every nurse, in fact, has a mental model, of which he or she may or may not be aware, that motivates and substantiates every professional action and choice. The study describes what the nursing theoretical model is, together with the mental model and the tacit knowledge underlying it. It identifies the explicit theoretical model of the professional group that represents the nursing participants, and the aspects of continuity with the theoretical model proposed by this degree course in Nursing. Methods: Four focus groups were held, attended by a total of 22 nurses representing almost every unit of the Reggio Emilia Hospital. We argue that the theoretical nursing model of each professional group is the result of tacit knowledge, which helps to define the personal mental model, and of the explicit theoretical model, whose underlying theoretical content is learned, applied consciously, and fed back to and from nursing practice. Reasoning on the use of theory in practice has allowed us to give visibility to an explicit nursing theoretical model authentically oriented to the needs of the person, in all its complexity, in specific contexts.

  17. Meta-Analysis of Group Learning Activities: Empirically Based Teaching Recommendations

    ERIC Educational Resources Information Center

    Tomcho, Thomas J.; Foels, Rob

    2012-01-01

    Teaching researchers commonly employ group-based collaborative learning approaches in Teaching of Psychology teaching activities. However, the authors know relatively little about the effectiveness of group-based activities in relation to known psychological processes associated with group dynamics. Therefore, the authors conducted a meta-analytic…

  18. Formula-Based Public School Funding System in Victoria: An Empirical Analysis of Equity

    ERIC Educational Resources Information Center

    Bandaranayake, Bandara

    2013-01-01

    This article explores the formula-based school funding system in the state of Victoria, Australia, where state funds are directly allocated to schools based on a range of equity measures. The impact of Victoria's funding system for education in terms of alleviating inequality and disadvantage is contentious, to say the least. It is difficult to…

  19. Probabilistic Algorithms, Integration, and Empirical Evaluation for Disambiguating Multiple Selections in Frustum-Based Pointing

    DTIC Science & Technology

    2006-06-01

    generated and is used for processing selections. Kolsch et al. [11] developed a real-time hand gesture recognition system that can act as the sole...576–583. [19] G. Schmidt and D. House, “Model-based motion filtering for improving arm gesture recognition performance,” in Gesture-based

  20. An Empirical Analysis of the Antecedents of Web-Based Learning Continuance

    ERIC Educational Resources Information Center

    Chiu, Chao-Min; Sun, Szu-Yuan; Sun, Pei-Chen; Ju, Teresa L.

    2007-01-01

    Like any other product, service and Web-based application, the success of Web-based learning depends largely on learners' satisfaction and other factors that will eventually increase learners' intention to continue using it. This paper integrates the concept of subjective task value and fairness theory to construct a model for investigating the…

  1. Text-Based On-Line Conferencing: A Conceptual and Empirical Analysis Using a Minimal Prototype.

    ERIC Educational Resources Information Center

    McCarthy, John C.; And Others

    1993-01-01

    Analyzes requirements for text-based online conferencing through the use of a minimal prototype. Topics discussed include prototyping with a minimal system; text-based communication; the system as a message passer versus the system as a shared data structure; and three exercises that showed how users worked with the prototype. (Contains 61…

  2. An Empirical Study of Instructor Adoption of Web-Based Learning Systems

    ERIC Educational Resources Information Center

    Wang, Wei-Tsong; Wang, Chun-Chieh

    2009-01-01

    For years, web-based learning systems have been widely employed in both educational and non-educational institutions. Although web-based learning systems are emerging as a useful tool for facilitating teaching and learning activities, the number of users is not increasing as fast as expected. This study develops an integrated model of instructor…

  3. Theoretical Issues

    SciTech Connect

    Marc Vanderhaeghen

    2007-04-01

    The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.

  4. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) for features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.

  5. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    PubMed

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer-mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some of the results obtained are presented.
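
    The record names a Kalman filter for jitter removal; a minimal one-axis sketch in numpy follows, using a constant-position state model. The process and measurement noise values are illustrative assumptions, not Emotiv's or the authors' settings.

        import numpy as np

        def kalman_smooth(z, q=1e-3, r=0.5):
            """Scalar Kalman filter: the state is the true cursor position
            on one axis, z the jittery gyroscope-derived measurement; q and
            r are process and measurement noise variances (assumed)."""
            x, p = z[0], 1.0
            out = np.empty_like(z)
            for k, meas in enumerate(z):
                p = p + q                  # predict: position random walk
                g = p / (p + r)            # Kalman gain
                x = x + g * (meas - x)     # correct with the measurement
                p = (1.0 - g) * p
                out[k] = x
            return out

        rng = np.random.default_rng(0)
        jittery = np.cumsum(rng.standard_normal(200)) + 2 * rng.standard_normal(200)
        smooth = kalman_smooth(jittery)    # de-jittered cursor trajectory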

  6. Re-reading nursing and re-writing practice: towards an empirically based reformulation of the nursing mandate.

    PubMed

    Allen, Davina

    2004-12-01

    This article examines field studies of nursing work published in the English language between 1993 and 2003 as the first step towards an empirically based reformulation of the nursing mandate. A decade of ethnographic research reveals that, contrary to contemporary theories which promote an image of nursing work centred on individualised unmediated caring relationships, in real-life practice the core nursing contribution is that of the healthcare mediator. Eight bundles of activity that comprise this intermediary role are described utilising evidence from the literature. The mismatch between nursing's culture and ideals and the structure and constraints of the work setting is a chronic source of practitioner dissatisfaction. It is argued that the profession has little to gain by pursuing an agenda of holistic patient care centred on emotional intimacy and that an alternative occupational mandate focused on the healthcare mediator function might make for more humane health services and a more viable professional future.

  7. Empirically Supported Treatments in Psychotherapy: Towards an Evidence-Based or Evidence-Biased Psychology in Clinical Settings?

    PubMed Central

    Castelnuovo, Gianluca

    2010-01-01

    The field of research and practice in psychotherapy has been deeply influenced by two different approaches: the empirically supported treatments (ESTs) movement, linked with the evidence-based medicine (EBM) perspective, and the “Common Factors” approach, typically connected with the “Dodo Bird Verdict”. Regarding the first perspective, a list of ESTs has been established in the mental health field since 1998, and criteria for “well-established” and “probably efficacious” treatments have arisen. The development of these kinds of paradigms was motivated by the emergence of a “managerial” approach and of related remuneration systems, also for mental health providers and insurance companies. In this article, ESTs are presented, also underlining some possible criticisms. Finally, complementary approaches that could add different evidence to psychotherapy research, in comparison with the traditional EBM approach, are presented. PMID:21833197

  8. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    PubMed

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted due to a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). This investigation was applied to both simulated data and in-vivo (1)H-MRS human brain signals. The results demonstrate the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.

  9. Gyroscope-Driven Mouse Pointer with an EMOTIV® EEG Headset and Data Analysis Based on Empirical Mode Decomposition

    PubMed Central

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-01-01

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer-mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some of the results obtained are presented. PMID:23948873

  10. Microarray missing data imputation based on a set theoretic framework and biological knowledge

    PubMed Central

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2006-01-01

    Gene expressions measured using microarrays usually suffer from the missing value problem. However, in many data analysis methods, a complete data matrix is required. Although existing missing value imputation algorithms have shown good performance to deal with missing values, they also have their limitations. For example, some algorithms have good performance only when strong local correlation exists in data, while some provide the best estimate when data is dominated by global structure. In addition, these algorithms do not take into account any biological constraint in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge into a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets, taking into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods. PMID:16549873
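
    A minimal numpy sketch of the core POCS iteration, alternating projections onto two convex sets: C1, the matrices agreeing with the observed entries, and C2, the matrices whose column means match the observed per-column means (an affine, hence convex, stand-in; the paper's correlation- and synchronization-based sets are richer).

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(6, 8))                 # synthetic expression data
        mask = rng.random(X.shape) > 0.2            # True where value observed

        observed = np.where(mask, X, np.nan)
        col_target = np.nanmean(observed, axis=0)   # per-column observed mean

        est = np.where(mask, X, 0.0)                # start: missing entries at 0
        for _ in range(50):
            est = est + (col_target - est.mean(axis=0))  # project onto C2
            est = np.where(mask, X, est)                 # project onto C1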

  11. Theoretical studies of hydrogen bonding in water cyanides and in the base pair Gu-Cy

    NASA Astrophysics Data System (ADS)

    Rivelino, Roberto; Ludwig, Valdemir; Rissi, Eduardo; Canuto, Sylvio

    2002-09-01

    Density-functional (DFT) and many-body-perturbation theories (MBPT/CC) are used to study the hydrogen bonding in the water-cyanide complexes H-CN⋯H2O, H3C-CN⋯H2O and (CH3)3C-CN⋯H2O. Structures, binding energies and changes in vibrational frequencies are analyzed. The calculated CN stretching frequency is found to shift to the blue upon complexation in H-CN⋯H2O and H3C-CN⋯H2O. To investigate electron correlation effects on the binding energies of these complexes, single-point calculations are performed at the MBPT/CC (MP2, MP3, MP4, CCSD and CCSD(T)) levels using the optimized MP2 geometries. Binding energies are also obtained at different levels of DFT (B3LYP and PW91) and compared with the MBPT/CC results. All calculations include corrections for basis set superposition error (BSSE) and zero-point vibrational energies. Additionally, the triple hydrogen-bonded guanine-cytosine (Gu-Cy) base pair is analyzed. The binding energy of the Watson-Crick model for Gu-Cy is calculated using Hartree-Fock and DFT (B3LYP and BP86) methods. The results for the hydrogen bonding distances and binding energies are in good agreement with experimental and recent theoretical values. The calculated dipole moment of the Gu-Cy complex is compared with the direct vector sum of those of the isolated bases. After taking into account BSSE effects, we find that the electron polarization due to the hydrogen bonding leads to an increase of ~20% in the calculated dipole moment of the complex.

  12. Network-Based Enriched Gene Subnetwork Identification: A Game-Theoretic Approach.

    PubMed

    Razi, Abolfazl; Afghah, Fatemeh; Singh, Salendra; Varadan, Vinay

    2016-01-01

    Identifying subsets of genes that jointly mediate cancer etiology, progression, or therapy response remains a challenging problem due to the complexity and heterogeneity in cancer biology, a problem further exacerbated by the relatively small number of cancer samples profiled as compared with the sheer number of potential molecular factors involved. Pure data-driven methods that merely rely on multiomics data have been successful in discovering potentially functional genes but suffer from high false-positive rates and tend to report subsets of genes whose biological interrelationships are unclear. Recently, integrative data-driven models have been developed to integrate multiomics data with signaling pathway networks in order to identify pathways associated with clinical or biological phenotypes. However, these approaches suffer from an important drawback of being restricted to previously discovered pathway structures and miss novel genomic interactions as well as potential crosstalk among the pathways. In this article, we propose a novel coalition-based game-theoretic approach to overcome the challenge of identifying biologically relevant gene subnetworks associated with disease phenotypes. The algorithm starts from a set of seed genes and traverses a protein-protein interaction network to identify modulated subnetworks. The optimal set of modulated subnetworks is identified using Shapley value that accounts for both individual and collective utility of the subnetwork of genes. The algorithm is applied to two illustrative applications, including the identification of subnetworks associated with (i) disease progression risk in response to platinum-based therapy in ovarian cancer and (ii) immune infiltration in triple-negative breast cancer. The results demonstrate an improved predictive power of the proposed method when compared with state-of-the-art feature selection methods, with the added advantage of identifying novel potentially functional gene subnetworks
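
    For scale, an exact Shapley value computation in Python on a toy three-"gene" coalition game; the utility function is a stand-in for a subnetwork's joint association with phenotype, and exhaustive computation like this is only feasible for very small sets.

        import itertools
        import math

        def shapley_values(players, value):
            """phi_i = sum over coalitions S not containing i of
            |S|! (n-|S|-1)! / n! * (v(S + {i}) - v(S)); exponential in n."""
            n = len(players)
            phi = {p: 0.0 for p in players}
            for p in players:
                others = [q for q in players if q != p]
                for k in range(n):
                    for S in itertools.combinations(others, k):
                        w = (math.factorial(k) * math.factorial(n - k - 1)
                             / math.factorial(n))
                        phi[p] += w * (value(frozenset(S) | {p})
                                       - value(frozenset(S)))
            return phi

        # Toy utility: a coalition is worth the square of its size.
        genes = ["TP53", "BRCA1", "MYC"]        # hypothetical seed genes
        print(shapley_values(genes, lambda S: len(S) ** 2))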

  13. Restoration of images degraded by signal-dependent noise based on energy minimization: an empirical study

    NASA Astrophysics Data System (ADS)

    Bajić, Buda; Lindblad, Joakim; Sladoje, Nataša

    2016-07-01

    Most energy minimization-based restoration methods are developed for signal-independent Gaussian noise. The assumption of a Gaussian noise distribution leads to a quadratic data fidelity term, which is appealing in optimization. When an image is acquired with a photon counting device, it contains signal-dependent Poisson or mixed Poisson-Gaussian noise. We quantify the loss in performance that occurs when a restoration method suited for Gaussian noise is utilized for mixed noise. Signal-dependent noise can be treated by methods based on either the classical maximum a posteriori (MAP) probability approach or on a variance stabilization approach (VST). We compare the performances of these approaches on a large set of images and observe that VST-based methods outperform those based on MAP in both quality of restoration and computational efficiency. We quantify the improvement achieved by utilizing Huber regularization instead of classical total variation regularization. The conclusion from our study is a recommendation to utilize a VST-based approach combined with regularization by the Huber potential for restoration of images degraded by blur and signal-dependent noise. This combination provides a robust and flexible method with good performance and high speed.
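
    A minimal numpy sketch of the VST idea for pure Poisson noise, via the Anscombe transform; the generalized transform for mixed Poisson-Gaussian noise and the Huber-regularized restorer itself are not reproduced here.

        import numpy as np

        def anscombe(x):
            """Anscombe transform: maps Poisson counts to approximately
            unit-variance Gaussian data, after which a restoration method
            designed for Gaussian noise can be applied."""
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Direct algebraic inverse (the unbiased inverse used in
            practice is slightly different)."""
            return (y / 2.0) ** 2 - 3.0 / 8.0

        counts = np.random.default_rng(0).poisson(lam=20.0, size=100_000)
        print(anscombe(counts).std())   # close to 1 for moderate lam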

  14. New theoretical expressions for the five adsorption type isotherms classified by BET based on statistical physics treatment.

    PubMed

    Khalfaoui, M; Knani, S; Hachicha, M A; Lamine, A Ben

    2003-07-15

    New theoretical expressions to model the five adsorption isotherm types have been established. Using the grand canonical ensemble in statistical physics, we give an analytical expression for each of the five physical adsorption isotherm types classified by Brunauer, Emmett, and Teller, often called BET isotherms. The establishment of these expressions is based on statistical physics and theoretical considerations. This method allowed estimation of all the mathematical parameters in the models. The physicochemical parameters involved in the adsorption process that appear in the models could be deduced directly from the experimental adsorption isotherms by numerical simulation. We determine the adequate model for each type of isotherm, which fixes, by direct numerical simulation, the monolayer, multilayer, or condensation character. The new equations are discussed and the results obtained are verified against experimental data from the literature. The new theoretical expressions that we have proposed, based on statistical physics treatment, are rather powerful for better understanding and interpreting the five physical adsorption isotherm types at a microscopic level.
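
    For orientation, the classical BET equation that the paper generalizes, as a short Python sketch; v_m is the monolayer capacity, c the BET energy constant, and x = p/p0 the relative pressure. The paper's new statistical-physics expressions are not reproduced here.

        import numpy as np

        def bet_isotherm(x, v_m, c):
            """Classical BET multilayer isotherm: adsorbed amount v as a
            function of relative pressure x = p/p0 (valid for x < 1)."""
            x = np.asarray(x, dtype=float)
            return v_m * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

        x = np.linspace(0.05, 0.35, 4)
        print(bet_isotherm(x, v_m=1.0, c=100.0))   # type II-like knee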

  15. Preparation, Practice, and Performance: An Empirical Examination of the Impact of Standards-Based Instruction on Secondary Students' Math and Science Achievement

    ERIC Educational Resources Information Center

    Thompson, Carla J.

    2009-01-01

    For almost two decades proponents of educational reform have advocated the use of standards-based education in maths and science classrooms for improving teacher practices, increasing student learning, and raising the quality of maths and science instruction. This study empirically examined the impact of specific standards-based teacher…

  16. Theoretical analysis for scaling law of thermal blooming based on optical phase deference

    NASA Astrophysics Data System (ADS)

    Sun, Yunqiang; Huang, Zhilong; Ren, Zebin; Chen, Zhiqiang; Guo, Longde; Xi, Fengjie

    2016-10-01

    In order to explore the influence of the thermal blooming effect on laser propagation in pipe flow and to analyze the influencing factors, a scaling-law theoretical analysis of the thermal blooming effects in pipe flow is carried out in detail, based on the optical path difference caused by thermal blooming in pipe flow. Firstly, by solving the energy coupling equation of laser beam propagation, the temperature of the flow is obtained, and the optical path difference caused by thermal blooming is then deduced. Through analysis of the influence of pipe size, flow field and laser parameters on the optical path difference, the energy scaling parameter N_e = n_T α L P R^2/(ρ ε C_p π R_0^2) and the geometric scaling parameter N_c = ν R^2/(ε L) of thermal blooming for pipe flow are derived. Secondly, for the direct solution method, the energy coupled equations have analytic solutions only for a straight tube with a Gaussian beam. Considering the limitation of directly solving the coupled equations, a dimensionless analysis method is adopted, also based on the change of the optical path difference; the same scaling parameters for pipe-flow thermal blooming are derived, which gives the energy scaling parameter N_e and the geometric scaling parameter N_c good universality. The research results indicate that when the laser power and the laser beam diameter are changed, the thermal blooming effects of the pipeline axial flow, as measured by the optical path difference, do not change, as long as the energy scaling parameter is kept constant. When the diameter or length of the pipe changes, as long as the geometric scaling parameter is kept constant, the optical-path-difference distribution caused by thermal blooming of the axial gas flow in the pipeline does not change. That is to say, when the pipe size and laser parameters change, if the two scaling parameters are kept constant, the thermal blooming effects in the pipeline axial flow, as characterized by the optical path difference, do not change. Therefore, the energy scaling

  17. Teaching Standards-Based Group Work Competencies to Social Work Students: An Empirical Examination

    ERIC Educational Resources Information Center

    Macgowan, Mark J.; Vakharia, Sheila P.

    2012-01-01

    Objectives: Accreditation standards and challenges in group work education require competency-based approaches in teaching social work with groups. The Association for the Advancement of Social Work with Groups developed Standards for Social Work Practice with Groups, which serve as foundation competencies for professional practice. However, there…

  18. Inequality of Higher Education in China: An Empirical Test Based on the Perspective of Relative Deprivation

    ERIC Educational Resources Information Center

    Hou, Liming

    2014-01-01

    The primary goal of this paper is to examine what makes Chinese college students dissatisfied with entrance opportunities for higher education. Based on the author's survey data, we test two parameters that could be potential causes of this dissatisfaction: 1) distributive inequality, which emphasizes the individual's dissatisfaction caused by…

  19. Homogeneity in Community-Based Rape Prevention Programs: Empirical Evidence of Institutional Isomorphism

    ERIC Educational Resources Information Center

    Townsend, Stephanie M.; Campbell, Rebecca

    2007-01-01

    This study examined the practices of 24 community-based rape prevention programs. Although these programs were geographically dispersed throughout one state, they were remarkably similar in their approach to rape prevention programming. DiMaggio and Powell's (1991) theory of institutional isomorphism was used to explain the underlying causes of…

  20. Empirical Investigation into Motives for Choosing Web-Based Distance Learning Programs

    ERIC Educational Resources Information Center

    Alkhattabi, Mona

    2016-01-01

    Today, in association with rapid social and economic changes, there is an increasing level of demand for distance and online learning programs. This study will focus on identifying the main motivational factors for choosing a web-based distance-learning program. Moreover, it will investigate how these factors relate to age, gender, marital status…

  1. An Adaptive E-Learning System Based on Students' Learning Styles: An Empirical Study

    ERIC Educational Resources Information Center

    Drissi, Samia; Amirat, Abdelkrim

    2016-01-01

    Personalized e-learning implementation is recognized as one of the most interesting research areas in distance web-based education. Since the learning style of each learner is different, e-learning must be fitted to the different needs of learners. This paper presents an approach to integrate learning styles into adaptive e-learning hypermedia.…

  2. Introducing Evidence-Based Principles to Guide Collaborative Approaches to Evaluation: Results of an Empirical Process

    ERIC Educational Resources Information Center

    Shulha, Lyn M.; Whitmore, Elizabeth; Cousins, J. Bradley; Gilbert, Nathalie; al Hudib, Hind

    2016-01-01

    This article introduces a set of evidence-based principles to guide evaluation practice in contexts where evaluation knowledge is collaboratively produced by evaluators and stakeholders. The data from this study evolved in four phases: two pilot phases exploring the desirability of developing a set of principles; an online questionnaire survey…

  3. Young Readers' Narratives Based on a Picture Book: Model Readers and Empirical Readers

    ERIC Educational Resources Information Center

    Hoel, Trude

    2015-01-01

    The article presents parts of a research project whose aim is to investigate six- to seven-year-old children's language use in storytelling. The children's oral texts are based on the wordless picture book "Frog, Where Are You?", which has been, and remains, a frequent tool for collecting narratives from children. The Frog story…

  4. Web-based Educational Media: Issues and Empirical Test of Learning.

    ERIC Educational Resources Information Center

    Radhakrishnan, Senthil; Bailey, James E.

    This paper addresses issues and cost benefits of World Wide Web-based education systems. It presents the results of an effort to identify problems that arise when considering this media and suggests conceptual solutions to some of these problems. To evaluate these solutions, a prototype system was built and tested in an engineering classroom; the…

  5. Theoretical spectroscopic study of seven zinc(II) complexes with macrocyclic Schiff-base ligands.

    PubMed

    Sayin, Koray; Kariper, Sultan Erkan; Sayin, Tuba Alagöz; Karakaş, Duran

    2014-12-10

    Seven zinc complexes, namely [ZnL(1)](2+), [ZnL(2)](2+), [ZnL(3)](2+), [ZnL(4)](2+), [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+), are studied theoretically. Structural parameters, vibration frequencies, electronic absorption spectra and (1)H and (13)C NMR spectra are obtained for the Zn(II) complexes of macrocyclic penta- and heptaaza Schiff-base ligands. Vibration spectra of the Zn(II) complexes are studied using Density Functional Theory (DFT) calculations at the B3LYP/LANL2DZ level. The UV-VIS and NMR spectra of the zinc complexes are obtained using the Time Dependent-Density Functional Theory (TD-DFT) method and the GIAO method, respectively. Agreement is found between the experimental data for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions and the calculated results. The geometries of the complexes are found to be distorted pentagonal planar for the [ZnL(1)](2+), [ZnL(2)](2+) and [ZnL(3)](2+) complex ions, distorted tetrahedral for the [ZnL(4)](2+) complex ion, and distorted pentagonal bipyramidal for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions. A ranking of biological activity is determined using quantum chemical parameters and is found to be: [ZnL(7)](2+)>[ZnL(6)](2+)>[ZnL(5)](2+)>[ZnL(3)](2+)>[ZnL(2)](2+)>[ZnL(1)](2+).

  6. Set-theoretic deconvolution (STD) for multichromatic ground/air/space-based imagery

    NASA Astrophysics Data System (ADS)

    Safronov, Aleksandr N.

    1997-09-01

    This paper proposes a class of nonlinear methods, called Set-Theoretic Deconvolution (STD), developed for joint restoration of M (M > 1) monochrome distorted 2-dimensional images (snapshots) of an unknown extended object viewed through an optical channel with unknown PSF, whose true monochrome brightness profiles look distinct at the M slightly different wavelengths chosen. The presented method appeals to the generalized Projection Onto Convex Sets (POCS) formalism: a proper projective metric is introduced and then minimized. A number of operators is thus derived in closed form and cyclically applied to an M-dimensional functional vector built up from estimates for combinations of monochrome images. In projecting the vector onto convex sets, one attempts to avoid non-physical inversion and to correctly form a feasible solution (fixed point) consistent with qualitative, not quantitative, information assumed to be known in advance. Computer simulation demonstrates that the resulting improved monochrome images reveal fine details which could not easily be discerned in the original distorted images. The technique recovers fairly reliably the total multichromatic 2-D portrait of an arbitrary compact object whose monochrome brightness distributions have discontinuities and are highly nonconvex and multiply connected. Originally developed for the deblurring of passively observed objects, the STD approach can be carried over to scenarios with actively irradiated objects (e.g., near-Earth space targets). Under advanced conditions, such as spatio-spectrally diversified laser illumination or coherent Doppler imaging, the synthesized loop deconvolver could be a universal tool for object feature extraction by means of occasionally aberrated space-borne telescopes or turbulence-affected ground/air-based large-aperture optical systems.
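
    The paper's closed-form STD operators are not reproduced in the abstract, so the sketch below only illustrates the generic POCS idea it builds on: cyclically project an estimate onto convex constraint sets, here a frequency-domain data-consistency set and a spatial non-negativity set. The OTF threshold and iteration count are arbitrary illustrative choices.

        import numpy as np

        def pocs_deblur(blurred, psf_otf, n_iter=50):
            """Generic POCS-style restoration sketch (not the paper's STD
            operators). `psf_otf` is the optical transfer function, i.e. the
            FFT of the PSF, with the same shape as `blurred`."""
            estimate = blurred.copy()
            B = np.fft.fft2(blurred)
            mask = np.abs(psf_otf) > 1e-3  # where the OTF carries usable signal
            for _ in range(n_iter):
                E = np.fft.fft2(estimate)
                # Projection 1: consistency with the measured data.
                E[mask] = B[mask] / psf_otf[mask]
                estimate = np.real(np.fft.ifft2(E))
                # Projection 2: the convex set of physically meaningful images.
                estimate = np.clip(estimate, 0.0, None)
            return estimate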

  7. Empirical Characteristics of Family-Based Linkage to a Complex Trait: the ADIPOQ Region and Adiponectin Levels

    PubMed Central

    Hellwege, Jacklyn N.; Palmer, Nicholette D.; Brown, W. Mark; Ziegler, Julie T.; An, S. Sandy; Guo, Xiuqing; Chen, Y.-D. Ida; Taylor, Kent; Hawkins, Gregory A.; Ng, Maggie C.Y.; Speliotes, Elizabeth K.; Lorenzo, Carlos; Norris, Jill M.; Rotter, Jerome I.; Wagenknecht, Lynne E.; Langefeld, Carl D.; Bowden, Donald W.

    2014-01-01

    We previously identified a low frequency (1.1%) coding variant (G45R; rs200573126) in the adiponectin gene (ADIPOQ) which was the basis for a multipoint microsatellite linkage signal (LOD = 8.2) for plasma adiponectin levels in Hispanic families. We have empirically evaluated the ability of data from targeted common variants, exome chip genotyping, and genome-wide association study (GWAS) data to detect linkage and association to adiponectin protein levels at this locus. Simple two-point linkage and association analyses were performed in 88 Hispanic families (1150 individuals) using 10,958 SNPs on chromosome 3. Approaches were compared for their ability to map the functional variant, G45R, which was strongly linked (two-point LOD = 20.98) and powerfully associated (p-value = 8.1×10⁻⁵⁰). Over 450 SNPs within a broad 61 Mb interval around rs200573126 showed nominal evidence of linkage (LOD > 3) but only four other SNPs in this region were associated with p-values < 1.0×10⁻⁴. When G45R was accounted for, the maximum LOD score across the interval dropped to 4.39 and the best p-value was 1.1×10⁻⁵. Linked and/or associated variants ranged in frequency (0.0018 to 0.50) and type (coding, non-coding) and had little detectable linkage disequilibrium with rs200573126 (r² < 0.20). In addition, the two-point linkage approach empirically outperformed multipoint microsatellite and multipoint SNP analysis. In the absence of data for rs200573126, family-based linkage analysis using a moderately dense SNP dataset, including both common and low frequency variants, resulted in stronger evidence for an adiponectin locus than association data alone. Thus, linkage analysis can be a useful tool to facilitate identification of high impact genetic variants. PMID:25447270

  8. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    PubMed Central

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho-kwang

    2014-01-01

    Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radii increase. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low temperature and high pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). The cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient cation radii have a higher transition pressure. PMID:25417655

  9. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, by themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.

  10. Patients’ Acceptance towards a Web-Based Personal Health Record System: An Empirical Study in Taiwan

    PubMed Central

    Liu, Chung-Feng; Tsai, Yung-Chieh; Jang, Fong-Lin

    2013-01-01

    The health care sector has become increasingly interested in developing personal health record (PHR) systems as an Internet-based telehealthcare implementation to improve the quality and decrease the cost of care. However, the factors that influence patients' intention to use PHR systems remain unclear. Based on physicians' therapeutic expertise, we implemented a web-based PHR system for infertile patients and proposed an extended Technology Acceptance Model (TAM) that integrates a physician-patient relationship (PPR) construct into TAM's original perceived ease of use (PEOU) and perceived usefulness (PU) constructs to explore which factors influence the behavioral intentions (BI) of infertile patients to use the PHR. From ninety participants at a medical center, 50 valid responses to a self-rating questionnaire were collected, yielding a response rate of 55.56%. The partial least squares (PLS) technique was used to assess the causal relationships hypothesized in the extended model. The results indicate that infertile patients expressed a moderately high intention to use the PHR system. The PPR and PU of patients had significant effects on their BI to use the PHR, whereas PEOU indirectly affected BI through PU. This investigation confirms that PPR can have a critical role in shaping patients' perceptions of the use of healthcare information technologies. Hence, we suggest that hospitals should promote the potential usefulness of PHRs and improve the quality of the physician-patient relationship to increase patients' intention to use them. PMID:24142185

  11. Empirical evaluation of analytical models for parallel relational data-base queries. Master's thesis

    SciTech Connect

    Denham, M.C.

    1990-12-01

    This thesis documents the design and implementation of three parallel join algorithms to be used in the verification of analytical models developed by Kearns. Kearns developed a set of analytical models for a variety of relational database queries. These models serve as tools for the design of parallel relational database systems. Each of Kearns' models is classified as either single step or multiple step. The single step models reflect queries that require only one operation, while the multiple step models reflect queries that require multiple operations. Three parallel join algorithms were implemented based upon Kearns' models. Two are based upon single step join models and one is based upon a multiple step join model. They are implemented on an Intel iPSC/1 parallel computer. The single step join algorithms include the parallel nested-loop join and the bucket (or hash) join. The multiple step algorithm implemented is a pipelined version of the bucket join. The results show that, within the constraints of the test cases run, all three models are accurate to within about 8.5%, and they should prove useful in the design of parallel relational database systems.
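
    For readers unfamiliar with the bucket (hash) join mentioned above, the single-node sketch below shows its core idea: build a hash table on one relation, then probe it with each tuple of the other. In the parallel versions discussed in the thesis, each processor would effectively run this on its own bucket; the toy relations here are hypothetical.

        def hash_join(R, S, key_r, key_s):
            """Build a hash table on R keyed by key_r, then probe with S."""
            buckets = {}
            for r in R:
                buckets.setdefault(r[key_r], []).append(r)
            return [(r, s) for s in S for r in buckets.get(s[key_s], [])]

        # Example with hypothetical toy relations joined on a shared id column:
        R = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
        S = [{"id": 2, "b": "z"}]
        print(hash_join(R, S, "id", "id"))  # -> [({'id': 2, ...}, {'id': 2, ...})]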

  12. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to our best knowledge, most previous researches implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not provide this in public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information for implementing gaze tracking system. We address this problem providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of user’s head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system shows high performance in terms of accuracy, user convenience and interest. PMID:27589768

  13. Empirical force field for cisplatin based on quantum dynamics data: case study of new parameterization scheme for coordination compounds.

    PubMed

    Yesylevskyy, S; Cardey, Bruno; Kraszewski, S; Foley, Sarah; Enescu, Mironel; da Silva, Antônio M; Dos Santos, Hélio F; Ramseyer, Christophe

    2015-10-01

    Parameterization of molecular complexes containing a metallic compound, such as cisplatin, is challenging due to the unconventional coordination nature of the bonds which involve platinum atoms. In this work, we develop a new methodology of parameterization for such compounds based on quantum dynamics (QD) calculations. We show that the coordination bonds and angles are more flexible than in normal covalent compounds. The influence of explicit solvent is also shown to be crucial to determine the flexibility of cisplatin in quantum dynamics simulations. Two empirical topologies of cisplatin were produced by fitting its atomic fluctuations against QD in vacuum and QD with explicit first solvation shell of water molecules respectively. A third topology built in a standard way from the static optimized structure was used for comparison. The later one leads to an excessively rigid molecule and exhibits much smaller fluctuations of the bonds and angles than QD reveals. It is shown that accounting for the high flexibility of cisplatin molecule is needed for adequate description of its first hydration shell. MD simulations with flexible QD-based topology also reveal a significant decrease of the barrier of passive diffusion of cisplatin accross the model lipid bilayer. These results confirm that flexibility of organometallic compounds is an important feature to be considered in classical molecular dynamics topologies. Proposed methodology based on QD simulations provides a systematic way of building such topologies.

  14. Synthesizing Results From Empirical Research on Computer-Based Scaffolding in STEM Education

    PubMed Central

    Belland, Brian R.; Walker, Andrew E.; Kim, Nam Ju; Lefler, Mason

    2016-01-01

    Computer-based scaffolding assists students as they generate solutions to complex problems, goals, or tasks, helping increase and integrate their higher order skills in the process. However, despite decades of research on scaffolding in STEM (science, technology, engineering, and mathematics) education, no existing comprehensive meta-analysis has synthesized the results of these studies. This review addresses that need by synthesizing the results of 144 experimental studies (333 outcomes) on the effects of computer-based scaffolding designed to assist the full range of STEM learners (primary through adult education) as they navigated ill-structured, problem-centered curricula. Results of our random effect meta-analysis (a) indicate that computer-based scaffolding showed a consistently positive (ḡ = 0.46) effect on cognitive outcomes across various contexts of use, scaffolding characteristics, and levels of assessment and (b) shed light on many scaffolding debates, including the roles of customization (i.e., fading and adding) and context-specific support. Specifically, scaffolding’s influence on cognitive outcomes did not vary on the basis of context-specificity, presence or absence of scaffolding change, and logic by which scaffolding change is implemented. Scaffolding’s influence was greatest when measured at the principles level and among adult learners. Still scaffolding’s effect was substantial and significantly greater than zero across all age groups and assessment levels. These results suggest that scaffolding is a highly effective intervention across levels of different characteristics and can largely be designed in many different ways while still being highly effective. PMID:28344365
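
    A pooled effect like the ḡ = 0.46 reported above is typically obtained from a random-effects model. The sketch below shows one standard estimator (DerSimonian-Laird) as an illustration; the review's exact procedure may differ.

        import numpy as np

        def dersimonian_laird(g, v):
            """Random-effects pooled effect from per-study effect sizes g and
            within-study variances v, using the DerSimonian-Laird tau^2."""
            g, v = np.asarray(g, float), np.asarray(v, float)
            w = 1.0 / v
            g_fixed = np.sum(w * g) / np.sum(w)
            Q = np.sum(w * (g - g_fixed) ** 2)              # heterogeneity statistic
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(g) - 1)) / c)          # between-study variance
            w_re = 1.0 / (v + tau2)                          # random-effects weights
            g_re = np.sum(w_re * g) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return g_re, se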

  15. Theoretical bases for research on the acquisition of social sex-roles by children of lesbian mothers.

    PubMed

    Nungesser, L G

    1980-01-01

    The present study, which examines the socialization effects of lesbian mothers upon their children, begins with a discussion of the classification and measurement of sex-typed behaviors. Theories from developmental, behavioral, and social psychology are applied, in order to distinguish between the acquisition of sex-typed behaviors and the actual performance of those behaviors. The conditions affecting the modeling process are also discussed. Lesbian lifestyles and values are explored through a review of several descriptive studies of lesbian mothers. Finally, an application of theoretical models is presented to determine the socialization effects on the children. A theoretical base is provided for suggested experimental research.

  16. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    SciTech Connect

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho-kwang

    2014-11-24

    Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radii increase. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low temperature and high pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient cation radii have a higher transition pressure.

  17. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    DOE PAGES

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; ...

    2014-11-24

    Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radii increase. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low temperature and high pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient cation radii have a higher transition pressure.

  18. The influence of land urbanization on landslides: An empirical estimation based on Chinese provincial panel data.

    PubMed

    Li, Gerui; Lei, Yalin; Yao, Huajun; Wu, Sanmang; Ge, Jianping

    2017-04-10

    This study used panel data for 28 provinces and municipalities in China from 2003 to 2014 to investigate the relationship between land urbanization and landslides, building panel models for a national sample and subsamples from China's three regions, and examined landslide prevention measures in light of that relationship. The results showed that 1) at the national level, the percentage of built-up area is negatively associated with landslides, while road density is positively associated with them. 2) At the regional level, the improvement of landslide prevention measures with increasing economic development appears only for built-up areas. The percentage of built-up area increases the number of landslides in the western region and decreases it in the central and eastern regions; the degree of decrease in the eastern region is larger than in the central region. Road density increases the number of landslides in each region, and the degree increases gradually from west to east. 3) The effect of landslide prevention funding is not obvious. Although the amount of landslide prevention funding decreases the number of landslides at the national level, the degree of decrease is too small. Except in the central region, landslide prevention funding did not effectively decrease the number of landslides in the western and eastern regions. We propose a series of policy implications based on these results that may help to improve landslide prevention measures.

  19. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    PubMed

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.

  20. A prediction procedure for propeller aircraft flyover noise based on empirical data

    NASA Astrophysics Data System (ADS)

    Smith, M. H.

    1981-04-01

    Forty-eight different flyover noise certification tests are analyzed using multiple linear regression methods. A prediction model is presented based on this analysis, and the results compared with the test data and two other prediction methods. The aircraft analyzed include 30 single engine aircraft, 16 twin engine piston aircraft, and two twin engine turboprops. The importance of helical tip Mach number is verified and the relationship of several other aircraft, engine, and propeller parameters is developed. The model shows good agreement with the test data and is at least as accurate as the other prediction methods. It has the advantage of being somewhat easier to use since it is in the form of a single equation.

  1. Psychological First Aid: A Consensus-Derived, Empirically Supported, Competency-Based Training Model

    PubMed Central

    Everly, George S.; Brown, Lisa M.; Wendelboe, Aaron M.; Abd Hamid, Nor Hashidah; Tallchief, Vicki L.; Links, Jonathan M.

    2014-01-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities. PMID:23865656

  2. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood, and most methods consider only one moment order. We study joint multifractal analysis based on the partition function with two moment orders, which was originally invented to investigate fluid fields, and analytically derive several important properties. We apply the method numerically to binomial measures with multifractal cross correlations and to bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of the mass function, singularity strength and multifractal spectrum of the cross correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
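
    The two-moment-order partition function described above can be illustrated in a few lines. The sketch below computes a joint partition function chi(q1, q2; s) for two positive measures on the same support; normalization conventions vary across papers, so this is an illustrative form rather than the paper's exact definition.

        import numpy as np

        def joint_partition_function(mu1, mu2, q1, q2, box_sizes):
            """Joint partition function for two measures (1-D arrays of
            strictly positive values on a common support): sum over boxes of
            size s of mu1(box)**q1 * mu2(box)**q2."""
            chi = []
            for s in box_sizes:
                n = len(mu1) // s
                m1 = mu1[: n * s].reshape(n, s).sum(axis=1)
                m2 = mu2[: n * s].reshape(n, s).sum(axis=1)
                chi.append(np.sum(m1**q1 * m2**q2))
            return np.array(chi)

        # The joint scaling exponent tau(q1, q2) is then estimated as the
        # slope of log(chi) against log(s).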

  3. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study †

    PubMed Central

    Tîrnăucă, Cristina; Montaña, José L.; Ontañón, Santiago; González, Avelino J.; Pardo, Luis M.

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent’s actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956
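
    To make the PFA-based recognition setting concrete, the sketch below scores an observed action trace under several candidate automata and picks the best-scoring strategy. The structure (start distribution, transition matrix, per-state emission probabilities) and the forward-style scoring are a generic simplification, not the paper's implementation.

        import numpy as np

        class PFA:
            """Minimal probabilistic finite automaton scorer (a sketch)."""

            def __init__(self, start, trans, emit):
                self.start = np.asarray(start, float)  # (n_states,)
                self.trans = np.asarray(trans, float)  # (n_states, n_states)
                self.emit = np.asarray(emit, float)    # (n_states, n_actions)

            def log_likelihood(self, actions):
                """Forward-algorithm log-likelihood of an action trace."""
                ll, alpha = 0.0, self.start
                for a in actions:
                    alpha = alpha @ (self.trans * self.emit[:, a])
                    ll += np.log(alpha.sum())
                    alpha = alpha / alpha.sum()  # rescale to avoid underflow
                return ll

        def recognize(models, actions):
            """Behavioral recognition: return the name of the strategy whose
            PFA assigns the observed trace the highest likelihood."""
            return max(models, key=lambda name: models[name].log_likelihood(actions))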

  4. Empirical estimation of consistency parameter in intertemporal choice based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2007-07-01

    Impulsivity and inconsistency in intertemporal choice have been attracting attention in econophysics and neuroeconomics. Although loss of self-control by substance abusers is strongly related to their inconsistency in intertemporal choice, researchers in neuroeconomics and psychopharmacology have usually studied impulsivity in intertemporal choice using a discount rate (e.g. hyperbolic k), with little effort expended on parameterizing a subject's inconsistency in intertemporal choice. Recent studies using Tsallis' statistics-based econophysics have found a discount function (i.e. the q-exponential discount function) which may continuously parameterize a subject's consistency in intertemporal choice. In order to examine the usefulness of the consistency parameter (0⩽q⩽1) of the q-exponential discount function in behavioral studies, we experimentally estimated the consistency parameter q in Tsallis' statistics-based discounting function by assessing the points of subjective equality (indifference points) at seven delays (1 week-25 years) in humans (N=24). We observed that most (N=19) subjects' intertemporal choice was completely inconsistent (q=0, i.e. hyperbolic discounting), the mean consistency (0⩽q⩽1) was smaller than 0.5, and only one subject had completely consistent intertemporal choice (q=1, i.e. exponential discounting). There was no significant correlation between the impulsivity and inconsistency parameters. Our results indicate that individual differences in consistency in intertemporal choice can be parameterized by introducing a q-exponential discount function, and that most people discount delayed rewards hyperbolically rather than exponentially (i.e. mean q smaller than 0.5). Further, impulsivity and inconsistency in intertemporal choice can be considered separate behavioral tendencies. The usefulness of the consistency parameter q in psychopharmacological studies of addictive behavior was demonstrated in the present study.
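
    For reference, a q-exponential discount function of the kind described above is commonly written as follows (the exact normalization of the rate constant k_q is an assumption here, and the paper may use a slightly different form):

        \[
          V(D) = \frac{V_0}{\bigl[\,1 + (1-q)\,k_q D\,\bigr]^{1/(1-q)}},
          \qquad
          \lim_{q \to 1} V(D) = V_0\, e^{-k_q D},
          \qquad
          V(D)\big|_{q=0} = \frac{V_0}{1 + k_q D}.
        \]

    This matches the abstract's reading: q = 1 recovers fully consistent exponential discounting, while q = 0 recovers hyperbolic discounting.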

  5. An empirical RBF model of the magnetosphere parameterized by interplanetary and ground-based drivers

    NASA Astrophysics Data System (ADS)

    Tsyganenko, N. A.; Andreeva, V. A.

    2016-11-01

    In our recent paper (Andreeva and Tsyganenko, 2016), a novel method was proposed to model the magnetosphere directly from spacecraft data, with no a priori knowledge or ad hoc assumptions about the geometry of the magnetic field sources. The idea was to split the field into toroidal and poloidal parts and then expand each part into a weighted sum of radial basis functions (RBF). In the present work we take the next step forward by developing a full-fledged model of the near magnetosphere, based on a multiyear set of space magnetometer data (1995-2015) and driven by ground-based and interplanetary input parameters. The model consolidates the largest amount of data used to date and provides the best merit parameters so far, in terms of both the overall RMS residual field and record-high correlation coefficients between the observed and model field components. By experimenting with different combinations of input parameters and their time-averaging intervals, we found the best results so far to be given by the ram pressure Pd, SYM-H, and the N-index by Newell et al. (2007). In addition, the IMF By has also been included as a model driver, with the goal of more accurately representing IMF penetration effects. The model faithfully reproduces both externally and internally induced variations in the global distribution of the geomagnetic field and electric currents. Stronger solar wind driving results in a deepening of the equatorial field depression and a dramatic increase of its dawn-dusk asymmetry. The Earth's dipole tilt causes a consistent deformation of the magnetotail current sheet and a significant north-south asymmetry of the polar cusp depressions on the dayside. Next steps to further develop the new approach are also discussed.

  6. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products have been released during the past few decades, but their quality might not meet requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observations were chosen to evaluate our algorithm, showing that the proposed EOF fusion method is capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts are needed to evaluate and improve the proposed algorithm at larger spatial scales, over longer time periods, and over different land cover types. PMID:27472383
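
    As background, a standard (unmodified) EOF decomposition can be computed via an SVD of a time-by-space anomaly matrix, as sketched below; the paper's modified EOF fusion scheme adds steps not reproduced here.

        import numpy as np

        def eof_modes(anomaly, n_modes=3):
            """Standard EOF decomposition of a (time x space) anomaly matrix.
            Returns the leading spatial patterns, their time series, and the
            fraction of variance each mode explains."""
            U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
            pcs = U[:, :n_modes] * s[:n_modes]         # principal component time series
            eofs = Vt[:n_modes]                        # spatial EOF patterns
            explained = s[:n_modes]**2 / np.sum(s**2)  # variance fraction per mode
            return eofs, pcs, explained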

  7. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network.

    PubMed

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-10-30

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. End-point continuation based on previously stored data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
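
    The IMF-screening and feature-extraction steps described above could look roughly like the sketch below, which assumes the IMFs have already been produced by an (E)EMD routine and are passed in as an array. The exact screening rule and feature set in the paper may differ; this only illustrates the correlation-threshold idea.

        import numpy as np

        def select_imfs_and_features(imfs):
            """Keep IMFs whose correlation with the first IMF reaches the
            average correlation, then extract (energy, std) per kept IMF as
            a feature vector for a classifier such as a PNN.

            `imfs` is an (n_imfs x n_samples) array from an (E)EMD routine.
            """
            corr = np.array([abs(np.corrcoef(imfs[0], imf)[0, 1])
                             for imf in imfs[1:]])
            threshold = corr.mean()  # average correlation coefficient
            selected = [imfs[i + 1] for i, c in enumerate(corr) if c >= threshold]
            features = [(np.sum(imf**2), np.std(imf)) for imf in selected]
            return np.array(features).ravel()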

  8. Identifying P phase arrival of weak events: The Akaike Information Criterion picking application based on the Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xibing; Shang, Xueyi; Morales-Esteban, A.; Wang, Zewei

    2017-03-01

    Seismic P phase arrival picking of weak events is a difficult problem in seismology. The algorithm proposed in this research is based on Empirical Mode Decomposition (EMD) and on the Akaike Information Criterion (AIC) picker. It has been called the EMD-AIC picker. The EMD is a self-adaptive signal decomposition method that not only improves Signal to Noise Ratio (SNR) but also retains P phase arrival information. Then, P phase arrival picking has been determined by applying the AIC picker to the selected main Intrinsic Mode Functions (IMFs). The performance of the EMD-AIC picker has been evaluated on the basis of 1938 micro-seismic signals from the Yongshaba mine (China). The P phases identified by this algorithm have been compared with manual pickings. The evaluation results confirm that the EMD-AIC pickings are highly accurate for the majority of the micro-seismograms. Moreover, the pickings are independent of the kind of noise. Finally, the results obtained by this algorithm have been compared to the wavelet based Discrete Wavelet Transform (DWT)-AIC pickings. This comparison has demonstrated that the EMD-AIC picking method has a better picking accuracy than the DWT-AIC picking method, thus showing this method's reliability and potential.
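
    The AIC picker applied to each selected IMF is commonly implemented with the variance-based (Maeda-type) formulation sketched below; whether the paper uses this exact variant is an assumption. The arrival is taken at the index minimizing AIC(k).

        import numpy as np

        def aic_pick(x):
            """Variance-based AIC picker for a 1-D trace x:
            AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])),
            with the P onset estimated at the minimizing index."""
            x = np.asarray(x, float)
            N = len(x)
            aic = np.full(N, np.inf)
            for k in range(2, N - 2):
                v1, v2 = np.var(x[:k]), np.var(x[k:])
                if v1 > 0 and v2 > 0:
                    aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
            return int(np.argmin(aic))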

  9. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. End-point continuation based on previously stored data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985

  10. Empirically Unbinding the Double Bind.

    ERIC Educational Resources Information Center

    Olson, David H.

    The theoretical concept of the double bind and the possibilities for researching it are discussed. The author has observed that theory and research, which should be reciprocal and mutually beneficial, have been working, as concerns the double bind, at odds with one another. Two approaches to empirically investigating the concept are considered via…

  11. Emotional competencies in geriatric nursing: empirical evidence from a computer based large scale assessment calibration study.

    PubMed

    Kaspar, Roman; Hartig, Johannes

    2016-03-01

    The care of older people has been described as involving substantial emotion-related affordances. Scholars in vocational training and nursing disagree on whether emotion-related skills can be conceptualized and assessed as a professional competence. Studies on emotion work and empathy regularly neglect the multidimensionality of these phenomena and their relation to the care process, and are rarely conclusive with respect to nursing behavior in practice. To test the status of emotion-related skills as a facet of client-directed geriatric nursing competence, 402 final-year nursing students from 24 German schools responded to a 62-item computer-based test. Fourteen items were developed to represent emotion-related affordances. Multidimensional IRT modeling was employed to assess a potential subdomain structure. Emotion-related test items did not form a separate subdomain and were found to be discriminating across the whole competence continuum. Tasks concerning emotion work and empathy are reliable indicators of various levels of client-directed nursing competence. Claims for a distinct emotion-related competence in geriatric nursing, however, appear excessive from a process-oriented perspective.

  12. Evidence-Based Guidelines for Empirical Therapy of Neutropenic Fever in Korea

    PubMed Central

    Kim, Sung-Han; Kim, Soo Young; Kim, Chung-Jong; Park, Wan Beom; Song, Young Goo; Choi, Jung-Hyun

    2011-01-01

    Neutrophils play an important role in immunological function. Neutropenic patients are vulnerable to infection, and, fever aside, inflammatory signs are scarce in many cases. Additionally, because infections can worsen rapidly, early evaluation and treatment are especially important in febrile neutropenic patients. In cases in which febrile neutropenia is anticipated due to anticancer chemotherapy, antibiotic prophylaxis can be used, based on the risk of infection. Antifungal prophylaxis may also be considered if long-term neutropenia or mucosal damage is expected. When fever is observed in patients suspected of having neutropenia, an adequate physical examination and blood and sputum cultures should be performed. Initial antibiotics should be chosen by considering the risk of complications following the infection; if the risk is low, oral antibiotics can be used. For initial intravenous antibiotics, monotherapy with a broad-spectrum antibiotic or combination therapy with two antibiotics is recommended. At 3-5 days after beginning the initial antibiotic therapy, the condition of the patient is assessed again to determine whether the fever has subsided or symptoms have worsened. If the patient's condition has improved, intravenous antibiotics can be replaced with oral antibiotics; if the condition has deteriorated, a change of antibiotics or the addition of antifungal agents should be considered. If the causative microorganism is identified, the initial antimicrobial or antifungal agents should be changed accordingly. When the cause is not detected, the initial agents should be continued until the neutrophil count recovers. PMID:21716917

  13. Temporal asymmetries in Interbank Market: an empirically grounded Agent-Based Model

    NASA Astrophysics Data System (ADS)

    Zlatic, Vinko; Popovic, Marko; Abraham, Hrvoje; Caldarelli, Guido; Iori, Giulia

    2014-03-01

    We analyse changes in the topology of the E-mid interbank market in the period from September 1st 1999 to September 1st 2009. We uncover a type of temporal irreversibility in the growth of the largest component of the interbank trading network which is not common to any of the usual network growth models. This asymmetry, which is also detected in the growth of the clustering and reciprocity coefficients, reveals that the trading mechanism is driven by different dynamics at the beginning and at the end of the day. We are able to recover the complexity of the system by means of a simple Agent-Based Model in which the probability of matching between counterparties depends on a time-varying vertex fitness (or attractiveness) describing banks' liquidity needs. We show that temporal irreversibility is associated with heterogeneity in the banking system and emerges when the distribution of liquidity shocks across banks is broad. We acknowledge support from FET project FOC-II.
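
    A toy version of the fitness-based matching rule described above might look like the sketch below: in each round a pair of banks is drawn with probability proportional to their fitness values. The paper's model has richer dynamics (time-varying fitness, directed lending); this only illustrates the matching mechanism.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_day(fitness, n_rounds=1000):
            """Draw counterparty pairs with probability proportional to each
            bank's fitness (liquidity need); returns a list of index pairs."""
            fitness = np.asarray(fitness, float)
            p = fitness / fitness.sum()
            trades = []
            for _ in range(n_rounds):
                i, j = rng.choice(len(fitness), size=2, replace=False, p=p)
                trades.append((i, j))
            return trades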

  14. Carbon emissions, logistics volume and GDP in China: empirical analysis based on panel data model.

    PubMed

    Guo, Xiaopeng; Ren, Dongfang; Shi, Jiaxing

    2016-12-01

    This paper studies the relationship among carbon emissions, GDP, and logistics by using a panel data model and a combination of statistical and econometric theory. The model is based on the historical data of 10 typical provinces and cities in China during 2005-2014. The model adds a logistics variable to those used in previous studies, proxied by each province's freight turnover. Carbon emissions are calculated using the annual consumption of coal, oil, and natural gas. GDP is the gross domestic product. The results show that logistics volume and GDP both contribute to carbon emissions and that the long-term relationships differ across cities in China, mainly owing to differences in development mode, economic structure, and level of logistics development. After testing the panel model specification, a variable-coefficient panel model was established. The influence of GDP and logistics on carbon emissions is then obtained from the estimated relationships among the variables. The paper concludes with its main findings and provides recommendations for rational planning of sustainable urban development and environmental protection in China.

  15. A theoretical individual-based model of Brown Ring Disease in Manila clams, Venerupis philippinarum

    NASA Astrophysics Data System (ADS)

    Paillard, Christine; Jean, Fred; Ford, Susan E.; Powell, Eric N.; Klinck, John M.; Hofmann, Eileen E.; Flye-Sainte-Marie, Jonathan

    2014-08-01

    An individual-based mathematical model was developed to investigate the biological and environmental interactions that influence the prevalence and intensity of Brown Ring Disease (BRD), a disease caused by the bacterial pathogen Vibrio tapetis in the Manila clam (Venerupis (= Tapes, = Ruditapes) philippinarum). V. tapetis acts as an external microparasite, adhering to the surface of the mantle edge and its secretion, the periostracal lamina, causing the symptomatic brown deposit. Brown Ring Disease is atypical in that it leaves a shell scar that provides a unique tool for diagnosis in either live or dead clams. The model was formulated using laboratory and field measurements of BRD development in Manila clams, physiological responses of the clam to the pathogen, and the physiology of V. tapetis, as well as theoretical understanding of bacterial disease progression in marine shellfish. The simulation results obtained for an individual Manila clam were expanded to cohorts and populations using a probability distribution that prescribed a range of variability for parameters in a three-dimensional framework: assimilation rate, clam hemocyte activity rate (the number of bacteria ingested per hemocyte per day), and clam calcification rate (a measure of the ability to recover by covering over the symptomatic brown ring deposit), which sensitivity studies indicated to be important processes in determining BRD prevalence and intensity. This approach allows concurrent simulation of individuals with a variety of different physiological capabilities (phenotypes), and hence, by implication, differing genotypic composition. Different combinations of the three variables provide robust estimates of the fate of individuals with particular characteristics in a population that consists of mixtures of all possible combinations. The BRD model was implemented using environmental observations from sites in Brittany, France, where Manila clams routinely exhibit BRD signs. The simulated…

  16. An empirical approach to predicting long term behavior of metal particle based recording media

    NASA Technical Reports Server (NTRS)

    Hadad, Allan S.

    1991-01-01

    Alpha iron particles used for magnetic recording are prepared through a series of dehydration and reduction steps of alpha-Fe2O3-H2O, resulting in acicular, polycrystalline, body-centered cubic (bcc) alpha-Fe particles that are single magnetic domains. Since fine iron particles are pyrophoric by nature, stabilization processes had to be developed for iron particles to be considered a viable recording medium for long-term archival (i.e., 25+ years) information storage. The primary means of establishing stability is passivation, or controlled oxidation, of the iron particle's surface. Since iron particles used for magnetic recording are small, additional oxidation has a direct impact on performance, especially where archival storage of recorded information for long periods is important. Further stabilization chemistry and processes had to be developed to guarantee that iron particles could be considered a viable long-term recording medium. In an effort to retard the diffusion of iron ions through the oxide layer, other elements such as silicon, aluminum, and chromium have been added to the base iron to promote denser scale formation, to alleviate some of the non-stoichiometric behavior of the oxide, or both. The presence of water vapor has been shown to disrupt the passive layer, subsequently increasing the oxidation rate of the iron. A study was undertaken to examine the degradation in magnetic properties as a function of both temperature and humidity on silicon-containing iron particles between 50-120 deg C and 3-89 percent relative humidity. The methodology by which experimental data were collected and analyzed, leading to predictive capability, is discussed.

  17. Does community-based conservation shape favorable attitudes among locals? an empirical study from nepal.

    PubMed

    Mehta, J N; Heinen, J T

    2001-08-01

    Like many developing countries, Nepal has adopted a community-based conservation (CBC) approach in recent years to manage its protected areas mainly in response to poor park-people relations. Among other things, under this approach the government has created new "people-oriented" conservation areas, formed and devolved legal authority to grassroots-level institutions to manage local resources, fostered infrastructure development, promoted tourism, and provided income-generating trainings to local people. Of interest to policy-makers and resource managers in Nepal and worldwide is whether this approach to conservation leads to improved attitudes on the part of local people. It is also important to know if personal costs and benefits associated with various intervention programs, and socioeconomic and demographic characteristics influence these attitudes. We explore these questions by looking at the experiences in Annapurna and Makalu-Barun Conservation Areas, Nepal, which have largely adopted a CBC approach in policy formulation, planning, and management. The research was conducted during 1996 and 1997; the data collection methods included random household questionnaire surveys, informal interviews, and review of official records and published literature. The results indicated that the majority of local people held favorable attitudes toward these conservation areas. Logistic regression results revealed that participation in training, benefit from tourism, wildlife depredation issue, ethnicity, gender, and education level were the significant predictors of local attitudes in one or the other conservation area. We conclude that the CBC approach has potential to shape favorable local attitudes and that these attitudes will be mediated by some personal attributes.

  18. Spectroscopic, colorimetric and theoretical investigation of salicylidene hydrazine based reduced Schiff base and its application towards biologically important anions.

    PubMed

    Jana, Sankar; Dalapati, Sasanka; Alam, Md Akhtarul; Guchhait, Nikhil

    2012-06-15

    A reduced Schiff base anionic receptor 1 [N,N'-bis-(2-hydroxy-5-nitro-benzyl)hydrazine] has been synthesized, characterized, and reported as a selective chromogenic receptor for fluoride, acetate and phosphate anions over the other tested anions such as chloride, bromide, iodide and hydrogensulphite. Colorimetric naked-eye detection and UV-vis absorption spectroscopy were used to distinguish the recognition behaviour towards the various anions. The receptor-anion complexation occurs mainly via hydrogen-bonding interactions, which facilitates generation of a charge-transfer band in the UV-vis spectra and causes a large bathochromic shift as well as a naked-eye colour change. Complexation stoichiometry, binding constant, and the free-energy change of complex formation were determined from Benesi-Hildebrand plots. The binding constant and free-energy change values indicate spontaneous complexation. The experimental results have been correlated with theoretical calculations on both the receptor and the complex by the Density Functional Theory (DFT) method, using the B3LYP hybrid functional and the 6-311++G(d,p) basis set.
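
    For context, one common 1:1 form of the Benesi-Hildebrand relation for an absorbance titration, together with the free-energy relation, is the following (the paper may use a variant of this expression):

        \[
          \frac{1}{A - A_0}
            = \frac{1}{K\,(A_{\max} - A_0)\,[\text{anion}]}
            + \frac{1}{A_{\max} - A_0},
          \qquad
          \Delta G = -RT \ln K,
        \]

    so plotting 1/(A - A_0) against 1/[anion] yields the binding constant K from the ratio of intercept to slope, and a negative ΔG indicates spontaneous complexation.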

  19. Wind-blown Sand Electrification Inspired Triboelectric Energy Harvesting Based on Homogeneous Inorganic Materials Contact: A Theoretical Study and Prediction

    PubMed Central

    Hu, Wenwen; Wu, Weiwei; Zhou, Hao-miao

    2016-01-01

    Triboelectric nanogenerators (TENGs) based on contact electrification between heterogeneous materials have been widely studied. Inspired by wind-blown sand electrification, we design a novel kind of TENG based on size-dependent electrification using homogeneous inorganic materials. Based on the theory of asymmetric contact between homogeneous material surfaces, a calculation of surface charge density has been carried out. Furthermore, the theoretical output of a homogeneous-material-based TENG has been simulated. This work may therefore pave the way for fabricating TENGs without the limitation of the triboelectric series. PMID:26817411

  20. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  1. Counselor Training: Empirical Findings and Current Approaches

    ERIC Educational Resources Information Center

    Buser, Trevor J.

    2008-01-01

    The literature on counselor training has included attention to cognitive and interpersonal skill development and has reported on empirical findings regarding the relationship of training with client outcomes. This article reviews the literature on each of these topics and discusses empirical and theoretical underpinnings of recently developed…

  2. The dappled nature of causes of psychiatric illness: replacing the organic-functional/hardware-software dichotomy with empirically based pluralism.

    PubMed

    Kendler, K S

    2012-04-01

    Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually reinforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which, when they malfunction, can be cleanly divided into those with software problems and those with hardware problems? I propose 11 categories of difference-makers for psychiatric illness, from molecular genetics through culture, and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness that our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but on how the difference-makers for psychiatric illness are in fact distributed.

  3. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information.

    PubMed Central

    Lamers, L M

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness funds provide compulsory health insurance coverage for the 60 percent of the population in the lowest income brackets. STUDY DESIGN: A demographic model and DCG capitation models are estimated by means of ordinary least squares, with an individual's annual healthcare expenditures in 1994 as the dependent variable. For subgroups based on health survey information, costs predicted by the models are compared with actual costs. Using stepwise regression procedures, a subset of relevant survey variables that could improve the predictive accuracy of the three-year DCG model was identified. Capitation models were extended with these variables. DATA COLLECTION/EXTRACTION METHODS: For the empirical analysis, panel data of sickness fund members that contained demographic information, annual healthcare expenditures, and diagnostic information from hospitalizations for each member were used. In 1993, a mailed health survey was conducted among a random sample of 15,000 persons in the panel data set, with a 70 percent response rate. PRINCIPAL FINDINGS: The predictive accuracy of the demographic model improves when it is extended with diagnostic information from prior hospitalizations (DCGs). A subset of survey variables further improves the predictive accuracy of the DCG capitation models. The predictable profits and losses based on survey information are smaller for the DCG models than for the demographic model. Most persons with predictable losses based on health survey information were not hospitalized in the preceding year. CONCLUSIONS: The use of diagnostic information from prior hospitalizations is a promising option for improving the demographic capitation payment formula. This study suggests that diagnostic
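
    A minimal sketch of the model comparison described above, with simulated data standing in for sickness-fund records; both the demographic and the DCG-extended capitation models are fit by ordinary least squares, as in the study:

      # OLS capitation models: demographic only vs. demographic plus a
      # prior-hospitalization (DCG-style) indicator; data are simulated.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      age    = rng.integers(18, 90, n)
      female = rng.integers(0, 2, n)
      dcg    = rng.integers(0, 2, n)   # costly prior hospitalization (0/1)
      cost   = 600 + 25*age + 150*female + 4000*dcg + rng.normal(0, 1500, n)

      X_demo = np.column_stack([np.ones(n), age, female])
      X_dcg  = np.column_stack([X_demo, dcg])

      for name, X in [("demographic", X_demo), ("demographic + DCG", X_dcg)]:
          beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
          r2 = 1 - (cost - X @ beta).var() / cost.var()
          print(f"{name:20s} R^2 = {r2:.3f}")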

  4. An improved empirical model of electron and ion fluxes at geosynchronous orbit based on upstream solar wind conditions

    DOE PAGES

    Denton, M. H.; Henderson, M. G.; Jordanova, V. K.; ...

    2016-07-01

    In this study, a new empirical model of the electron fluxes and ion fluxes at geosynchronous orbit (GEO) is introduced, based on observations by Los Alamos National Laboratory (LANL) satellites. The model provides flux predictions in the energy range ~1 eV to ~40 keV, as a function of local time, energy, and the strength of the solar wind electric field (the negative product of the solar wind speed and the z component of the magnetic field). Given appropriate upstream solar wind measurements, the model provides a forecast of the fluxes at GEO with a ~1 h lead time. Model predictions are tested against in-sample observations from LANL satellites and also against out-of-sample observations from the Compact Environmental Anomaly Sensor II detector on the AMC-12 satellite. The model does not reproduce all structure seen in the observations. However, for the intervals studied here (quiet and storm times) the normalized root-mean-square deviation < ~0.3. It is intended that the model will improve forecasting of the spacecraft environment at GEO and also provide improved boundary/input conditions for physical models of the magnetosphere.
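
    A minimal sketch of the model's driving parameter as stated above, the solar wind electric field (the negative product of speed and Bz); the empirical flux tables binned by local time, energy, and this parameter are not reproduced here:

      # Solar wind electric field in mV/m for v in km/s and Bz in nT:
      # E = -v * Bz * 1e-3.  Southward Bz (negative) gives a positive,
      # geoeffective E.
      def solar_wind_E(v_km_s: float, bz_nT: float) -> float:
          return -v_km_s * bz_nT * 1e-3

      for v, bz in [(400.0, 2.0), (450.0, -5.0), (700.0, -12.0)]:
          print(f"v = {v:5.0f} km/s, Bz = {bz:6.1f} nT -> "
                f"E = {solar_wind_E(v, bz):6.2f} mV/m")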

  5. Empirically Based Profiles of the Early Literacy Skills of Children With Language Impairment in Early Childhood Special Education.

    PubMed

    Justice, Laura; Logan, Jessica; Kaderavek, Joan; Schmitt, Mary Beth; Tompkins, Virginia; Bartlett, Christopher

    2015-01-01

    The purpose of this study was to empirically determine whether specific profiles characterize preschool-aged children with language impairment (LI) with respect to their early literacy skills (print awareness, name-writing ability, phonological awareness, alphabet knowledge); the primary interest was to determine if one or more profiles suggested vulnerability to future reading problems. Participants were 218 children enrolled in early childhood special education classrooms, 95% of whom received speech-language services. Children were administered an assessment of early literacy skills in the fall of the academic year. Based on results of latent profile analysis, four distinct literacy profiles were identified, with the single largest profile (55% of children) representing children with generally poor literacy skills across all areas examined. Children in the two low-risk categories had higher oral language skills than those in the high-risk and moderate-risk profiles. Across three of the four early literacy measures, children with language as their primary disability had higher scores than those with LI concomitant with other disabilities. These findings indicate that there are specific profiles of early literacy skills among children with LI, with about one half of children exhibiting a profile indicating potential susceptibility to future reading problems.
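
    A minimal sketch of a latent-profile-style analysis, approximated with a Gaussian mixture over four early-literacy scores and simulated data (the study applied latent profile analysis to assessments of 218 children):

      # Gaussian-mixture approximation to latent profile analysis; the
      # number of profiles is chosen by BIC.  Data are simulated.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      low = rng.normal([-1.0, -1.0, -1.0, -1.0], 0.5, size=(120, 4))
      avg = rng.normal([0.5, 0.4, 0.6, 0.5], 0.5, size=(98, 4))
      scores = np.vstack([low, avg])  # print, name-writing, phonology, alphabet

      best = min((GaussianMixture(k, n_init=5, random_state=0).fit(scores)
                  for k in range(1, 7)),
                 key=lambda m: m.bic(scores))
      print("profiles chosen by BIC:", best.n_components)
      print("profile sizes:", np.bincount(best.predict(scores)))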

  6. An improved empirical model of electron and ion fluxes at geosynchronous orbit based on upstream solar wind conditions

    SciTech Connect

    Denton, M. H.; Henderson, M. G.; Jordanova, V. K.; Thomsen, M. F.; Borovsky, J. E.; Woodroffe, J.; Hartley, D. P.; Pitchford, D.

    2016-07-01

    In this study, a new empirical model of the electron fluxes and ion fluxes at geosynchronous orbit (GEO) is introduced, based on observations by Los Alamos National Laboratory (LANL) satellites. The model provides flux predictions in the energy range ~1 eV to ~40 keV, as a function of local time, energy, and the strength of the solar wind electric field (the negative product of the solar wind speed and the z component of the magnetic field). Given appropriate upstream solar wind measurements, the model provides a forecast of the fluxes at GEO with a ~1 h lead time. Model predictions are tested against in-sample observations from LANL satellites and also against out-of-sample observations from the Compact Environmental Anomaly Sensor II detector on the AMC-12 satellite. The model does not reproduce all structure seen in the observations. However, for the intervals studied here (quiet and storm times) the normalized root-mean-square deviation < ~0.3. It is intended that the model will improve forecasting of the spacecraft environment at GEO and also provide improved boundary/input conditions for physical models of the magnetosphere.

  7. Combined magnetic and kinetic control of advanced tokamak steady state scenarios based on semi-empirical modelling

    NASA Astrophysics Data System (ADS)

    Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.

    2015-06-01

    This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.
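
    A minimal sketch of the data-driven identification step in spirit: a linear response model y[t+1] = A y[t] + B u[t] fit by least squares to simulated open-loop data (the actual ARTAEMIS two-time-scale model structure is more elaborate; all values here are illustrative):

      # Least-squares identification of a linear response model from
      # simulated open-loop input/output data.
      import numpy as np

      rng = np.random.default_rng(3)
      A_true = np.array([[0.95, 0.02], [0.01, 0.90]])  # toy "plasma" states
      B_true = np.array([[0.10, 0.00], [0.02, 0.08]])  # toy actuator gains

      T = 500
      u = rng.normal(size=(T, 2))                 # probing actuator inputs
      y = np.zeros((T + 1, 2))
      for t in range(T):
          y[t + 1] = A_true @ y[t] + B_true @ u[t] + 0.01*rng.normal(size=2)

      # Solve y[t+1] = [A B][y[t]; u[t]] for all t in one least-squares pass.
      Z = np.hstack([y[:T], u])
      AB, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
      A_hat, B_hat = AB[:2].T, AB[2:].T
      print("identified A:\n", A_hat.round(3))
      print("identified B:\n", B_hat.round(3))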

  8. Empirically-Based Crop Insurance for China: A Pilot Study in the Down-middle Yangtze River Area of China

    NASA Astrophysics Data System (ADS)

    Wang, Erda; Yu, Yang; Little, Bertis B.; Chen, Zhongxin; Ren, Jianqiang

    Factors that caused slow growth in crop insurance participation and its ultimate failure in China were multi-faceted, including high agricultural production risk, low participation rates, inadequate public awareness, a high loss ratio, and insufficient and interrupted government financial support. Thus, a clear and present need for data-driven analyses and empirically-based risk management exists in China. In the present investigation, agricultural production data for two crops (corn, rice) in five counties in Jiangxi and Hunan provinces were used to design a pilot crop insurance program for China. The program (1) provides 75% coverage, (2) offers a 55% premium rate reduction for the farmer compared with the catastrophic coverage most recently offered, and (3) uses the currently approved governmental premium subsidy level. Thus, a safety net for Chinese farmers that helps maintain agricultural production at a level of self-sufficiency, and that costs less than half as much as current plans, requires only one change to the program: ≥80% of producers in an area must participate.
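
    A minimal sketch of the indemnity arithmetic under the 75% yield coverage described above; the yields, price, and shortfalls are hypothetical:

      # Indemnity under yield-based coverage: pay the shortfall below
      # coverage * expected yield, valued at an assumed price.
      def indemnity(expected_yield, actual_yield, price, coverage=0.75):
          shortfall = max(0.0, coverage * expected_yield - actual_yield)
          return shortfall * price

      expected = 6.0    # t/ha, expected rice yield (assumed)
      price    = 300.0  # currency units per tonne (assumed)
      for actual in (6.2, 4.8, 3.0):
          pay = indemnity(expected, actual, price)
          print(f"actual = {actual} t/ha -> indemnity = {pay:7.1f}")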

  9. A theoretical framework for whole-plant carbon assimilation efficiency based on metabolic scaling theory: a test case using Picea seedlings.

    PubMed

    Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J

    2015-06-01

    Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO(2) and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate, dark respiratory rate, or overall growth rate, and (iv) TCUE will scale isometrically with respect to the instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from the ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE differed significantly among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess the annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct
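
    A minimal sketch of how an isometric-scaling prediction such as (ii) can be tested: fit log(rate) against log(leaf mass) and check that the exponent is close to 1 (simulated data; scaling studies often use reduced major axis rather than the OLS fit shown here):

      # Log-log regression test of isometry between a flux rate and
      # total leaf mass; isometry predicts a slope of 1.
      import numpy as np

      rng = np.random.default_rng(4)
      leaf_mass = 10 ** rng.uniform(-1, 1, 60)                 # g
      net_photo = 2.0 * leaf_mass * 10 ** rng.normal(0, 0.05, 60)

      alpha, _ = np.polyfit(np.log10(leaf_mass), np.log10(net_photo), 1)
      print(f"scaling exponent = {alpha:.3f} (isometry predicts 1.0)")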

  10. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2013-11-01

    Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found unable to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post-2000 GST plateau. This hypothesis implies that about 50% of the ~ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ~ 1.5 °C instead of ~ 3.0 °C. Also, modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in the early 2000s imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings. The ~ 9.1 year oscillation appears to be a combination of long soli-lunar tidal oscillations, while quasi 10-11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements. Solar models based
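
    A minimal sketch of a harmonic model of the kind described: sinusoids with fixed periods fit to a temperature series by linear least squares (amplitudes and phases enter linearly through sine and cosine terms); the series below is synthetic:

      # Harmonic fit with fixed periods; each period's cos/sin
      # coefficients give its amplitude and phase.
      import numpy as np

      periods = [9.1, 10.4, 20.0, 60.0]      # years
      t = np.arange(1850, 2014, dtype=float)

      rng = np.random.default_rng(5)
      gst = (0.004*(t - 1850) + 0.11*np.sin(2*np.pi*t/60.0)
             + rng.normal(0, 0.08, t.size))  # synthetic GST anomaly

      cols = [np.ones_like(t), t - t.mean()]  # constant and linear trend
      for P in periods:
          cols += [np.cos(2*np.pi*t/P), np.sin(2*np.pi*t/P)]
      X = np.column_stack(cols)

      coef, *_ = np.linalg.lstsq(X, gst, rcond=None)
      for P, a, b in zip(periods, coef[2::2], coef[3::2]):
          print(f"P = {P:5.1f} yr  amplitude = {np.hypot(a, b):.3f} degC")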

  11. Theoretical Study of the Noble Metals on Semiconductor Surfaces and Titanium-Base Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Ding, Yungui

    The electronic and structural properties of the (√3×√3)R30° Ag/Si(111) and (√3×√3)R30° Au/Si(111) surfaces are investigated using first-principles total-energy calculations. We have tested almost all experimentally proposed structural models for both surfaces and found the energetically most favorable model for each of them. The lowest-energy model structure of the (√3×√3)R30° Ag/Si(111) surface consists of a top layer of Ag atoms arranged as "honeycomb-chained-trimers" lying above a distorted "missing top layer" Si(111) substrate. The coverage of Ag is 1 monolayer (ML). We find that the honeycomb structure observed in STM images arises from the electronic charge densities of an empty surface band near the Fermi level. The electronic density of states of this model gives a "pseudo-gap" around the Fermi level, which is consistent with experimental results. The lowest-energy model for the (√3×√3)R30° Au/Si(111) surface is a conjugate honeycomb-chained-trimer (CHCT-1) configuration, which consists of a top layer of trimers formed by 1 ML of Au atoms lying above a "missing top layer" Si(111) substrate with a honeycomb-chained-trimer structure for its first layer. The structures of Au and Ag are in fact quite similar and belong to the same class of structural models. However, small variations in the structural details give rise to quite different observed STM images, as revealed in the theoretical calculations. The electronic charge density from bands around the Fermi level for the (√3×√3)R30° Au/Si(111) surface also gives a good description of the images observed in STM experiments. First-principles calculations are performed to study the electronic and structural properties of a series of Ti-based binary alloys TiFe, TiNi, TiPd, TiMo, and TiAu in the B2 structure. Calculations are also done for Ti in the bcc structure and for hypothetical B2-structured TiAl, TiAg, and TiCu. Our results show correlation between the

  12. Theoretical and Simulations-Based Modeling of Micellization in Linear and Branched Surfactant Systems

    NASA Astrophysics Data System (ADS)

    Mendenhall, Jonathan D.

    Critical micelle concentrations (cmc's) and other micellization properties are predicted for a variety of linear and branched surfactant chemical architectures which are commonly encountered in practice. Single-component surfactant solutions are investigated, in order to clarify the specific contributions of the surfactant head and tail to the free energy of micellization, a quantity which determines the cmc and all other aspects of micellization. First, a molecular-thermodynamic (MT) theory is presented which makes use of bulk-phase thermodynamics and a phenomenological thought process to describe the energetics related to the formation of a micelle from its constituent surfactant monomers. Second, a combined computer-simulation/molecular-thermodynamic (CSMT) framework is discussed which provides a more detailed quantification of the hydrophobic effect using molecular dynamics simulations. A novel computational strategy to identify surfactant head and tail using an iterative dividing-surface approach, along with simulated micelle results, is proposed. Force-field development for novel surfactant structures is also discussed. Third, a statistical-thermodynamic, single-chain, mean-field theory for linear and branched tail packing is formulated, which enables quantification of the specific energetic penalties related to confinement and constraint of surfactant tails within micelles. Finally, these theoretical and simulations-based strategies are used to predict the micellization behavior of 55 linear surfactants and 28 branched surfactants. Critical micelle concentrations and optimal micelle properties are reported and compared with experiment, demonstrating good agreement across a range of surfactant head and tail types. In particular, the CSMT framework is found to provide improved agreement with experimental cmc's for the branched surfactants considered.
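
    A minimal sketch of the central molecular-thermodynamic relation: the free energy of micellization per monomer sets the cmc through x_cmc ≈ exp(g_mic/kT) (the g_mic value below is illustrative, not taken from the thesis):

      # cmc from the free energy of micellization per monomer.
      import math

      kT_kcal = 0.592            # kcal/mol at 298 K
      g_mic   = -5.0             # kcal/mol per monomer, assumed
      x_cmc   = math.exp(g_mic / kT_kcal)   # mole-fraction cmc
      cmc_M   = 55.4 * x_cmc                # times the molarity of water
      print(f"x_cmc = {x_cmc:.2e}, cmc ~ {cmc_M*1e3:.1f} mM")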

  13. Photothermal Deflection Experiments: Comparison of Existing Theoretical Models and Their Applications to Characterization of -Based Thin Films

    NASA Astrophysics Data System (ADS)

    Korte, Dorota; Franko, Mladen

    2014-12-01

    A method for the determination of the thermooptical, transport, and structural parameters of -based thin films is presented. The measurements were conducted using beam deflection spectroscopy (BDS), with supporting theoretical analysis performed in the framework of complex geometrical optics, providing a novel method of BDS data modeling. It was observed that the material's thermal parameters strongly depend on the sample properties that determine its photocatalytic activity, such as the energy bandgap, carrier lifetime, surface structure, or porosity. For this reason, a procedure for fitting the theoretical dependence to the experimental data was developed to determine the sample's thermal parameters, from which information about its structure was then derived. The obtained results were compared with those based on the geometrical and wave optics approaches that are currently widely used for this purpose. It was demonstrated that the choice of a proper model is crucial in this type of analysis.

  14. Theoretical links supporting the use of problem-based learning in the education of the nurse practitioner.

    PubMed

    Chikotas, Noreen Elaine

    2008-01-01

    The need to evaluate current strategies in educating the advanced practice nurse, specifically the nurse practitioner, is becoming more and more imperative due to the ever-changing health care environment. This article addresses the role of problem-based learning (PBL) as an instructional strategy in educating and preparing the nurse practitioner for future practice. Two theoretical frameworks supporting PBL, andragogy and constructivism, are presented as important to the use of PBL in the education of the nurse practitioner.

  15. Frequency recognition in an SSVEP-based brain computer interface using empirical mode decomposition and refined generalized zero-crossing.

    PubMed

    Wu, Chi-Hsun; Chang, Hsiang-Chih; Lee, Po-Lei; Li, Kuen-Shing; Sie, Jyun-Jie; Sun, Chia-Wei; Yang, Chia-Yen; Li, Po-Hung; Deng, Hua-Ting; Shyu, Kuo-Kai

    2011-03-15

    This paper presents an empirical mode decomposition (EMD) and refined generalized zero-crossing (rGZC) approach to achieve frequency recognition in steady-state visual evoked potential (SSVEP)-based brain computer interfaces (BCIs). Six light-emitting diode (LED) flickers with high flickering rates (30, 31, 32, 33, 34, and 35 Hz) functioned as visual stimulators to induce the subjects' SSVEPs. EEG signals recorded in the Oz channel were segmented into data epochs (0.75 s). Each epoch was then decomposed into a series of oscillation components, representing fine-to-coarse information of the signal, called intrinsic mode functions (IMFs). The instantaneous frequencies in each IMF were calculated by refined generalized zero-crossing (rGZC). IMFs with mean instantaneous frequencies (f(GZC)) between 29.5 Hz and 35.5 Hz (i.e., 29.5 ≤ f(GZC) ≤ 35.5 Hz) were designated as SSVEP-related IMFs. Due to the time-locked and phase-locked characteristics of SSVEP, the induced SSVEPs have the same frequency as the visual stimulator being gazed at. The LED flicker that contributed the majority of the frequency content in the SSVEP-related IMFs was chosen as the gaze target. This study tested the proposed system in five male subjects (mean age = 25.4 ± 2.07 y/o). Each subject attempted to activate four virtual commands by inputting a sequence of cursor commands on an LCD screen. The average information transfer rate (ITR) and accuracy were 36.99 bits/min and 84.63%. This study demonstrates that EMD is capable of extracting SSVEP data in an SSVEP-based BCI system.
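
    A minimal sketch of the decomposition-and-tagging pipeline, using the third-party PyEMD package (an assumption; install as EMD-signal) and a plain zero-crossing frequency count in place of the paper's refined GZC estimator:

      # EMD of a toy SSVEP epoch, selection of IMFs whose zero-crossing
      # frequency falls in 29.5-35.5 Hz, and frequency tagging against
      # the candidate flicker rates.
      import numpy as np
      from PyEMD import EMD   # assumed third-party dependency

      FS = 512.0                        # sampling rate (Hz), assumed
      t = np.arange(0, 0.75, 1/FS)      # 0.75 s epoch
      eeg = np.sin(2*np.pi*32*t) + 0.5*np.random.randn(t.size)  # toy 32 Hz SSVEP

      def zc_frequency(x, fs):
          # Mean frequency ~ zero crossings / (2 * duration).
          sign = np.signbit(x).astype(int)
          return np.count_nonzero(np.diff(sign)) * fs / (2.0 * x.size)

      imfs = EMD()(eeg)
      ssvep_imfs = [m for m in imfs if 29.5 <= zc_frequency(m, FS) <= 35.5]

      powers = {}
      for f in (30, 31, 32, 33, 34, 35):
          s, c = np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)
          powers[f] = sum((m @ s)**2 + (m @ c)**2 for m in ssvep_imfs)
      print("detected flicker:", max(powers, key=powers.get), "Hz")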

  16. Single-trial analysis of cortical oscillatory activities during voluntary movements using empirical mode decomposition (EMD)-based spatiotemporal approach.

    PubMed

    Lee, Po-Lei; Shang, Li-Zen; Wu, Yu-Te; Shu, Chih-Hung; Hsieh, Jen-Chuen; Lin, Yung-Yang; Wu, Chi-Hsun; Liu, Yu-Lu; Yang, Chia-Yen; Sun, Chia-Wei; Shyu, Kuo-Kai

    2009-08-01

    This study presents a method based on empirical mode decomposition (EMD) and a spatial template-based matching approach to extract sensorimotor oscillatory activities from multi-channel magnetoencephalographic (MEG) measurements during right index finger lifting. The longitudinal gradiometer of the sensor unit presenting the most prominent somatosensory evoked field (SEF) was selected, and each single-trial recording on it was decomposed into a set of intrinsic mode functions (IMFs). The correlations between each IMF of the selected channel and the raw data on the other channels were computed and represented as a spatial map. The sensorimotor-related IMFs, whose correlational spatial maps exhibit large values over the primary sensorimotor area (SMI), were selected via a spatial-template matching process. Trial-specific alpha and beta bands were determined in the sensorimotor-related oscillatory activities using a two-spectrum comparison between the spectra obtained from the baseline period (-4 to -3 s) and the movement-onset period (-0.5 to 0.5 s). Sensorimotor-related oscillatory activities were filtered within the trial-specific frequency bands to resolve task-related oscillatory activities. Results demonstrated that optimal phase and amplitude information was preserved, not only for alpha suppression (event-related desynchronization) and beta rebound (event-related synchronization) but also for profound analysis of subtle dynamics across trials. The retention of a high SNR in the extracted oscillatory activities allows various methods of source estimation to be applied to study the intricate brain dynamics of motor control mechanisms. The present study enables the investigation of the cortical pathophysiology of movement disorders on a trial-by-trial basis, and it also provides an effective alternative for participants or patients who cannot endure lengthy procedures or are incapable of sustaining long experiments.
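
    A minimal sketch of the correlational spatial-map step: each IMF of the chosen channel is correlated against the raw signal on every channel, and an IMF is kept when its map peaks over (here, hypothetical) sensorimotor channels; all data are simulated:

      # Spatial-template selection of sensorimotor-related IMFs.
      import numpy as np

      rng = np.random.default_rng(6)
      n_ch, n_t = 20, 1000
      raw = rng.normal(size=(n_ch, n_t))    # stand-in multi-channel epoch
      smi = [7, 8, 9]                       # channels over SMI, assumed

      t = np.arange(n_t) / 1000.0
      rhythm = np.sin(2*np.pi*20*t)         # shared 20 Hz rhythm
      raw[smi] += rhythm
      seed_imfs = np.vstack([rhythm + 0.2*rng.normal(size=n_t),
                             rng.normal(size=n_t)])   # stand-in IMFs

      for k, imf in enumerate(seed_imfs):
          spatial_map = np.array([np.corrcoef(imf, ch)[0, 1] for ch in raw])
          peak = int(np.argmax(np.abs(spatial_map)))
          print(f"IMF {k}: peak channel {peak}, selected: {peak in smi}")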

  17. Assessing changes to South African maize production areas in 2055 using empirical and process-based crop models

    NASA Astrophysics Data System (ADS)

    Estes, L.; Bradley, B.; Oppenheimer, M.; Beukes, H.; Schulze, R. E.; Tadross, M.

    2010-12-01

    Rising temperatures and altered precipitation patterns associated with climate change pose a significant threat to crop production, particularly in developing countries. In South Africa, a semi-arid country with a diverse agricultural sector, anthropogenic climate change is likely to affect staple crops and decrease food security. Here, we focus on maize production, South Africa’s most widely grown crop and one with high socio-economic value. We build on previous coarser-scaled studies by working at a finer spatial resolution and by employing two different modeling approaches: the process-based DSSAT Cropping System Model (CSM, version 4.5) and an empirical distribution model (Maxent). For climate projections, we use an ensemble of 10 general circulation models (GCMs) run under both high and low CO2 emissions scenarios (SRES A2 and B1). The models were downscaled to historical climate records for 5838 quinary-scale catchments covering South Africa (mean area = 164.8 km2), using a technique based on self-organizing maps (SOMs) that generates precipitation patterns more consistent with observed gradients than those produced by the parent GCMs. Soil hydrological and mechanical properties were derived from textural and compositional data linked to a map of 26422 land forms (mean area = 46 km2), while organic carbon from 3377 soil profiles was mapped using regression kriging with 8 spatial predictors. CSM was run using typical management parameters for several major dryland maize production regions and with projected CO2 values. The Maxent distribution model was trained on maize locations identified from annual phenology derived from satellite images, coupled with airborne crop-sampling observations. Temperature and precipitation projections were based on GCM output, with an additional 10% increase in precipitation to simulate higher water-use efficiency under future CO2 concentrations. The two modeling approaches provide spatially explicit projections of

  18. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer problem with measurement error only.
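
    The record does not give the side-computation itself; the sketch below only contrasts the formal covariance of a weighted batch least squares solution with an empirical covariance obtained by Monte Carlo when an unmodeled bias is present, which is the kind of effect the proposed matrix is meant to capture:

      # Formal vs. empirical state error covariance for weighted batch
      # least squares with an unmodeled measurement bias.
      import numpy as np

      rng = np.random.default_rng(7)
      m, n, sigma = 40, 3, 0.1
      A = rng.normal(size=(m, n))                  # observation partials
      W = np.eye(m) / sigma**2                     # weights (assumed noise)
      G = np.linalg.solve(A.T @ W @ A, A.T @ W)    # batch estimator gain
      P_formal = np.linalg.inv(A.T @ W @ A)        # formal covariance

      x_true = np.array([1.0, -2.0, 0.5])
      bias = 0.05                                  # unmodeled, on every obs
      errs = np.array([G @ (A @ x_true + rng.normal(0, sigma, m) + bias)
                       - x_true for _ in range(2000)])
      # Second moment about zero, so systematic error is included.
      P_emp = errs.T @ errs / errs.shape[0]

      print("formal sigmas:   ", np.sqrt(np.diag(P_formal)).round(4))
      print("empirical sigmas:", np.sqrt(np.diag(P_emp)).round(4))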

  19. Theoretical performance of solar cell based on mini-bands quantum dots

    SciTech Connect

    Aly, Abou El-Maaty M. E-mail: ashraf.nasr@gmail.com; Nasr, A. E-mail: ashraf.nasr@gmail.com

    2014-03-21

    The tremendous amount of research in solar energy is directed toward the intermediate band solar cell because of its advantages over the conventional solar cell. The latter has lower efficiency because photons with energy below the bandgap cannot excite mobile carriers from the valence band to the conduction band. If, on the other hand, a mini intermediate band is introduced between the valence and conduction bands, then lower-energy photons can also promote charge carriers to the conduction band, and thereby the total current increases while a large open-circuit voltage is maintained. In this article, the influence of the new band on the power conversion efficiency of a quantum dot intermediate band solar cell structure is theoretically investigated. The time-independent Schrödinger equation is used to determine the optimum width and location of the intermediate band. Accordingly, the achievement of maximum efficiency by changing the quantum dot width and barrier distances is studied, and the power conversion efficiency is determined theoretically for two different ranges of QD width. From the obtained results, the maximum power conversion efficiency is about 70.42%, obtained for a simple cubic quantum dot crystal under fully concentrated light; it depends strongly on the quantum dot width and barrier distances.
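
    A minimal sketch of the kind of bound-state calculation involved: the lowest level of a one-dimensional finite well, solved by finite differences, as its position varies with well width (effective mass and barrier height are assumed values, not taken from the article):

      # Ground level of a 1-D finite square well vs. well width, via a
      # tridiagonal finite-difference Hamiltonian.
      import numpy as np

      HBAR2_2M = 3.81 / 0.067   # hbar^2/(2 m*) in eV*A^2, m* = 0.067 m_e
      V0 = 0.4                  # barrier height (eV), assumed

      def ground_level(width, L=400.0, N=800):
          x = np.linspace(-L/2, L/2, N)
          dx = x[1] - x[0]
          V = np.where(np.abs(x) < width/2, 0.0, V0)
          main = 2*HBAR2_2M/dx**2 + V
          off = -HBAR2_2M/dx**2 * np.ones(N - 1)
          H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
          return np.linalg.eigvalsh(H)[0]

      for w in (20, 40, 60, 80):   # well width in Angstrom
          print(f"width = {w:3d} A -> E0 = {ground_level(w):.3f} eV")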

  20. Theoretical and experimental investigation of generating pulsed Bessel-Gauss beams by using an axicon-based resonator.

    PubMed

    Parsa, Shahrzad; Fallah, Hamid Reza; Ramezani, Mohsen; Soltanolkotabi, Mahmood

    2012-11-01

    Nondiffracting Bessel-Gauss beams can be described as the superposition of an infinite number of Gaussian beams whose wave vectors lie on a cone. Based on such a description, different methods have been suggested to generate these fields. In this paper, we followed an active scheme to generate these beams. By introducing an axicon-based resonator, we designed the appropriate resonator, studied its resonance modes, and analyzed the beam propagation outside the resonator. Experimentally, we succeeded in obtaining Bessel-Gauss beams of the first kind and zero order. We also investigated the effects of the relevant parameters on the output beam, both theoretically and experimentally.
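
    A minimal sketch of the transverse profile implied by the cone-of-wave-vectors description: a zero-order Bessel function under a Gaussian envelope, with the radial wavenumber set by the cone angle (all parameter values are illustrative):

      # Zero-order Bessel-Gauss intensity, E(r) = J0(k_r r) exp(-r^2/w0^2).
      import numpy as np
      from scipy.special import j0

      wavelength = 632.8e-9          # m, assumed He-Ne line
      gamma = np.deg2rad(0.5)        # cone half-angle, assumed
      w0 = 1.0e-3                    # Gaussian waist (m), assumed

      k_r = 2*np.pi/wavelength * np.sin(gamma)
      for r in np.linspace(0.0, 2.0e-3, 5):
          I = (j0(k_r*r) * np.exp(-(r/w0)**2))**2
          print(f"r = {r*1e3:4.2f} mm -> I/I0 = {I:.4f}")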