Science.gov

Sample records for empirically based theoretical

  1. Generalized Constitutive-Based Theoretical and Empirical Models for Hot Working Behavior of Functionally Graded Steels

    NASA Astrophysics Data System (ADS)

    Vanini, Seyed Ali Sadough; Abolghasemzadeh, Mohammad; Assadi, Abbas

    2013-07-01

Functionally graded steels with graded ferritic and austenitic regions, including bainite and martensite intermediate layers, produced by electroslag remelting have attracted much attention in recent years. In this article, an empirical model based on the Zener-Hollomon (Z-H) constitutive equation with generalized material constants is presented to investigate the effects of temperature and strain rate on the hot working behavior of functionally graded steels. Next, a theoretical model, generalized by strain compensation, is developed for the flow stress estimation of functionally graded steels under hot compression based on the phase mixture rule and boundary layer characteristics. The model is used for different strains and grading configurations. Specifically, for αβγMγ steels, the results of the empirical and theoretical models showed excellent agreement, within acceptable error, with the experimental results reported in other references.
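The Zener-Hollomon parameter behind this empirical model can be sketched in a few lines; the activation energy and the A, α, n constants below are illustrative placeholders, not the generalized constants fitted in the article, and the sine-hyperbolic flow-stress law is the commonly used form, assumed here rather than taken from the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate, temp_k, q_act=300e3):
    """Z = strain_rate * exp(Q / (R*T)); Q is a hot-deformation activation energy (J/mol)."""
    return strain_rate * math.exp(q_act / (R * temp_k))

def flow_stress_mpa(z, a=1e10, alpha=0.012, n=5.0):
    """Invert the common sine-hyperbolic law Z = A * sinh(alpha*sigma)**n for sigma (MPa)."""
    return math.asinh((z / a) ** (1.0 / n)) / alpha

# Higher deformation temperature lowers Z and hence the predicted flow stress.
z_cold = zener_hollomon(1.0, 1173.0)  # strain rate 1/s at 1173 K
z_hot = zener_hollomon(1.0, 1373.0)   # same strain rate at 1373 K
```

With these placeholder constants the colder deformation condition yields the larger Z and the larger predicted flow stress, which is the qualitative behavior the Z-H formulation captures.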

  2. Distributed optical fiber-based theoretical and empirical methods monitoring hydraulic engineering subjected to seepage velocity

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Tian, Shiguang; Cui, Shusheng; Yang, Meng; Wen, Zhiping; Xie, Wei

    2016-09-01

To systematically investigate general principles and methods for monitoring seepage velocity in hydraulic engineering, theoretical analyses and physical experiments were carried out based on distributed fiber-optic temperature sensing (DTS) technology. To analyze the coupling between the seepage field and the temperature field in embankment dam or dike engineering, a simplified model was constructed to describe the relationship between the two fields. Different optical-fiber arrangement schemes and temperature-measurement approaches were applied to the model, and inversion analysis was then employed. On this basis, a theoretical method for monitoring seepage velocity in hydraulic engineering was proposed. A new concept, the effective thermal conductivity, was introduced by analogy with the thermal conductivity coefficient in the transient hot-wire method. This quantity reflects the combined influence of heat conduction and seepage, and proved to be a promising basis for an empirical method of monitoring seepage velocity in hydraulic engineering.
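The transient hot-wire analogy behind the "effective thermal conductivity" can be sketched as a slope fit of temperature rise against log time; the heating power and synthetic readings below are hypothetical, not the paper's experimental data.

```python
import math

def effective_thermal_conductivity(q_w_per_m, times_s, temp_rise_k):
    """Transient hot-wire estimate: dT(t) ~ (q / (4*pi*lam)) * ln(t) + c,
    so lam_eff = q / (4*pi*slope), with the slope taken from a least-squares
    fit of temperature rise against ln(t). q is heating power per unit length."""
    x = [math.log(t) for t in times_s]
    n = len(x)
    mx, my = sum(x) / n, sum(temp_rise_k) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, temp_rise_k)) \
        / sum((xi - mx) ** 2 for xi in x)
    return q_w_per_m / (4 * math.pi * slope)
```

In the monitoring context, seepage carries heat away from the fiber, so the apparent lam_eff rises above the conduction-only value; that excess is what an empirical calibration can map to seepage velocity.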

  3. Outcome (competency) based education: an exploration of its origins, theoretical basis, and empirical evidence.

    PubMed

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-10-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the underpinnings of OBE: its historical origins, theoretical basis, and empirical evidence of its effects in order to answer the question: How can predetermined learning outcomes influence undergraduate medical education? This literature review had three components: A review of historical landmarks in the evolution of OBE; a review of conceptual frameworks and theories; and a systematic review of empirical publications from 1999 to 2010 that reported data concerning the effects of learning outcomes on undergraduate medical education. OBE had its origins in behaviourist theories of learning. It is tightly linked to the assessment and regulation of proficiency, but less clearly linked to teaching and learning activities. Over time, there have been cycles of advocacy for, then criticism of, OBE. A recurring critique concerns the place of complex personal and professional attributes as "competencies". OBE has been adopted by consensus in the face of weak empirical evidence. OBE, which has been advocated for over 50 years, can contribute usefully to defining requisite knowledge and skills, and blueprinting assessments. Its applicability to more complex aspects of clinical performance is not clear. OBE, we conclude, provides a valuable approach to some, but not all, important aspects of undergraduate medical education. PMID:22987194

  4. A Review of Theoretical and Empirical Advancements

    ERIC Educational Resources Information Center

    Wang, Mo; Henkens, Kene; van Solinge, Hanna

    2011-01-01

    In this article, we review both theoretical and empirical advancements in retirement adjustment research. After reviewing and integrating current theories about retirement adjustment, we propose a resource-based dynamic perspective to apply to the understanding of retirement adjustment. We then review empirical findings that are associated with…

  5. Theoretical and empirical bases for dialect-neutral language assessment: contributions from theoretical and applied linguistics to communication disorders.

    PubMed

    Pearson, Barbara Zurer

    2004-02-01

    Three avenues of theoretical research provide insights for discovering abstract properties of language that are subject to disorder and amenable to assessment: (1) the study of universal grammar and its acquisition; (2) descriptions of African American English (AAE) Syntax, Semantics, and Phonology within theoretical linguistics; and (3) the study of specific language impairment (SLI) cross-linguistically. Abstract linguistic concepts were translated into a set of assessment protocols that were used to establish normative data on language acquisition (developmental milestones) in typically developing AAE children ages 4 to 9 years. Testing AAE-speaking language impaired (LI) children and both typically developing (TD) and LI Mainstream American English (MAE)-learning children on these same measures provided the data to select assessments for which (1) TD MAE and AAE children performed the same, and (2) TD performance was reliably different from LI performance in both dialect groups. PMID:15088229

  6. Outcome (Competency) Based Education: An Exploration of Its Origins, Theoretical Basis, and Empirical Evidence

    ERIC Educational Resources Information Center

    Morcke, Anne Mette; Dornan, Tim; Eika, Berit

    2013-01-01

    Outcome based or competency based education (OBE) is so firmly established in undergraduate medical education that it might not seem necessary to ask why it was included in recommendations for the future, like the Flexner centenary report. Uncritical acceptance may not, however, deliver its greatest benefits. Our aim was to explore the…

  7. Theoretical and Empirical Base for Implementation Components of Health-Promoting Schools

    ERIC Educational Resources Information Center

    Samdal, Oddrun; Rowling, Louise

    2011-01-01

    Purpose: Efforts to create a scientific base for the health-promoting school approach have so far not articulated a clear "Science of Delivery". There is thus a need for systematic identification of clearly operationalised implementation components. To address a next step in the refinement of the health-promoting schools' work, this paper sets out…

  8. Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process

    ERIC Educational Resources Information Center

    Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.

    2014-01-01

    In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…

  9. Theoretical and Empirical Descriptions of Thermospheric Density

    NASA Astrophysics Data System (ADS)

    Solomon, S. C.; Qian, L.

    2004-12-01

The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of change in the ephemeris of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.

  10. Theoretical modeling of stream potholes based upon empirical observations from the Orange River, Republic of South Africa

    NASA Astrophysics Data System (ADS)

    Springer, Gregory S.; Tooth, Stephen; Wohl, Ellen E.

    2006-12-01

Potholes carved into streambeds can be important components of channel incision, but they have received little quantitative attention. Here, empirical evidence from three sites along the Orange River, Republic of South Africa, demonstrates that pothole radius and depth are strongly correlated through a simple power law. Where radius is the dependent variable, the exponent of the power law describes the rate of increase in radius with increasing depth. Erosion within potholes is complexly related to erosion on the adjacent bed. Erosion efficiencies within small, hemispherical potholes must be high if the potholes are to survive in the face of bed translation (incision). As potholes deepen, however, the necessary efficiencies decline rapidly. Increasing concavity associated with growth imposes stricter constraints; comparatively deep potholes must erode orders of magnitude larger volumes of substrate than shallower potholes in the face of bed retreat. Hemispherical potholes are eventually converted to cylindrical potholes, the geometries of which favor enlargement while they are small. Geometric models constructed using the power law show unambiguously that more substrate is eroded by volume from cylindrical pothole walls during growth than from cylindrical pothole floors. Grinders thus play a secondary role to suspended sediment entrained within the vortices that occur in potholes. Continued growth leads to coalescence with other potholes or destruction through block detachment depending on local geology. The combination of geology and erosion mechanisms may determine whether a strath or inner channel develops as a consequence of the process.
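A radius-depth power law of this kind is typically recovered with an ordinary log-log least-squares fit; the measurements below are synthetic stand-ins, not the Orange River data.

```python
import math

def fit_power_law(depths, radii):
    """Fit r = a * d**b by least squares in log-log space; returns (a, b).
    b is the exponent describing how fast radius grows with depth."""
    xs = [math.log(d) for d in depths]
    ys = [math.log(r) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical pothole measurements (depth, radius) in metres,
# generated from an exact power law for illustration:
depths = [0.1, 0.2, 0.5, 1.0, 2.0]
radii = [2.0 * d ** 0.7 for d in depths]
a, b = fit_power_law(depths, radii)
```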

  11. Semivolatile Organic Compounds in Homes: Strategies for Efficient and Systematic Exposure Measurement Based on Empirical and Theoretical Factors

    PubMed Central

    2014-01-01

    Residential exposure can dominate total exposure for commercial chemicals of health concern; however, despite the importance of consumer exposures, methods for estimating household exposures remain limited. We collected house dust and indoor air samples in 49 California homes and analyzed for 76 semivolatile organic compounds (SVOCs)—phthalates, polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and pesticides. Sixty chemicals were detected in either dust or air and here we report 58 SVOCs detected in dust for the first time. In dust, phthalates (bis(2-ethylhexyl) phthalate, benzyl butyl phthalate, di-n-butyl phthalate) and flame retardants (PBDE 99, PBDE 47) were detected at the highest concentrations relative to other chemicals at the 95th percentile, while phthalates were highest at the median. Because SVOCs are found in both gas and condensed phases and redistribute from their original source over time, partitioning models can clarify their fate indoors. We use empirical data to validate air-dust partitioning models and use these results, combined with experience in SVOC exposure assessment, to recommend residential exposure measurement strategies. We can predict dust concentrations reasonably well from measured air concentrations (R2 = 0.80). Partitioning models and knowledge of chemical Koa elucidate exposure pathways and suggest priorities for chemical regulation. These findings also inform study design by allowing researchers to select sampling approaches optimized for their chemicals of interest and study goals. While surface wipes are commonly used in epidemiology studies because of ease of implementation, passive air sampling may be more standardized between homes and also relatively simple to deploy. Validation of passive air sampling methods for SVOCs is a priority. PMID:25488487
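The quoted air-to-dust prediction (R² = 0.80) amounts to a regression in log space; a minimal sketch of such a fit, on hypothetical paired air and dust concentrations rather than the study's measurements, looks like this.

```python
import math

def loglog_fit_r2(air, dust):
    """Regress log10(dust) on log10(air); return (slope, intercept, r_squared).
    r_squared measures how well air concentrations predict dust concentrations."""
    xs = [math.log10(v) for v in air]
    ys = [math.log10(v) for v in dust]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, (sxy * sxy) / (sxx * syy)

# Hypothetical measurements spanning several orders of magnitude (ng/m3, ng/g):
air = [1.0, 10.0, 100.0, 1000.0]
dust = [2.0 * v ** 0.9 for v in air]  # exact relationship for illustration
slope, intercept, r2 = loglog_fit_r2(air, dust)
```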

  12. Competence and drug use: theoretical frameworks, empirical evidence and measurement.

    PubMed

    Lindenberg, C S; Solorzano, R; Kelley, M; Darrow, V; Gendrop, S C; Strickland, O

    1998-01-01

    Statistics show that use of harmful substances (alcohol, cigarettes, marijuana, cocaine) among women of childbearing age is widespread and serious. Numerous theoretical models and empirical studies have attempted to explain the complex factors that lead individuals to use drugs. The Social Stress Model of Substance Abuse [1] is one model developed to explain parameters that influence drug use. According to the model, the likelihood of an individual engaging in drug use is seen as a function of the stress level and the extent to which it is offset by stress modifiers such as social networks, social competencies, and resources. The variables of the denominator are viewed as interacting with each other to buffer the impact of stress [1]. This article focuses on one of the constructs in this model: that of competence. It presents a summary of theoretical and conceptual formulations for the construct of competence, a review of empirical evidence for the association of competence with drug use, and describes the preliminary development of a multi-scale instrument designed to assess drug protective competence among low-income Hispanic childbearing women. Based upon theoretical and empirical studies, eight domains of drug protective competence were identified and conceptually defined. Using subscales from existing instruments with psychometric evidence for their validity and reliability, a multi-scale instrument was developed to assess drug protective competence. Hypothesis testing was used to assess construct validity. Four drug protective competence domains (social influence, sociability, self-worth, and control/responsibility) were found to be statistically associated with drug use behaviors. Although not statistically significant, expected trends were observed between drug use and the other four domains of drug protective competence (intimacy, nurturance, goal directedness, and spiritual directedness). 
Study limitations and suggestions for further psychometric testing

  13. Empirical and theoretical analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan

structures evolve on a similar timescale to individual level transmission, we investigated the process of transmission through a model population comprising social groups which follow simple dynamical rules for growth and break-up, and the profiles produced bear a striking resemblance to empirical data obtained from social, financial and biological systems. Finally, for better implementation of a widely accepted power law test algorithm, we have developed a fast testing procedure using parallel computation.
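A standard building block of the widely accepted power-law test mentioned above is the continuous maximum-likelihood exponent estimate (Clauset-Shalizi-Newman style); the sketch below takes the lower cutoff xmin as given rather than scanning for it, and uses synthetic draws rather than the thesis data.

```python
import math
import random

def powerlaw_alpha_mle(samples, xmin):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x / xmin)),
    taken over the tail samples with x >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Draw from a pure power law by inverse-transform sampling, then recover alpha.
random.seed(0)
alpha_true, xmin = 2.5, 1.0
draws = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_alpha_mle(draws, xmin)
```

A full test additionally bootstraps a Kolmogorov-Smirnov statistic over many synthetic datasets, which is the embarrassingly parallel step that benefits from parallel computation.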

  14. The Generality of Empirical and Theoretical Explanations of Behavior

    PubMed Central

    Guilhardi, Paulo; Church, Russell M.

    2009-01-01

    For theoretical explanations of data, parameter values estimated from a single dependent measure from one procedure are used to predict alternative dependent measures from many procedures. Theoretical explanations were compared to empirical explanations of data in which known functions and principles were used to fit only selected dependent measures. The comparison focused on the ability of theoretical and empirical explanations to generalize across samples of the data, across dependent measures of behavior, and across different procedures. Rat and human data from fixed-interval and peak procedures, in which principles (e.g., scalar timing) are well known, were described and fit by a theory with independent modules for perception, memory, and decision. The theoretical approach consisted of fitting closed-form equations of the theory to response rate gradients calculated from the data, simulating responses using parameter values previously estimated, and comparing theoretical predictions with dependent measures not used to estimate parameters. Although the empirical and theoretical explanations provided similar fits to the response rate gradients that generalized across samples and had the same number of parameters, only the theoretical explanation generalized across procedures and dependent measures. PMID:19429213

  15. Physical Violence between Siblings: A Theoretical and Empirical Analysis

    ERIC Educational Resources Information Center

    Hoffman, Kristi L.; Kiecolt, K. Jill; Edwards, John N.

    2005-01-01

    This study develops and tests a theoretical model to explain sibling violence based on the feminist, conflict, and social learning theoretical perspectives and research in psychology and sociology. A multivariate analysis of data from 651 young adults generally supports hypotheses from all three theoretical perspectives. Males with brothers have…

  16. Empirical and theoretical models of terrestrial trapped radiation

    SciTech Connect

    Panasyuk, M.I.

    1996-07-01

A survey of current Skobeltsyn Institute of Nuclear Physics, Moscow State University (INP MSU) empirical and theoretical models of particles (electrons, protons, and heavier ions) of the Earth's radiation belts developed to date is presented. Results of intercomparison of the different models, as well as comparison with experimental data, are reported. Aspects of further development of radiation condition modelling in near-Earth space are discussed. © 1996 American Institute of Physics.

  17. Segmented crystalline scintillators: empirical and theoretical investigation of a high quantum efficiency EPID based on an initial engineering prototype CsI(Tl) detector.

    PubMed

    Sawant, Amit; Antonuk, Larry E; El-Mohri, Youcef; Zhao, Qihua; Wang, Yi; Li, Yixin; Du, Hong; Perna, Louis

    2006-04-01

Modern-day radiotherapy relies on highly sophisticated forms of image guidance in order to implement increasingly conformal treatment plans and achieve precise dose delivery. One of the most important goals of such image guidance is to delineate the clinical target volume from surrounding normal tissue during patient setup and dose delivery, thereby avoiding dependence on surrogates such as bony landmarks. In order to achieve this goal, it is necessary to integrate highly efficient imaging technology, capable of resolving soft-tissue contrast at very low doses, within the treatment setup. In this paper we report on the development of one such modality, which comprises a nonoptimized, prototype electronic portal imaging device (EPID) based on a 40 mm thick, segmented crystalline CsI(Tl) detector incorporated into an indirect-detection active matrix flat panel imager (AMFPI). The segmented detector consists of a matrix of 160 × 160 optically isolated, crystalline CsI(Tl) elements spaced at 1016 µm pitch. The detector was coupled to an indirect detection-based active matrix array having a pixel pitch of 508 µm, with each detector element registered to 2 × 2 array pixels. The performance of the prototype imager was evaluated under very low-dose radiotherapy conditions and compared to that of a conventional megavoltage AMFPI based on a Lanex Fast-B phosphor screen. Detailed quantitative measurements were performed in order to determine the x-ray sensitivity, modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE). In addition, images of a contrast-detail phantom and an anthropomorphic head phantom were also acquired. The prototype imager exhibited approximately 22 times higher zero-frequency DQE (approximately 22%) compared to that of the conventional AMFPI (approximately 1%). The measured zero-frequency DQE was found to be lower than theoretical upper limits (approximately 27%) calculated from Monte Carlo simulations, which

  18. Segmented crystalline scintillators: Empirical and theoretical investigation of a high quantum efficiency EPID based on an initial engineering prototype CsI(Tl) detector

    SciTech Connect

    Sawant, Amit; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Wang, Yi; Li, Yixin; Du, Hong; Perna, Louis

    2006-04-15

Modern-day radiotherapy relies on highly sophisticated forms of image guidance in order to implement increasingly conformal treatment plans and achieve precise dose delivery. One of the most important goals of such image guidance is to delineate the clinical target volume from surrounding normal tissue during patient setup and dose delivery, thereby avoiding dependence on surrogates such as bony landmarks. In order to achieve this goal, it is necessary to integrate highly efficient imaging technology, capable of resolving soft-tissue contrast at very low doses, within the treatment setup. In this paper we report on the development of one such modality, which comprises a nonoptimized, prototype electronic portal imaging device (EPID) based on a 40 mm thick, segmented crystalline CsI(Tl) detector incorporated into an indirect-detection active matrix flat panel imager (AMFPI). The segmented detector consists of a matrix of 160 × 160 optically isolated, crystalline CsI(Tl) elements spaced at 1016 µm pitch. The detector was coupled to an indirect detection-based active matrix array having a pixel pitch of 508 µm, with each detector element registered to 2 × 2 array pixels. The performance of the prototype imager was evaluated under very low-dose radiotherapy conditions and compared to that of a conventional megavoltage AMFPI based on a Lanex Fast-B phosphor screen. Detailed quantitative measurements were performed in order to determine the x-ray sensitivity, modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE). In addition, images of a contrast-detail phantom and an anthropomorphic head phantom were also acquired. The prototype imager exhibited approximately 22 times higher zero-frequency DQE (~22%) compared to that of the conventional AMFPI (~1%). The measured zero-frequency DQE was found to be lower than theoretical upper limits (~27%) calculated from Monte Carlo simulations, which were based solely on the x
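The zero-frequency DQE figures compared in these two records follow the standard frequency-dependent definition, DQE(f) as output SNR² over the input SNR² of the incident quanta; the arrays below are hypothetical illustrations, not the measured CsI(Tl) data.

```python
def dqe(mtf, nps, mean_signal, fluence):
    """Standard-form sketch: DQE(f) = (mean_signal * MTF(f))**2 / (fluence * NPS(f)),
    with mean_signal the linearized mean pixel value and fluence the incident
    photon count per unit area (assumed units consistent with NPS)."""
    return [(mean_signal * m) ** 2 / (fluence * n) for m, n in zip(mtf, nps)]

# An ideal detector (MTF = 1, NPS at the quantum-noise floor) gives DQE = 1;
# real megavoltage imagers sit far below this, e.g. ~0.22 for the prototype above.
ideal = dqe([1.0, 1.0], [0.25, 0.25], mean_signal=0.5, fluence=1.0)
```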

  19. Empirical STORM-E Model. [I. Theoretical and Observational Basis

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III

    2013-01-01

Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER are fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
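The linear impulse-response framework used to fit STORM-E can be sketched as a discrete convolution of a geomagnetic-index history with a response filter; the three filter weights and index values below are hypothetical, not the fitted STORM-E coefficients.

```python
def storm_correction(index_history, impulse_response):
    """Discrete linear impulse response: c(t) = sum_k h[k] * ap[t - k],
    where index_history is ordered oldest-first and h[0] weights the
    most recent index value."""
    recent = index_history[::-1][:len(impulse_response)]
    return sum(h * x for h, x in zip(impulse_response, recent))

# Hypothetical 3-tap filter applied to the last three ap-index values:
correction = storm_correction([4.0, 7.0, 15.0, 27.0], [0.02, 0.01, 0.005])
```

The fitted weights decay with lag, so the correction relaxes back toward the quiet-time value as the storm driving subsides.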

  20. Converging Paradigms: A Reflection on Parallel Theoretical Developments in Psychoanalytic Metapsychology and Empirical Dream Research.

    PubMed

    Schmelowszky, Ágoston

    2016-08-01

    In the last decades one can perceive a striking parallelism between the shifting perspective of leading representatives of empirical dream research concerning their conceptualization of dreaming and the paradigm shift within clinically based psychoanalytic metapsychology with respect to its theory on the significance of dreaming. In metapsychology, dreaming becomes more and more a central metaphor of mental functioning in general. The theories of Klein, Bion, and Matte-Blanco can be considered as milestones of this paradigm shift. In empirical dream research, the competing theories of Hobson and of Solms respectively argued for and against the meaningfulness of the dream-work in the functioning of the mind. In the meantime, empirical data coming from various sources seemed to prove the significance of dream consciousness for the development and maintenance of adaptive waking consciousness. Metapsychological speculations and hypotheses based on empirical research data seem to point in the same direction, promising for contemporary psychoanalytic practice a more secure theoretical base. In this paper the author brings together these diverse theoretical developments and presents conclusions regarding psychoanalytic theory and technique, as well as proposing an outline of an empirical research plan for testing the specificity of psychoanalysis in developing dream formation. PMID:27500705

  1. Whole-body cryotherapy: empirical evidence and theoretical perspectives

    PubMed Central

    Bleakley, Chris M; Bieuzen, François; Davison, Gareth W; Costello, Joseph T

    2014-01-01

    Whole-body cryotherapy (WBC) involves short exposures to air temperatures below −100°C. WBC is increasingly accessible to athletes, and is purported to enhance recovery after exercise and facilitate rehabilitation postinjury. Our objective was to review the efficacy and effectiveness of WBC using empirical evidence from controlled trials. We found ten relevant reports; the majority were based on small numbers of active athletes aged less than 35 years. Although WBC produces a large temperature gradient for tissue cooling, the relatively poor thermal conductivity of air prevents significant subcutaneous and core body cooling. There is weak evidence from controlled studies that WBC enhances antioxidant capacity and parasympathetic reactivation, and alters inflammatory pathways relevant to sports recovery. A series of small randomized studies found WBC offers improvements in subjective recovery and muscle soreness following metabolic or mechanical overload, but little benefit towards functional recovery. There is evidence from one study only that WBC may assist rehabilitation for adhesive capsulitis of the shoulder. There were no adverse events associated with WBC; however, studies did not seem to undertake active surveillance of predefined adverse events. Until further research is available, athletes should remain cognizant that less expensive modes of cryotherapy, such as local ice-pack application or cold-water immersion, offer comparable physiological and clinical effects to WBC. PMID:24648779

  3. Ability and Learning: A Theoretical and Empirical Synthesis.

    ERIC Educational Resources Information Center

    Haertel, Geneva D.; Walberg, Herbert J.

    To gauge the relationship between intellectual ability and learning, the authors review the work of 20 theorists and analyze empirical correlations at both the elementary and secondary school levels. Intellectual ability is defined in the paper as including intelligence, prior learning, special aptitudes, and other cognitive characteristics. The…

  4. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    NASA Technical Reports Server (NTRS)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framework is outlined for estimating social returns from research and application of remote sensing. The approximate dollar magnitude is given of a particular application of remote sensing, namely estimates of corn, soybean, and wheat production. Finally, some comments are made on the limitations of this procedure and on the implications of the results.

  5. Alternative Information Theoretic Measures of Television Messages: An Empirical Test.

    ERIC Educational Resources Information Center

    Danowski, James A.

    This research examines two information theoretic measures of media exposure within the same sample of respondents and examines their relative strengths in predicting self-reported aggression. The first measure is the form entropy (DYNUFAM) index of Watt and Krull, which assesses the structural and organizational properties of specific television…

  6. The ascent of man: Theoretical and empirical evidence for blatant dehumanization.

    PubMed

    Kteily, Nour; Bruneau, Emile; Waytz, Adam; Cotterill, Sarah

    2015-11-01

    Dehumanization is a central concept in the study of intergroup relations. Yet although theoretical and methodological advances in subtle, "everyday" dehumanization have progressed rapidly, blatant dehumanization remains understudied. The present research attempts to refocus theoretical and empirical attention on blatant dehumanization, examining when and why it provides explanatory power beyond subtle dehumanization. To accomplish this, we introduce and validate a blatant measure of dehumanization based on the popular depiction of evolutionary progress in the "Ascent of Man." We compare blatant dehumanization to established conceptualizations of subtle and implicit dehumanization, including infrahumanization, perceptions of human nature and human uniqueness, and implicit associations between ingroup-outgroup and human-animal concepts. Across 7 studies conducted in 3 countries, we demonstrate that blatant dehumanization is (a) more strongly associated with individual differences in support for hierarchy than subtle or implicit dehumanization, (b) uniquely predictive of numerous consequential attitudes and behaviors toward multiple outgroup targets, (c) predictive above prejudice, and (d) reliable over time. Finally, we show that blatant, but not subtle, dehumanization spikes immediately after incidents of real intergroup violence and strongly predicts support for aggressive actions like torture and retaliatory violence (after the Boston Marathon bombings and Woolwich attacks in England). This research extends theory on the role of dehumanization in intergroup relations and intergroup conflict and provides an intuitive, validated empirical tool to reliably measure blatant dehumanization. PMID:26121523

  7. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    SciTech Connect

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-priced contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-priced contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.

  8. Submarine gas hydrate estimation: Theoretical and empirical approaches

    SciTech Connect

    Ginsburg, G.D.; Soloviev, V.A.

    1995-12-01

    The published submarine gas hydrate resource estimates are based on the concepts of their continuous extent over large areas and depth intervals and/or the regionally high hydrate concentrations in sediments. The observational data are in conflict with these concepts. At present such estimates cannot be made to an accuracy better than an order of magnitude. The amount of methane in shallow subbottom (seepage associated) gas-hydrate accumulations is estimated at 10{sup 14} m{sup 3} STP, and in deep-seated hydrates at 10{sup 15} m{sup 3} according to observational data. From the genetic standpoint for the time being gas hydrate potential could be only assessed as far less than 10{sup 17} m{sup 3} because rates of related hydrogeological and geochemical processes have not been adequately studied.

  9. The sensations of everyday life: empirical, theoretical, and pragmatic considerations.

    PubMed

    Dunn, W

    2001-01-01

    The experience of being human is embedded in sensory events of everyday life. This lecture reviews sensory processing literature, including neuroscience and social science perspectives. Dunn's Model of Sensory Processing is introduced, and the evidence supporting this model is summarized. Specifically, using Sensory Profile questionnaires (i.e., items describing responses to sensory events in daily life; persons mark the frequency of each behavior), persons from birth to 90 years of age demonstrate four sensory processing patterns: sensory seeking, sensory avoiding, sensory sensitivity, and low registration. These patterns are based on a person's neurological thresholds and self-regulation strategies. Psychophysiology studies verify these sensory processing patterns; persons with strong preferences in each pattern also have unique patterns of habituation and responsivity in skin conductance. Studies also indicate that persons with disabilities respond differently than peers on these questionnaires, suggesting underlying poor sensory processing in certain disorders, including autism, attention deficit hyperactivity disorder, developmental delays, and schizophrenia. The author proposes relationships between sensory processing and temperament and personality traits. The four categories of temperament share some consistency with the four sensory processing patterns described in Dunn's model. As with temperament, each person has some level of responsiveness within each sensory processing preference (i.e., a certain amount of seeking, avoiding, etc., not one or the other). The author suggests that one's sensory processing preferences simultaneously reflect his or her nervous system needs and form the basis for the manifestation of temperament and personality. The final section of this lecture outlines parameters for developing best practice that supports interventions based on this knowledge. PMID:12959225

  10. A Unified Model of Knowledge Sharing Behaviours: Theoretical Development and Empirical Test

    ERIC Educational Resources Information Center

    Chennamaneni, Anitha; Teng, James T. C.; Raja, M. K.

    2012-01-01

    Research and practice on knowledge management (KM) have shown that information technology alone cannot guarantee that employees will volunteer and share knowledge. While previous studies have linked motivational factors to knowledge sharing (KS), we took a further step to thoroughly examine this theoretically and empirically. We developed a…

  11. An Analysis of Enabling School Structure: Theoretical, Empirical, and Research Considerations

    ERIC Educational Resources Information Center

    Sinden, James E.; Hoy, Wayne K.; Sweetland, Scott R.

    2004-01-01

    The construct of enabling school structure is empirically analyzed in this qualitative study of high schools. First, the theoretical underpinning of enabling school structure is developed. Then, six high schools, which were determined to have enabling structures in a large quantitative study of Ohio schools, were analyzed in depth using…

  12. Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces

    ERIC Educational Resources Information Center

    Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong

    2010-01-01

    Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…

  13. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  14. Empirically Based Play Interventions for Children

    ERIC Educational Resources Information Center

    Reddy, Linda A., Ed.; Files-Hall, Tara M., Ed.; Schaefer, Charles E., Ed.

    2005-01-01

    "Empirically Based Play Interventions for Children" is a compilation of innovative, well-designed play interventions, presented for the first time in one text. Play therapy is the oldest and most popular form of child therapy in clinical practice and is widely considered by practitioners to be uniquely responsive to children's developmental needs.…

  15. An empirical and theoretical investigation of the intensities of 4f-4f electronic transitions

    SciTech Connect

    Devlin, M.T.

    1987-01-01

    The intensities of certain lanthanide 4f-4f electronic transitions exhibit extraordinary sensitivity to the ligand environment near a lanthanide ion, and empirical and theoretical investigations of these 4f-4f electric-dipole transitions are reported herein. From these studies, the mechanistic basis of 4f-4f electric-dipole transition intensities is evaluated. Additionally, correlations between the structure of lanthanide-ligand complexes and empirically observed electronic transition intensities are developed. The general applicability and utility of these spectra-structure correlations are also evaluated. The influence of the ligand environment on 4f-4f transition intensities is investigated by measuring the absorption spectra of a series of well-characterized neodymium (Nd{sup 3+}), holmium (Ho{sup 3+}), and erbium (Er{sup 3+})-ligand complexes. Trends in the absorption intensity spectra of these lanthanide complexes are related to specific structural features of each complex. The empirically observed spectral trends are evaluated by theoretically investigating the mechanism by which 4f-4f electric-dipole transitions occur. Two separate models of 4f-4f electronic transitions, the static-coupling and the dynamic-coupling models, are incorporated into the general Judd-Ofelt intensity theory. Using these two models, theoretical calculations of 4f-4f electronic transition intensities are performed. The results of these calculations are in good agreement with empirically observed 4f-4f electronic transition intensities, and they are useful in rationalizing the observed spectra-structure correlations.

  16. Color and psychological functioning: a review of theoretical and empirical work

    PubMed Central

    Elliot, Andrew J.

    2015-01-01

    In the past decade there has been increased interest in research on color and psychological functioning. Important advances have been made in theoretical work and empirical work, but there are also important weaknesses in both areas that must be addressed for the literature to continue to develop apace. In this article, I provide brief theoretical and empirical reviews of research in this area, in each instance beginning with a historical background and recent advancements, and proceeding to an evaluation focused on weaknesses that provide guidelines for future research. I conclude by reiterating that the literature on color and psychological functioning is at a nascent stage of development, and by recommending patience and prudence regarding conclusions about theory, findings, and real-world application. PMID:25883578

  17. Dignity in the care of older people – a review of the theoretical and empirical literature

    PubMed Central

    Gallagher, Ann; Li, Sarah; Wainwright, Paul; Jones, Ian Rees; Lee, Diana

    2008-01-01

    Background Dignity has become a central concern in UK health policy in relation to older and vulnerable people. The empirical and theoretical literature relating to dignity is extensive and as likely to confound and confuse as to clarify the meaning of dignity for nurses in practice. The aim of this paper is to examine the literature critically and to address the following questions: What does dignity mean? What promotes and diminishes dignity? And how might dignity be operationalised in the care of older people? This paper critically reviews the theoretical and empirical literature relating to dignity and clarifies the meaning and implications of dignity in relation to the care of older people. If nurses are to provide dignified care, clarification is an essential first step. Methods This is a review article, critically examining papers reporting theoretical perspectives and empirical studies relating to dignity. The following databases were searched: Assia, BHI, CINAHL, Social Services Abstracts, IBSS, Web of Knowledge Social Sciences Citation Index and Arts & Humanities Citation Index, together with the location of books and chapters in the philosophy literature. An analytical approach was adopted to the publications reviewed, focusing on the objectives of the review. Results and discussion We review a range of theoretical and empirical accounts of dignity and identify key dignity-promoting factors evident in the literature, including staff attitudes and behaviour; environment; culture of care; and the performance of specific care activities. Although there is scope to learn more about cultural aspects of dignity, we know a good deal about dignity in care in general terms. Conclusion We argue that what is required is to provide sufficient support and education to help nurses understand dignity and adequate resources to operationalise dignity in their everyday practice. Using the themes identified from our review, we offer proposals for the direction of future research. PMID:18620561

  18. Conceptual and empirical problems with game theoretic approaches to language evolution

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.

    2014-01-01

    The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions. PMID:24678305

  19. Conceptual and empirical problems with game theoretic approaches to language evolution.

    PubMed

    Watumull, Jeffrey; Hauser, Marc D

    2014-01-01

    The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions. PMID:24678305

  20. Why It Is Hard to Find Genes Associated With Social Science Traits: Theoretical and Empirical Considerations

    PubMed Central

    Lee, James J.; Benjamin, Daniel J.; Beauchamp, Jonathan P.; Glaeser, Edward L.; Borst, Gregoire; Pinker, Steven; Laibson, David I.

    2013-01-01

    Objectives. We explain why traits of interest to behavioral scientists may have a genetic architecture featuring hundreds or thousands of loci with tiny individual effects rather than a few with large effects and why such an architecture makes it difficult to find robust associations between traits and genes. Methods. We conducted a genome-wide association study at 2 sites, Harvard University and Union College, measuring more than 100 physical and behavioral traits with a sample size typical of candidate gene studies. We evaluated predictions that alleles with large effect sizes would be rare and most traits of interest to social science are likely characterized by a lack of strong directional selection. We also carried out a theoretical analysis of the genetic architecture of traits based on R.A. Fisher’s geometric model of natural selection and empirical analyses of the effects of selection bias and phenotype measurement stability on the results of genetic association studies. Results. Although we replicated several known genetic associations with physical traits, we found only 2 associations with behavioral traits that met the nominal genome-wide significance threshold, indicating that physical and behavioral traits are mainly affected by numerous genes with small effects. Conclusions. The challenge for social science genomics is the likelihood that genes are connected to behavioral variation by lengthy, nonlinear, interactive causal chains, and unraveling these chains requires allying with personal genomics to take advantage of the potential for large sample sizes as well as continuing with traditional epidemiological studies. PMID:23927501
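
    The sample-size argument in this abstract can be illustrated with a back-of-the-envelope power calculation. The numbers below are illustrative assumptions, not values from the study, and `gwas_power` is a hypothetical helper using the standard normal approximation for a single-locus association test:

    ```python
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()

    def gwas_power(n, r2, alpha=5e-8):
        """Approximate power to detect a locus explaining a fraction r2 of
        trait variance with n individuals at significance threshold alpha."""
        z_alpha = norm.inv_cdf(1 - alpha / 2)   # two-sided genome-wide threshold
        ncp = sqrt(n * r2 / (1 - r2))           # approximate noncentrality of the test
        return 1 - norm.cdf(z_alpha - ncp)

    # A locus explaining 0.1% of variance is effectively undetectable at a
    # candidate-gene sample size, but easily found with biobank-scale samples.
    small_n = gwas_power(200, 0.001)
    large_n = gwas_power(100_000, 0.001)
    ```

    The contrast between the two power values is the quantitative core of the abstract's conclusion: with many loci of tiny effect, only very large samples yield robust associations.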

  1. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and expectancy-value and psychobiological models of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research. PMID:26515326

  2. On the complex relationship between energy expenditure and longevity: Reconciling the contradictory empirical results with a simple theoretical model.

    PubMed

    Hou, Chen; Amunugama, Kaushalya

    2015-07-01

    The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) the cold and exercise stresses, and (3) manipulations of antioxidants. PMID:26086438

  3. Theoretical and empirical qualification of a mechanical-optical interface for parallel optics links

    NASA Astrophysics Data System (ADS)

    Chuang, S.; Schoellner, D.; Ugolini, A.; Wakjira, J.; Wolf, G.; Gandhi, P.; Persaud, A.

    2015-03-01

    As the implementation of parallel optics continues to evolve, development of a universal coupling interface between VCSEL/PD arrays and the corresponding photonic turn connector is necessary. A newly developed monolithic mechanical-optical interface efficiently couples optical transmit/receive arrays to the accompanying fiber optic connector. This paper describes the optical model behind the coupling interface and validates the model using empirical measurements. Optical modeling will address how the interface is adaptable to the broad range of VCSEL/PD optical parameters from commercially available VCSEL hardware manufacturers; the optical model will illustrate coupling efficiencies versus launch specifications. Theoretical modeling will examine system sensitivity through Monte Carlo simulations and provide alignment tolerance requirements. Empirical results will be presented to validate the optical model predictions and subsequent system performance. Functionality will be demonstrated through optical loss and coupling efficiency measurements. System metrics will include characterizations such as eye diagram results and link loss measurements.
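
    The alignment-tolerance analysis described above can be sketched with a minimal Monte Carlo simulation. All parameters below (mode-field radius, misalignment spread, trial count) are illustrative assumptions rather than values from the paper, and the simple Gaussian-beam overlap formula stands in for the full optical model:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed illustrative parameters, in microns: Gaussian mode-field radius
    # of the receiving aperture and 1-sigma lateral assembly misalignment.
    w = 5.0
    sigma = 1.5
    n_trials = 100_000

    # Sample independent lateral offsets in x and y for each assembled part
    dx = rng.normal(0.0, sigma, n_trials)
    dy = rng.normal(0.0, sigma, n_trials)

    # Overlap-integral coupling efficiency of two identical Gaussian beams
    # with a pure lateral offset r: eta = exp(-r^2 / w^2)
    eta = np.exp(-(dx**2 + dy**2) / w**2)
    loss_db = -10.0 * np.log10(eta)

    # Yield-style tolerance metric: coupling loss not exceeded by 95% of parts
    p95_loss = np.percentile(loss_db, 95)
    ```

    Sweeping `sigma` in such a sketch is one way to back out how tightly the mechanical interface must hold alignment to meet a link-loss budget.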

  4. SAGE II/Umkehr ozone comparisons and aerosols effects: An empirical and theoretical study. Final report

    SciTech Connect

    Newchurch, M.

    1997-09-15

    The objectives of this research were to: (1) examine empirically the aerosol effect on Umkehr ozone profiles using SAGE II aerosol and ozone data; (2) examine theoretically the aerosol effect on Umkehr ozone profiles; (3) examine the differences between SAGE II ozone profiles and both old- and new-format Umkehr ozone profiles for ozone-trend information; (4) reexamine SAGE I-Umkehr ozone differences with the most recent version of SAGE I data; and (5) contribute to the SAGE II science team.

  5. Scientific thinking in young children: theoretical advances, empirical research, and policy implications.

    PubMed

    Gopnik, Alison

    2012-09-28

    New theoretical ideas and empirical research show that very young children's learning and thinking are strikingly similar to much learning and thinking in science. Preschoolers test hypotheses against data and make causal inferences; they learn from statistics and informal experimentation, and from watching and listening to others. The mathematical framework of probabilistic models and Bayesian inference can describe this learning in precise ways. These discoveries have implications for early childhood education and policy. In particular, they suggest both that early childhood experience is extremely important and that the trend toward more structured and academic early childhood programs is misguided. PMID:23019643
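
    The kind of probabilistic-model learning the abstract describes can be illustrated with a minimal Bayesian update. The scenario and numbers are assumed for illustration, not taken from the paper; a conjugate Beta-Binomial model keeps the arithmetic exact:

    ```python
    # A child sees a toy activate 7 times out of 10 trials with a given block
    # and updates a belief about the block's activation probability.
    # Conjugate Beta-Binomial update: a Beta(a, b) prior plus k successes in
    # n trials yields a Beta(a + k, b + n - k) posterior.
    prior_a, prior_b = 1, 1                        # uniform prior: no initial opinion
    k, n = 7, 10                                   # observed evidence
    post_a, post_b = prior_a + k, prior_b + (n - k)
    posterior_mean = post_a / (post_a + post_b)    # shifts belief toward the data
    ```

    The point of the sketch is that the inference is driven entirely by prior plus observed evidence, which is the sense in which preschoolers' hypothesis testing can be described by Bayesian models.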

  6. The Theoretical and Empirical Basis for Meditation as an Intervention for PTSD

    ERIC Educational Resources Information Center

    Lang, Ariel J.; Strauss, Jennifer L.; Bomyea, Jessica; Bormann, Jill E.; Hickman, Steven D.; Good, Raquel C.; Essex, Michael

    2012-01-01

    In spite of the existence of good empirically supported treatments for posttraumatic stress disorder (PTSD), consumers and providers continue to ask for more options for managing this common and often chronic condition. Meditation-based approaches are being widely implemented, but there is minimal research rigorously assessing their effectiveness.…

  7. Common liability to addiction and “gateway hypothesis”: Theoretical, empirical and evolutionary perspective

    PubMed Central

    Vanyukov, Michael M.; Tarter, Ralph E.; Kirillova, Galina P.; Kirisci, Levent; Reynolds, Maureen D.; Kreek, Mary Jeanne; Conway, Kevin P.; Maher, Brion S.; Iacono, William G.; Bierut, Laura; Neale, Michael C.; Clark, Duncan B.; Ridenour, Ty A.

    2013-01-01

    Background Two competing concepts address the development of involvement with psychoactive substances: the “gateway hypothesis” (GH) and common liability to addiction (CLA). Method The literature on theoretical foundations and empirical findings related to both concepts is reviewed. Results The data suggest that drug use initiation sequencing, the core GH element, is variable and opportunistic rather than uniform and developmentally deterministic. The association between risks for use of different substances, if any, can be more readily explained by common underpinnings than by specific staging. In contrast, the CLA concept is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions. This commonality has identifiable neurobiological substrate and plausible evolutionary explanations. Conclusions Whereas the “gateway” hypothesis does not specify mechanistic connections between “stages”, and does not extend to the risks for addictions, the concept of common liability to addictions incorporates sequencing of drug use initiation as well as extends to related addictions and their severity, provides a parsimonious explanation of substance use and addiction co-occurrence, and establishes a theoretical and empirical foundation to research in etiology, quantitative risk and severity measurement, as well as targeted non-drug-specific prevention and early intervention. PMID:22261179

  8. Theoretical and empirical scale dependency of Z-R relationships: Evidence, impacts, and correction

    NASA Astrophysics Data System (ADS)

    Verrier, Sébastien; Barthès, Laurent; Mallet, Cécile

    2013-07-01

    Estimation of rainfall intensities from radar measurements relies to a large extent on power-law relationships between rain rates R and radar reflectivities Z, i.e., Z = a*R^b. These relationships are generally applied without regard to scale, which is questionable since the nonlinearity of these relations could lead to undesirable discrepancies when combined with scale aggregation. Since the parameters (a,b) are expected to be related to drop size distribution (DSD) properties, they are often derived at disdrometer scale rather than at radar scale, which could lead to errors at the latter. We propose to investigate the statistical behavior of Z-R relationships across scales on both theoretical and empirical sides. Theoretically, it is shown that claimed multifractal properties of rainfall processes could constrain the parameters (a,b) such that the exponent b would be scale independent but the prefactor a would grow as a (slow) power law of time or space scale. In the empirical part (which may be read independently of the theoretical considerations), high-resolution disdrometer (Dual-Beam Spectropluviometer) data of rain rates and reflectivity factors are considered at various integration times in the range 15 s to 64 min. A variety of regression techniques is applied to Z-R scatterplots at all these time scales, establishing empirical evidence of a behavior coherent with the theoretical considerations: a grows as a 0.1 power law of scale while b decreases more slightly. The properties of a are suggested to be closely linked to inhomogeneities in the DSDs, since extensions of Z-R relationships involving (here, strongly nonconstant) normalization parameters of the DSDs seem to be more robust across scales. The scale dependence of simple Z = a*R^b relationships is advocated to be a possible source of overestimation of rainfall intensities or accumulations. Several ways of correcting such scaling biases (which can reach >15-20% in terms of relative error) are suggested.
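
    The aggregation bias discussed in this abstract follows directly from Jensen's inequality when b > 1, and can be sketched numerically. The Z-R parameters and the lognormal rain-rate model below are illustrative assumptions, not the paper's fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed Marshall-Palmer-like Z-R parameters: Z = a * R^b with b > 1
    a, b = 200.0, 1.6

    # Synthetic fine-scale rain rates (mm/h), lognormal to mimic rainfall
    # intermittency; each row is one coarse interval of 240 fine-scale samples.
    R_fine = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 240))
    Z_fine = a * R_fine**b

    # Aggregate reflectivity to the coarse scale, then invert with the
    # *fine-scale* law, as a scale-unaware retrieval would do.
    R_true = R_fine.mean(axis=1)
    R_est = (Z_fine.mean(axis=1) / a) ** (1.0 / b)

    # Because Z is a convex function of R (b > 1), averaging Z before inverting
    # overestimates the mean rain rate, as the abstract warns.
    relative_bias = R_est.mean() / R_true.mean() - 1.0
    ```

    Rescaling the prefactor a with aggregation scale, as the paper's empirical power-law behavior suggests, is one way such a retrieval bias could be corrected.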

  9. Evolution of the empirical and theoretical foundations of eyewitness identification reform.

    PubMed

    Clark, Steven E; Moreland, Molly B; Gronlund, Scott D

    2014-04-01

    Scientists in many disciplines have begun to raise questions about the evolution of research findings over time (Ioannidis in Epidemiology, 19, 640-648, 2008; Jennions & Møller in Proceedings of the Royal Society, Biological Sciences, 269, 43-48, 2002; Mullen, Muellerleile, & Bryan in Personality and Social Psychology Bulletin, 27, 1450-1462, 2001; Schooler in Nature, 470, 437, 2011), since many phenomena exhibit decline effects: reductions in the magnitudes of effect sizes as empirical evidence accumulates. The present article examines empirical and theoretical evolution in eyewitness identification research. For decades, the field has held that there are identification procedures that, if implemented by law enforcement, would increase eyewitness accuracy, either by reducing false identifications, with little or no change in correct identifications, or by increasing correct identifications, with little or no change in false identifications. Despite the durability of this no-cost view, it is unambiguously contradicted by data (Clark in Perspectives on Psychological Science, 7, 238-259, 2012a; Clark & Godfrey in Psychonomic Bulletin & Review, 16, 22-42, 2009; Clark, Moreland, & Rush, 2013; Palmer & Brewer in Law and Human Behavior, 36, 247-255, 2012), raising questions as to how the no-cost view became well-accepted and endured for so long. Our analyses suggest that (1) seminal studies produced, or were interpreted as having produced, the no-cost pattern of results; (2) a compelling theory was developed that appeared to account for the no-cost pattern; (3) empirical results changed over the years, and subsequent studies did not reliably replicate the no-cost pattern; and (4) the no-cost view survived despite the accumulation of contradictory empirical evidence. Theories of memory that were ruled out by early data now appear to be supported by data, and the theory developed to account for early data now appears to be incorrect. PMID:24258271

  10. Theoretic Fit and Empirical Fit: The Performance of Maximum Likelihood versus Generalized Least Squares Estimation in Structural Equation Models.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Troye, Sigurd Villads; Howell, Roy D.

    1999-01-01

    Used simulation to compare the ability of maximum likelihood (ML) and generalized least-squares (GLS) estimation to provide theoretic fit in models that are parsimonious representations of a true model. The better empirical fit obtained for GLS, compared with ML, was obtained at the cost of lower theoretic fit. (Author/SLD)

  11. Ignorance, Vulnerability and the Occurrence of "Radical Surprises": Theoretical Reflections and Empirical Findings

    NASA Astrophysics Data System (ADS)

    Kuhlicke, C.

    2009-04-01

, that the flood was far beyond people's power of imagination (nescience). The reason for this is that, prior to the flood, an institutionalized space of experience and horizon of expectation existed which did not consider the possibility that the "stability" of the river was artificially created by engineering achievements to reduce its naturally given variability. Based on the empirical findings and the theoretical reasoning, overall conclusions are drawn and implications for flood risk management under conditions of global environmental change are outlined.

  12. The adaptive evolution of virulence: a review of theoretical predictions and empirical tests.

    PubMed

Cressler, Clayton E; McLeod, David V; Rozins, Carly; van den Hoogen, Josée; Day, Troy

    2016-06-01

    Why is it that some parasites cause high levels of host damage (i.e. virulence) whereas others are relatively benign? There are now numerous reviews of virulence evolution in the literature but it is nevertheless still difficult to find a comprehensive treatment of the theory and data on the subject that is easily accessible to non-specialists. Here we attempt to do so by distilling the vast theoretical literature on the topic into a set of relatively few robust predictions. We then provide a comprehensive assessment of the available empirical literature that tests these predictions. Our results show that there have been some notable successes in integrating theory and data but also that theory and empiricism in this field do not 'speak' to each other very well. We offer a few suggestions for how the connection between the two might be improved. PMID:26302775

  13. How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching.

    PubMed

    Wolbring, Tobias; Riordan, Patrick

    2016-05-01

Plenty of studies show that the physical appearance of a person affects a variety of outcomes in everyday life. However, due to an incomplete theoretical explication and empirical problems in disentangling different beauty effects, it is unclear which mechanisms are at work. To clarify how beauty works, we present explanations from evolutionary theory and expectation states theory and show where the two perspectives differ and where interlinkage appears promising. Using students' evaluations of teaching, we find observational and experimental evidence for the different causal pathways of physical attractiveness. First, independent raters strongly agree on the physical attractiveness of a person. Second, attractive instructors receive better student ratings. Third, students attend classes of attractive instructors more frequently, even after controlling for teaching quality. Fourth, we find no evidence that attractiveness effects become stronger if rater and ratee are of the opposite sex. Finally, the beauty premium turns into a penalty if an attractive instructor falls short of students' expectations. PMID:26973043

  14. The Influence of Education and Socialization on Radicalization: An Exploration of Theoretical Presumptions and Empirical Research.

    PubMed

    Pels, Trees; de Ruyter, Doret J

    2012-06-01

BACKGROUND AND OBJECTIVE: Research into radicalization does not pay much attention to education. This is remarkable and possibly misses an important influence on the process of radicalization. This article therefore sets out to explore the relation between education on the one hand and the onset or prevention of radicalization on the other. METHOD: This article is a theoretical literature review. It analyzes empirical studies, mainly from European countries, about the educational aims, content, and style of Muslim parents and parents with (extreme) right-wing sympathies. RESULTS: Research examining similarity in right-wing sympathies between parents and children yields mixed results, but studies among adolescents point to a significant concordance. Research also showed that authoritarian parenting may play a significant role. Similar research among Muslim families was not found. While raising children with distrust and an authoritarian parenting style appear prevalent, their impact on adolescents has not been investigated. The empirical literature we reviewed does not give sufficient evidence to conclude that a democratic ideal and an authoritative style of education are conducive to the development of a democratic attitude. CONCLUSION: There is a knowledge gap with regard to the influence of education on the onset or prevention of radicalization. Schools and families are underappreciated sources of informal social control and social capital, and therefore the gap should be closed. With a better understanding of the effect of education, policies as well as interventions can be developed to assist parents and teachers in preventing radicalization. PMID:22611328

  15. From the bench to modeling--R0 at the interface between empirical and theoretical approaches in epidemiology of environmentally transmitted infectious diseases.

    PubMed

    Ivanek, Renata; Lahodny, Glenn

    2015-02-01

    transmission rate of infection and the pathogen growth rate in the environment. Moreover, we identified experimental conditions for which the theoretical R0 predictions based on the hypotheses H2 and H3 differ greatly, which would assist their discrimination and conclusive validation against future empirical studies. Once a valid theoretical R0 is identified for Salmonella Typhimurium in mice, its generalizability to other host-pathogen-environment systems should be tested. The present study may serve as a template for integrated empirical and theoretical research of R0 in the epidemiology of ETIDs. PMID:25441048

  16. Theoretical Foundations for Evidence-Based Health Informatics: Why? How?

    PubMed

    Scott, Philip J; Georgiou, Andrew; Hyppönen, Hannele; Craven, Catherine K; Rigby, Michael; Brender McNair, Jytte

    2016-01-01

    A scientific approach to health informatics requires sound theoretical foundations. Health informatics implementation would be more effective if evidence-based and guided by theories about what is likely to work in what circumstances. We report on a Medinfo 2015 workshop on this topic jointly organized by the EFMI Working Group on Assessment of Health Information Systems and the IMIA Working Group on Technology Assessment and Quality Development. We discuss the findings of the workshop and propose an approach to consolidate empirical knowledge into testable middle-range theories. PMID:27577457

  17. Coaching and guidance with patient decision aids: A review of theoretical and empirical evidence

    PubMed Central

    2013-01-01

    Background Coaching and guidance are structured approaches that can be used within or alongside patient decision aids (PtDAs) to facilitate the process of decision making. Coaching is provided by an individual, and guidance is embedded within the decision support materials. The purpose of this paper is to: a) present updated definitions of the concepts “coaching” and “guidance”; b) present an updated summary of current theoretical and empirical insights into the roles played by coaching/guidance in the context of PtDAs; and c) highlight emerging issues and research opportunities in this aspect of PtDA design. Methods We identified literature published since 2003 on shared decision making theoretical frameworks inclusive of coaching or guidance. We also conducted a sub-analysis of randomized controlled trials included in the 2011 Cochrane Collaboration Review of PtDAs with search results updated to December 2010. The sub-analysis was conducted on the characteristics of coaching and/or guidance included in any trial of PtDAs and trials that allowed the impact of coaching and/or guidance with PtDA to be compared to another intervention or usual care. Results Theoretical evidence continues to justify the use of coaching and/or guidance to better support patients in the process of thinking about a decision and in communicating their values/preferences with others. In 98 randomized controlled trials of PtDAs, 11 trials (11.2%) included coaching and 63 trials (64.3%) provided guidance. Compared to usual care, coaching provided alongside a PtDA improved knowledge and decreased mean costs. The impact on some other outcomes (e.g., participation in decision making, satisfaction, option chosen) was more variable, with some trials showing positive effects and other trials reporting no differences. For values-choice agreement, decisional conflict, adherence, and anxiety there were no differences between groups. None of these outcomes were worse when patients were exposed

  18. Empirically Based Myths: Astrology, Biorhythms, and ATIs.

    ERIC Educational Resources Information Center

    Ragsdale, Ronald G.

    1980-01-01

    A myth may have an empirical basis through chance occurrence; perhaps Aptitude Treatment Interactions (ATIs) are in this category. While ATIs have great utility in describing, planning, and implementing instruction, few disordinal interactions have been found. Article suggests narrowing of ATI research with replications and estimates of effect…

  19. Multisystemic Therapy: An Empirically Supported, Home-Based Family Therapy Approach.

    ERIC Educational Resources Information Center

    Sheidow, Ashli J.; Woodford, Mark S.

    2003-01-01

Multisystemic Therapy (MST) is a well-validated, evidence-based treatment for serious clinical problems presented by adolescents and their families. This article is an introduction to the MST approach and outlines key clinical features, describes the theoretical underpinnings, and discusses the empirical support for MST's effectiveness with a…

  20. A theoretical and empirical analysis of context: neighbourhoods, smoking and youth.

    PubMed

    Frohlich, Katherine L; Potvin, Louise; Chabot, Patrick; Corin, Ellen

    2002-05-01

    Numerous studies are currently addressing the issue of contextual effects on health and disease outcomes. The majority of these studies fall short of providing a theoretical basis with which to explain what context is and how it affects individual disease outcomes. We propose a theoretical model, entitled collective lifestyles, which brings together three concepts from practice theory: social structure, social practices and agency. We do so in an attempt to move away from both behavioural and structural-functionalist explanations of the differential distribution of disease outcomes among areas by including a contextualisation of health behaviours that considers their meaning. We test the framework using the empirical example of smoking and pre-adolescents in 32 communities across Québec, Canada. Social structure is operationalised as characteristics and resources; characteristics are the socio-economic aggregate characteristics of individuals culled from the 1996 Canadian Census, and resources are what regulates and transforms smoking practices. Information about social practices was collected in focus groups with pre-adolescents from four of the participating communities. Using zero-order and partial correlations we find that a portrait of communities emerges. Where there is a high proportion of more socio-economically advantaged people, resources tend to be more smoking discouraging, with the opposite being true for disadvantaged communities. Upon analysis of the focus group material, however, we find that the social practices in communities do not necessarily reflect the "objectified" measures of social structure. We suggest that a different conceptualisation of accessibility and lifestyle in contextual studies may enable us to improve our grasp on how differential rates of disease come about in local areas. PMID:12058856
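The zero-order and partial correlations the authors describe can be sketched compactly. The community-level variables and numbers below are hypothetical illustrations, not data from the study:

```python
import math

def pearson(x, y):
    """Zero-order (Pearson) correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical community-level data: aggregate SES, smoking-discouraging
# resources, and youth smoking rate (%) for six communities.
ses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
resources = [1.5, 1.8, 3.4, 3.6, 5.2, 5.5]
smoking = [31.0, 27.0, 26.0, 21.0, 19.0, 14.0]

print("zero-order SES vs smoking:", round(pearson(ses, smoking), 3))
print("partial, controlling for resources:",
      round(partial_corr(ses, smoking, resources), 3))
```

Comparing the zero-order and partial coefficients indicates how much of the SES-smoking association runs through community resources.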

  1. Evaluation of theoretical and empirical water vapor sorption isotherm models for soils

    NASA Astrophysics Data System (ADS)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.

    2016-01-01

The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models have previously been proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions to soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93, based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models, in part attributable to the differing degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy the predicted isotherms did not closely match the measurements.
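To illustrate fitting one such isotherm model, the sketch below fits the GAB (Guggenheim-Anderson-de Boer) equation, a commonly used physically based sorption model, to hypothetical soil adsorption data. The abstract does not name the nine evaluated models, so the model choice, the data, and the parameter ranges are assumptions, and a coarse grid search stands in for proper nonlinear regression:

```python
def gab(aw, wm, C, K):
    """GAB isotherm: equilibrium water content at water activity aw,
    with monolayer capacity wm and energy constants C and K."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Hypothetical "measured" adsorption data: (water activity, g water / g soil)
data = [(0.10, 0.0096), (0.30, 0.0175), (0.50, 0.0252),
        (0.70, 0.0375), (0.90, 0.0665)]

def sse(params):
    """Sum of squared errors of the GAB model against the data."""
    wm, C, K = params
    return sum((gab(aw, wm, C, K) - w) ** 2 for aw, w in data)

# Coarse grid search over plausible parameter ranges (a stand-in for the
# nonlinear least-squares regression used in isotherm studies).
best = min(
    ((wm, C, K)
     for wm in (0.010 + 0.002 * i for i in range(10))
     for C in (5.0 + 2.0 * j for j in range(10))
     for K in (0.70 + 0.03 * k for k in range(10))),
    key=sse,
)
print("best (wm, C, K):", tuple(round(p, 3) for p in best))
print("SSE:", sse(best))
```

The same loop structure extends to the other isotherm equations; only the model function changes.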

  2. Safety climate and injuries: an examination of theoretical and empirical relationships.

    PubMed

    Beus, Jeremy M; Payne, Stephanie C; Bergman, Mindy E; Arthur, Winfred

    2010-07-01

    Our purpose in this study was to meta-analytically address several theoretical and empirical issues regarding the relationships between safety climate and injuries. First, we distinguished between extant safety climate-->injury and injury-->safety climate relationships for both organizational and psychological safety climates. Second, we examined several potential moderators of these relationships. Meta-analyses revealed that injuries were more predictive of organizational safety climate than safety climate was predictive of injuries. Additionally, the injury-->safety climate relationship was stronger for organizational climate than for psychological climate. Moderator analyses revealed that the degree of content contamination in safety climate measures inflated effects, whereas measurement deficiency attenuated effects. Additionally, moderator analyses showed that as the time period over which injuries were assessed lengthened, the safety climate-->injury relationship was attenuated. Supplemental meta-analyses of specific safety climate dimensions also revealed that perceived management commitment to safety is the most robust predictor of occupational injuries. Contrary to expectations, the operationalization of injuries did not meaningfully moderate safety climate-injury relationships. Implications and recommendations for future research and practice are discussed. PMID:20604591

  3. Empirical corroboration of an earlier theoretical resolution to the UV paradox of insect polarized skylight orientation.

    PubMed

    Wang, Xin; Gao, Jun; Fan, Zhiguo

    2014-02-01

It is surprising that many insect species use only the ultraviolet (UV) component of the polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations attempted to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We adopted a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has its maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. In this way, we empirically corroborated the soundness of the earlier computational resolution of the UV-sky-pol paradox. PMID:24402685

  4. Empirical corroboration of an earlier theoretical resolution to the UV paradox of insect polarized skylight orientation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gao, Jun; Fan, Zhiguo

    2014-02-01

It is surprising that many insect species use only the ultraviolet (UV) component of the polarized skylight for orientation and navigation purposes, while both the intensity and the degree of polarization of light from the clear sky are lower in the UV than at longer (blue, green, red) wavelengths. Why have these insects chosen the UV part of the polarized skylight? This strange phenomenon is called the "UV-sky-pol paradox". Although several earlier speculations attempted to resolve this paradox, they did so without any quantitative data. A theoretical and computational model has convincingly explained why it is advantageous for certain animals to detect celestial polarization in the UV. We adopted a sky-polarimetric approach and built a polarized skylight sensor that models the processing of polarization signals by insect photoreceptors. Using this model sensor, we carried out measurements under clear and cloudy sky conditions. Our results showed that light from the cloudy sky has its maximal degree of polarization in the UV. Furthermore, under both clear and cloudy skies the angle of polarization of skylight can be detected with higher accuracy. In this way, we empirically corroborated the soundness of the earlier computational resolution of the UV-sky-pol paradox.

  5. Solubility of caffeine from green tea in supercritical CO2: a theoretical and empirical approach.

    PubMed

    Gadkari, Pravin Vasantrao; Balaraman, Manohar

    2015-12-01

Decaffeination of fresh green tea was carried out with supercritical CO2 in the presence of ethanol as co-solvent. The solubility of caffeine in supercritical CO2 varied from 44.19 × 10^-6 to 149.55 × 10^-6 (mole fraction) over a pressure and temperature range of 15 to 35 MPa and 313 to 333 K, respectively. The maximum solubility of caffeine was obtained at 25 MPa and 323 K. Experimental solubility data were correlated with the theoretical equation-of-state models Peng-Robinson (PR), Soave-Redlich-Kwong (SRK), and Redlich-Kwong (RK). The RK model regressed the experimental data with 15.52 % average absolute relative deviation (AARD). In contrast, the Gordillo empirical model fit the experimental data best, with only 0.96 % AARD. Under supercritical conditions, the solubility of caffeine in the tea matrix was lower than the solubility of pure caffeine. Further, the solubility of caffeine in supercritical CO2 was compared with the solubility of pure caffeine in conventional solvents, and a maximum solubility of 90 × 10^-3 (mole fraction) was obtained with chloroform. PMID:26604372
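The AARD metric used above to compare the equation-of-state and empirical correlations is straightforward to compute. The solubility values below are hypothetical stand-ins spanning the reported range, not the study's data:

```python
def aard(y_exp, y_calc):
    """Average absolute relative deviation (%) between experimental
    and model-correlated values."""
    if len(y_exp) != len(y_calc):
        raise ValueError("series must have equal length")
    return 100.0 * sum(abs(e - c) / e for e, c in zip(y_exp, y_calc)) / len(y_exp)

# Hypothetical mole-fraction solubilities: experimental vs. model-correlated.
y_exp = [44.19e-6, 78.30e-6, 112.00e-6, 149.55e-6]
y_calc = [46.00e-6, 75.10e-6, 118.50e-6, 140.20e-6]

print(f"AARD = {aard(y_exp, y_calc):.2f} %")
```

A lower AARD indicates a closer correlation of the model to the measurements, which is how the RK and Gordillo models are ranked in the abstract.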

  6. Rural Employment, Migration, and Economic Development: Theoretical Issues and Empirical Evidence from Africa. Africa Rural Employment Paper No. 1.

    ERIC Educational Resources Information Center

    Byerlee, Derek; Eicher, Carl K.

    Employment problems in Africa were examined with special emphasis on rural employment and migration within the context of overall economic development. A framework was provided for analyzing rural employment in development; that framework was used to analyze empirical information from Africa; and theoretical issues were raised in analyzing rural…

  7. Calculation of theoretical and empirical nutrient N critical loads in the mixed conifer ecosystems of southern California.

    PubMed

    Breiner, Joan; Gimeno, Benjamin S; Fenn, Mark

    2007-01-01

Edaphic, foliar, and hydrologic forest nutrient status indicators from 15 mixed conifer forest stands in the Sierra Nevada, San Gabriel Mountains, and San Bernardino National Forest were used to estimate empirical or theoretical critical loads (CL) for nitrogen (N) as a nutrient. Soil acidification response to N deposition was also evaluated. Robust empirical relationships were found relating N deposition to plant N uptake (N in foliage), N fertility (litter C/N ratio), and soil acidification. However, no consistent empirical CL were obtained when the thresholds for parameters indicative of N excess from other types of ecosystems were used. Similarly, the highest theoretical CL for nutrient N calculated using the simple mass balance steady-state model (estimates ranging from 1.4 to 8.8 kg N/ha/year) was approximately half the empirically observed values. Further research is needed to derive the thresholds for indicators associated with the impairment of these mixed conifer forests exposed to chronic N deposition within a Mediterranean climate. Further development or parameterization of models for the calculation of theoretical critical loads suitable for these ecosystems will also be an important aspect of future critical loads research. PMID:17450298

  8. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  9. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support.

    PubMed

    Ewing, E Stephanie Krauthamer; Diamond, Guy; Levy, Suzanne

    2015-01-01

    Attachment-Based Family Therapy (ABFT) is a manualized family-based intervention designed for working with depressed adolescents, including those at risk for suicide, and their families. It is an empirically informed and supported treatment. ABFT has its theoretical underpinnings in attachment theory and clinical roots in structural family therapy and emotion focused therapies. ABFT relies on a transactional model that aims to transform the quality of adolescent-parent attachment, as a means of providing the adolescent with a more secure relationship that can support them during challenging times generally, and the crises related to suicidal thinking and behavior, specifically. This article reviews: (1) the theoretical foundations of ABFT (attachment theory, models of emotional development); (2) the ABFT clinical model, including training and supervision factors; and (3) empirical support. PMID:25778674

  10. A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…

  11. Empirically Based Strategies for Preventing Juvenile Delinquency.

    PubMed

    Pardini, Dustin

    2016-04-01

    Juvenile crime is a serious public health problem that results in significant emotional and financial costs for victims and society. Using etiologic models as a guide, multiple interventions have been developed to target risk factors thought to perpetuate the emergence and persistence of delinquent behavior. Evidence suggests that the most effective interventions tend to have well-defined treatment protocols, focus on therapeutic approaches as opposed to external control techniques, and use multimodal cognitive-behavioral treatment strategies. Moving forward, there is a need to develop effective policies and procedures that promote the widespread adoption of evidence-based delinquency prevention practices across multiple settings. PMID:26980128

  12. Why Do People Need Self-Esteem? A Theoretical and Empirical Review

    ERIC Educational Resources Information Center

Pyszczynski, Tom; Greenberg, Jeff; Solomon, Sheldon; Arndt, Jamie; Schimel, Jeff

    2004-01-01

    Terror management theory (TMT; J. Greenberg, T. Pyszczynski, & S. Solomon, 1986) posits that people are motivated to pursue positive self-evaluations because self-esteem provides a buffer against the omnipresent potential for anxiety engendered by the uniquely human awareness of mortality. Empirical evidence relevant to the theory is reviewed…

  13. Development of an Axiomatic Theory of Organization/Environment Interaction: A Theoretical and Empirical Analysis.

    ERIC Educational Resources Information Center

    Ganey, Rodney F.

    The goal of this paper was to develop a theory of organization/environment interaction by examining the impact of perceived environmental uncertainty on organizational processes and on organizational goal attainment. It examines theories from the organizational environment literature and derives corollaries that are empirically tested using a data…

  14. Culminating Experience Empirical and Theoretical Research Projects, University of Tennessee at Chattanooga, Spring, 2005

    ERIC Educational Resources Information Center

    Watson, Sandy White, Ed.

    2005-01-01

    This document represents a sample collection of master's theses from the University of Tennessee at Chattanooga's Teacher Education Program, spring semester, 2005. The majority of these student researchers were simultaneously student teaching while writing their theses. Studies were empirical and conceptual in nature and demonstrate some ways in…

  15. Empirical social-ecological system analysis: from theoretical framework to latent variable structural equation model.

    PubMed

    Asah, Stanley Tanyi

    2008-12-01

    The social-ecological system (SES) approach to natural resource management holds enormous promise towards achieving sustainability. Despite this promise, social-ecological interactions are complex and elusive; they require simplification to guide effective application of the SES approach. The complex, adaptive and place-specific nature of human-environment interactions impedes determination of state and trends in SES parameters of interest to managers and policy makers. Based on a rigorously developed systemic theoretical model, this paper integrates field observations, interviews, surveys, and latent variable modeling to illustrate the development of simplified and easily interpretable indicators of the state of, and trends in, relevant SES processes. Social-agricultural interactions in the Logone floodplain, in the Lake Chad basin, served as case study. This approach is found to generate simplified determinants of the state of SESs, easily communicable across the array of stakeholders common in human-environment interactions. The approach proves to be useful for monitoring SESs, guiding interventions, and assessing the effectiveness of interventions. It incorporates real time responses to biophysical change in understanding coarse scale processes within which finer scales are embedded. This paper emphasizes the importance of merging quantitative and qualitative methods for effective monitoring and assessment of SESs. PMID:18773239

  16. Image-Based Empirical Modeling of the Plasmasphere

    NASA Technical Reports Server (NTRS)

    Adrian, Mark L.; Gallagher, D. L.

    2008-01-01

    A new suite of empirical models of plasmaspheric plasma based on remote, global images from the IMAGE EUV instrument is proposed for development. The purpose of these empirical models is to establish the statistical properties of the plasmasphere as a function of conditions. This suite of models will mark the first time the plasmaspheric plume is included in an empirical model. Development of these empirical plasmaspheric models will support synoptic studies (such as for wave propagation and growth, energetic particle loss through collisions and dust transport as influenced by charging) and serves as a benchmark against which physical models can be tested. The ability to know that a specific global density distribution occurs in response to specific magnetospheric and solar wind factors is a huge advantage over all previous in-situ based empirical models. The consequence of creating these new plasmaspheric models will be to provide much higher fidelity and much richer quantitative descriptions of the statistical properties of plasmaspheric plasma in the inner magnetosphere, whether that plasma is in the main body of the plasmasphere, nearby during recovery or in the plasmaspheric plume. Model products to be presented include statistical probabilities for being in the plasmasphere, near thermal He+ density boundaries and the complexity of its spatial structure.

  17. Public Disaster Communication and Child and Family Disaster Mental Health: a Review of Theoretical Frameworks and Empirical Evidence.

    PubMed

    Houston, J Brian; First, Jennifer; Spialek, Matthew L; Sorenson, Mary E; Koch, Megan

    2016-06-01

    Children have been identified as particularly vulnerable to psychological and behavioral difficulties following disaster. Public child and family disaster communication is one public health tool that can be utilized to promote coping/resilience and ameliorate maladaptive child reactions following an event. We conducted a review of the public disaster communication literature and identified three main functions of child and family disaster communication: fostering preparedness, providing psychoeducation, and conducting outreach. Our review also indicates that schools are a promising system for child and family disaster communication. We complete our review with three conclusions. First, theoretically, there appears to be a great opportunity for public disaster communication focused on child disaster reactions. Second, empirical research assessing the effects of public child and family disaster communication is essentially nonexistent. Third, despite the lack of empirical evidence in this area, there is opportunity for public child and family disaster communication efforts that address new domains. PMID:27086315

  18. GIS Teacher Training: Empirically-Based Indicators of Effectiveness

    ERIC Educational Resources Information Center

    Höhnle, Steffen; Fögele, Janis; Mehren, Rainer; Schubert, Jan Christoph

    2016-01-01

    In spite of various actions, the implementation of GIS (geographic information systems) in German schools is still very low. In the presented research, teaching experts as well as teaching novices were presented with empirically based constraints for implementation stemming from an earlier survey. In the process of various group discussions, the…

  19. Responses to Commentaries on Advances in Empirically Based Assessment.

    ERIC Educational Resources Information Center

    McConaughy, Stephanie H.

    1993-01-01

    Author of article (this issue) describing research program to advance assessment of children's behavioral and emotional problems; presenting conceptual framework for multiaxial empirically based assessment; and summarizing research efforts to develop cross-informant scales for scoring parent, teacher, and self-reports responds to commentaries on…

  20. Theoretical NMR correlations based Structure Discussion

    PubMed Central

    2011-01-01

    The constitutional assignment of natural products by NMR spectroscopy is usually based on 2D NMR experiments like COSY, HSQC, and HMBC. The actual difficulty of the structure elucidation problem depends more on the type of the investigated molecule than on its size. The moment HMBC data is involved in the process or a large number of heteroatoms is present, a possibility of multiple solutions fitting the same data set exists. A structure elucidation software can be used to find such alternative constitutional assignments and help in the discussion in order to find the correct solution. But this is rarely done. This article describes the use of theoretical NMR correlation data in the structure elucidation process with WEBCOCON, not for the initial constitutional assignments, but to define how well a suggested molecule could have been described by NMR correlation data. The results of this analysis can be used to decide on further steps needed to assure the correctness of the structural assignment. As first step the analysis of the deviation of carbon chemical shifts is performed, comparing chemical shifts predicted for each possible solution with the experimental data. The application of this technique to three well known compounds is shown. Using NMR correlation data alone for the description of the constitutions is not always enough, even when including 13C chemical shift prediction. PMID:21797997
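The carbon chemical-shift deviation analysis described above can be sketched as a simple ranking: for each candidate constitution, compare predicted 13C shifts against the experimental spectrum and prefer the candidate with the smallest root-mean-square deviation. The shift values and candidate names below are hypothetical, purely for illustration:

```python
import math

def rms_deviation(predicted, experimental):
    """Root-mean-square deviation between predicted and experimental 13C shifts (ppm)."""
    assert len(predicted) == len(experimental)
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(predicted, experimental)) / len(predicted))

# Hypothetical experimental shifts and predictions for two candidate constitutions
experimental = [14.1, 22.7, 31.9, 128.5, 136.2]
candidates = {
    "constitution A": [14.5, 23.0, 32.4, 128.1, 135.8],
    "constitution B": [18.2, 27.9, 36.0, 124.3, 131.0],
}

# Rank candidates by agreement with experiment; the best-fitting one comes first
ranked = sorted(candidates, key=lambda name: rms_deviation(candidates[name], experimental))
for name in ranked:
    print(f"{name}: RMSD = {rms_deviation(candidates[name], experimental):.2f} ppm")
```

As the abstract notes, a small RMSD alone does not guarantee correctness when several constitutions fit the same correlation data, which is why the deviation analysis is a decision aid rather than a proof.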

  1. Denoising ECG signal based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) is used extensively for the detection of heart disease. The signal is frequently corrupted by various kinds of noise, such as muscle noise, electromyogram (EMG) interference, and instrument noise. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs), and the statistically significant information content is identified via an empirical energy model of the IMFs. Noisy ECG signals collected from clinical recordings were processed using the method. The results show that, in contrast with traditional methods, the proposed method achieves better denoising of the ECG signal.
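The core EEMD idea, decompose many noise-perturbed copies of the signal and average the resulting components so the added white noise cancels, can be illustrated with a toy sketch. Note the moving-average split below is a stand-in for real EMD sifting (which extracts IMFs by interpolating envelopes); only the ensemble-averaging principle is shown:

```python
import math
import random

def moving_average(x, w):
    """Centered moving average, used here as a stand-in 'slow' component."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) / len(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def toy_eemd(signal, n_ensembles=50, noise_std=0.1, window=9, seed=0):
    """Illustrates the EEMD principle: decompose many noise-perturbed copies
    of the signal and average the components so the added noise cancels out.
    (A real implementation would use EMD sifting to extract the IMFs.)"""
    rng = random.Random(seed)
    n = len(signal)
    fast_sum = [0.0] * n
    slow_sum = [0.0] * n
    for _ in range(n_ensembles):
        noisy = [s + rng.gauss(0.0, noise_std) for s in signal]
        slow = moving_average(noisy, window)
        fast = [a - b for a, b in zip(noisy, slow)]
        fast_sum = [a + b for a, b in zip(fast_sum, fast)]
        slow_sum = [a + b for a, b in zip(slow_sum, slow)]
    return ([v / n_ensembles for v in fast_sum],
            [v / n_ensembles for v in slow_sum])

# Reconstruction check: the averaged components sum back to roughly the clean signal
sig = [math.sin(0.1 * i) for i in range(200)]
fast, slow = toy_eemd(sig)
```

Because each ensemble member adds independent noise, the averaged fast and slow components sum to approximately the original signal, with the residual noise shrinking as the ensemble grows.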

  2. Mechanisms of risk and resilience in military families: theoretical and empirical basis of a family-focused resilience enhancement program.

    PubMed

    Saltzman, William R; Lester, Patricia; Beardslee, William R; Layne, Christopher M; Woodward, Kirsten; Nash, William P

    2011-09-01

    Recent studies have confirmed that repeated wartime deployment of a parent exacts a toll on military children and families and that the quality and functionality of familial relations is linked to force preservation and readiness. As a result, family-centered care has increasingly become a priority across the military health system. FOCUS (Families OverComing Under Stress), a family-centered, resilience-enhancing program developed by a team at UCLA and Harvard Schools of Medicine, is a primary initiative in this movement. In a large-scale implementation project initiated by the Bureau of Navy Medicine, FOCUS has been delivered to thousands of Navy, Marine, Navy Special Warfare, Army, and Air Force families since 2008. This article describes the theoretical and empirical foundation and rationale for FOCUS, which is rooted in a broad conception of family resilience. We review the literature on family resilience, noting that an important next step in building a clinically useful theory of family resilience is to move beyond developing broad "shopping lists" of risk indicators by proposing specific mechanisms of risk and resilience. Based on the literature, we propose five primary risk mechanisms for military families and common negative "chain reaction" pathways through which they undermine the resilience of families contending with wartime deployments and parental injury. In addition, we propose specific mechanisms that mobilize and enhance resilience in military families and that comprise central features of the FOCUS Program. We describe these resilience-enhancing mechanisms in detail, followed by a discussion of the ways in which evaluation data from the program's first 2 years of operation supports the proposed model and the specified mechanisms of action. PMID:21655938

  3. Ecological risk and resilience perspective: a theoretical framework supporting evidence-based practice in schools.

    PubMed

    Powers, Joelle D

    2010-10-01

    Multidisciplinary school practitioners are clearly being called to use evidence-based practices from reputable sources such as their own professional organizations and federal agencies. In spite of this encouragement, most schools are not regularly employing empirically supported interventions. This paper further promotes the use of this approach by describing the theoretical support for evidence-based practice in schools. The ecological risk and resilience theoretical framework presented fills a gap in the literature and advocates for evidence-based practice in schools by illustrating how it can assist practitioners such as school social workers to better address problems associated with school failure. PMID:21082473

  4. Empirical Testing of a Theoretical Extension of the Technology Acceptance Model: An Exploratory Study of Educational Wikis

    ERIC Educational Resources Information Center

    Liu, Xun

    2010-01-01

    This study extended the technology acceptance model and empirically tested the new model with wikis, a new type of educational technology. Based on social cognitive theory and the theory of planned behavior, three new variables, wiki self-efficacy, online posting anxiety, and perceived behavioral control, were added to the original technology…

  5. Predicting Child Abuse Potential: An Empirical Investigation of Two Theoretical Frameworks

    ERIC Educational Resources Information Center

    Begle, Angela Moreland; Dumas, Jean E.; Hanson, Rochelle F.

    2010-01-01

    This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a…

  6. Rural Schools, Social Capital and the Big Society: A Theoretical and Empirical Exposition

    ERIC Educational Resources Information Center

    Bagley, Carl; Hillyard, Sam

    2014-01-01

    The paper commences with a theoretical exposition of the current UK government's policy commitment to the idealised notion of the Big Society and the social capital currency underpinning its formation. The paper positions this debate in relation to the rural and adopts an ethnographically-informed methodological approach to provide an…

  7. Corrective Feedback in L2 Writing: Theoretical Perspectives, Empirical Insights, and Future Directions

    ERIC Educational Resources Information Center

    Van Beuningen, Catherine

    2010-01-01

    The role of (written) corrective feedback (CF) in the process of acquiring a second language (L2) has been an issue of considerable controversy among theorists and researchers alike. Although CF is a widely applied pedagogical tool and its use finds support in SLA theory, practical and theoretical objections to its usefulness have been raised…

  8. Perceptual Organization in Schizophrenia Spectrum Disorders: Empirical Research and Theoretical Implications

    ERIC Educational Resources Information Center

    Uhlhaas, Peter J.; Silverstein, Steven M.

    2005-01-01

    The research into perceptual organization in schizophrenia spectrum disorders has found evidence for and against a perceptual organization deficit and has interpreted the data from within several different theoretical frameworks. A synthesis of this evidence, however, reveals that this body of work has produced reliable evidence for deficits in…

  9. Multiple Embedded Inequalities and Cultural Diversity in Educational Systems: A Theoretical and Empirical Exploration

    ERIC Educational Resources Information Center

    Verhoeven, Marie

    2011-01-01

    This article explores the social construction of cultural diversity in education, with a view to social justice. It examines how educational systems organize ethno-cultural difference and how this process contributes to inequalities. Theoretical resources are drawn from social philosophy as well as from recent developments in social organisation…

  10. Chronic Pain in a Couples Context: A Review and Integration of Theoretical Models and Empirical Evidence

    PubMed Central

    Leonard, Michelle T.; Cano, Annmarie; Johansen, Ayna B.

    2007-01-01

    Researchers have become increasingly interested in the social context of chronic pain conditions. The purpose of this article is to provide an integrated review of the evidence linking marital functioning with chronic pain outcomes including pain severity, physical disability, pain behaviors, and psychological distress. We first present an overview of existing models that identify an association between marital functioning and pain variables. We then review the empirical evidence for a relationship between pain variables and several marital functioning variables including marital satisfaction, spousal support, spouse responses to pain, and marital interaction. On the basis of the evidence, we present a working model of marital and pain variables, identify gaps in the literature, and offer recommendations for research and clinical work. Perspective: The authors provide a comprehensive review of the relationships between marital functioning and chronic pain variables to advance future research and help treatment providers understand marital processes in chronic pain. PMID:16750794

  11. Three essays on energy and environmental economics: Empirical, applied, and theoretical

    NASA Astrophysics Data System (ADS)

    Karney, Daniel Houghton

    Energy and environmental economics are closely related fields, as nearly all forms of energy production generate pollution and thus nearly all forms of environmental policy affect energy production and consumption. The three essays in this dissertation are related by their common themes of energy and environmental economics, but they differ in their methodologies. The first chapter is an empirical exercise that looks at the relationship between electricity price deregulation and maintenance outages at nuclear power plants. The second chapter is an applied theory paper that investigates environmental regulation in a multiple-pollutants setting. The third chapter develops a new methodology for constructing analytical general equilibrium models that can be used to study topics in energy and environmental economics.

  12. Colour in insect thermoregulation: empirical and theoretical tests in the colour-changing grasshopper, Kosciuscola tristis.

    PubMed

    Umbers, K D L; Herberstein, M E; Madin, J S

    2013-01-01

    Body colours can result in different internal body temperatures, but evidence for the biological significance of colour-induced temperature differences is inconsistent. We investigated the relationship between body colour and temperature in a model insect species that rapidly changes colour. We used an empirical approach and constructed a heat budget model to quantify whether a colour change from black to turquoise has a role in thermoregulation for the chameleon grasshopper (Kosciuscola tristis). Our study shows that colour change in K. tristis provides relatively small temperature differences that vary greatly with wind speed (from 0.55 °C at low wind speeds to 0.05 °C at 10 m s(-1)). The biological significance of this difference is unclear, and we discuss the need for more studies that directly test hypotheses regarding the fitness effects of colour in manipulating body temperature. PMID:23108152
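The qualitative behaviour of such a heat budget, a colour-driven temperature advantage that shrinks as wind speed rises, can be sketched by balancing absorbed shortwave radiation against convective loss, with the convective coefficient growing as the square root of wind speed (a standard forced-convection scaling). All coefficients below are illustrative and not fitted to K. tristis:

```python
import math

def conv_coeff(wind_speed):
    """Toy forced-convection coefficient (W m^-2 K^-1); h grows as sqrt(v)."""
    return 180.0 * math.sqrt(max(wind_speed, 0.1))

def temp_excess(absorptivity, wind_speed, solar_flux=900.0):
    """Steady-state body-temperature excess over air (°C), balancing
    absorbed shortwave radiation against convective heat loss."""
    return absorptivity * solar_flux / conv_coeff(wind_speed)

def colour_advantage(wind_speed, dark=0.95, light=0.85):
    """Equilibrium temperature difference between a darker and a lighter
    (turquoise) morph at a given wind speed; absorptivities are assumed."""
    return temp_excess(dark, wind_speed) - temp_excess(light, wind_speed)
```

With these illustrative numbers the dark-morph advantage is about half a degree in near-still air and falls off with increasing wind, mirroring the order-of-magnitude drop the abstract reports.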

  13. Evidence-based Nursing Education - a Systematic Review of Empirical Research

    PubMed Central

    Reiber, Karin

    2011-01-01

    The project "Evidence-based Nursing Education – Preparatory Stage", funded by the Landesstiftung Baden-Württemberg within the programme Impulsfinanzierung Forschung (Funding to Stimulate Research), aims to collect information on current research concerned with nursing education and to process existing data. The results of empirical research which has already been carried out were systematically evaluated with the aim of identifying further topics, fields and matters of interest for empirical research in nursing education. In the course of the project, the available empirical studies on nursing education were scientifically analysed and systematised. The over-arching aim of the evidence-based training approach, which extends beyond the aims of this project, is the conception, organisation and evaluation of vocational training and educational processes in the caring professions on the basis of empirical data. The following contribution first provides a systematic, theoretical link to the over-arching reference framework, as the evidence-based approach is adapted from thematically related specialist fields. The research design of the project is oriented towards criteria introduced from a selection of studies and carries out a two-stage systematic review of the selected studies. As a result, the current status of research in nursing education, as well as its organisation and structure, and questions relating to specialist training and comparative education are introduced and discussed. Finally, the empirical research on nursing training is critically appraised as a complementary element in educational theory/psychology of learning and in the ethical tradition of research. This contribution aims, on the one hand, to derive and describe the methods used, and to introduce the steps followed in gathering and evaluating the data. On the other hand, it is intended to give a systematic overview of empirical research work in nursing education. In order to preserve a

  14. Guiding Empirical and Theoretical Explorations of Organic Matter Decay By Synthesizing Temperature Responses of Enzyme Kinetics, Microbes, and Isotope Fluxes

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Ballantyne, F.; Lehmeier, C.; Min, K.

    2014-12-01

    Soil organic matter (SOM) transformation rates generally increase with temperature, but whether this is realized depends on soil-specific features. To develop predictive models applicable to all soils, we must understand two key, ubiquitous features of SOM transformation: the temperature sensitivity of myriad enzyme-substrate combinations and temperature responses of microbial physiology and metabolism, in isolation from soil-specific conditions. Predicting temperature responses of production of CO2 vs. biomass is also difficult due to soil-specific features: we cannot know the identity of active microbes nor the substrates they employ. We highlight how recent empirical advances describing SOM decay can help develop theoretical tools relevant across diverse spatial and temporal scales. At a molecular level, temperature effects on purified enzyme kinetics reveal distinct temperature sensitivities of decay of diverse SOM substrates. Such data help quantify the influence of microbial adaptations and edaphic conditions on decay, have permitted computation of the relative availability of carbon (C) and nitrogen (N) liberated upon decay, and can be used with recent theoretical advances to predict changes in mass-specific respiration rates as microbes maintain biomass C:N with changing temperature. Enhancing system complexity, we can subject microbes to temperature changes while controlling growth rate and without altering substrate availability or identity of the active population, permitting calculation of variables typically inferred in soils: microbial C use efficiency (CUE) and isotopic discrimination during C transformations. Quantified declines in CUE with rising temperature are critical for constraining model CUE estimates, and known changes in δ13C of respired CO2 with temperature are useful for interpreting δ13C-CO2 at diverse scales. We suggest empirical studies important for advancing knowledge of how microbes respond to temperature, and ideas for theoretical
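The distinct temperature sensitivities of diverse enzyme-substrate combinations mentioned above are commonly summarized with Arrhenius kinetics and the Q10 metric. A minimal sketch, with illustrative (not measured) pre-exponential factors and activation energies:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_rate(temp_c, ea, a=1.0e10):
    """Arrhenius rate k = A * exp(-Ea / (R*T)); A and Ea here are illustrative."""
    return a * math.exp(-ea / (R * (temp_c + 273.15)))

def q10(temp_c, ea):
    """Q10: multiplicative rate increase for a 10 °C warming starting at temp_c."""
    return arrhenius_rate(temp_c + 10.0, ea) / arrhenius_rate(temp_c, ea)

# A substrate with a higher activation energy (chemically more complex) shows a
# larger relative response to warming than a labile, low-Ea substrate.
labile_q10 = q10(10.0, 50_000.0)
recalcitrant_q10 = q10(10.0, 70_000.0)
```

This is the kinetic basis for expecting substrate-specific temperature sensitivities: the higher the activation energy of decay, the larger the Q10, even though absolute rates for such substrates are lower.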

  15. Social-Emotional Well-Being and Resilience of Children in Early Childhood Settings--PERIK: An Empirically Based Observation Scale for Practitioners

    ERIC Educational Resources Information Center

    Mayr, Toni; Ulich, Michaela

    2009-01-01

    Compared with the traditional focus on developmental problems, research on positive development is relatively new. Empirical research in children's well-being has been scarce. The aim of this study was to develop a theoretically and empirically based instrument for practitioners to observe and assess preschool children's well-being in early…

  16. Advanced airfoil design empirically based transonic aircraft drag buildup technique

    NASA Technical Reports Server (NTRS)

    Morrison, W. D., Jr.

    1976-01-01

    To systematically investigate the potential of advanced airfoils in advanced preliminary design studies, empirical relationships were derived from available wind tunnel test data, through which total drag is determined while accounting for all major aircraft geometric variables. The technique assumes a single design lift coefficient and Mach number for each aircraft. Using this technique, drag polars are derived for all Mach numbers up to the design Mach number + 0.05 and for lift coefficients from 0.40 below to 0.20 above the design lift coefficient.
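A drag buildup anchored at a single design point, as described above, is often represented by a parabolic polar about the design lift coefficient. The coefficients below are illustrative placeholders, not values from the referenced wind tunnel data:

```python
def drag_polar(cl, cl_design, cd_min=0.020, k=0.045):
    """Parabolic drag polar anchored at the design lift coefficient:
    CD = CD_min + k * (CL - CL_design)^2. Coefficient values are illustrative."""
    return cd_min + k * (cl - cl_design) ** 2

# Evaluate over the CL offsets quoted in the abstract (-0.40 to +0.20 about design)
cl_design = 0.5
offsets = [-0.40, -0.20, 0.0, 0.20]
polar = [(cl_design + d, drag_polar(cl_design + d, cl_design)) for d in offsets]
```

By construction, drag is minimized at the design lift coefficient and grows quadratically with the offset from it, which is why the buildup only needs to be tabulated over a band around the design point.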

  17. Theoretical performance assessment and empirical analysis of super-resolution under unknown affine sensor motion.

    PubMed

    Thelen, Brian J; Valenzuela, John R; LeBlanc, Joel W

    2016-04-01

    This paper deals with super-resolution (SR) processing and associated theoretical performance assessment for under-sampled video data collected from a moving imaging platform with unknown motion and assuming a relatively flat scene. This general scenario requires joint estimation of the high-resolution image and the parameters that determine a projective transform that relates the collected frames to one another. A quantitative assessment of the variance in the random error as achieved through a joint-estimation approach (e.g., SR image reconstruction and motion estimation) is carried out via the general framework of M-estimators and asymptotic statistics. This approach provides a performance measure on estimating the fine-resolution scene when there is a lack of perspective information and represents a significant advancement over previous work that considered only the more specific scenario of mis-registration. A succinct overview of the theoretical framework is presented along with some specific results on the approximate random error for the case of unknown translation and affine motions. A comparison is given between the approximated random error and that actually achieved by an M-estimator approach to the joint-estimation problem. These results provide insight on the reduction in SR reconstruction accuracy when jointly estimating unknown inter-frame affine motion. PMID:27140759

  18. Why do people need self-esteem? A theoretical and empirical review.

    PubMed

    Pyszczynski, Tom; Greenberg, Jeff; Solomon, Sheldon; Arndt, Jamie; Schimel, Jeff

    2004-05-01

    Terror management theory (TMT; J. Greenberg, T. Pyszczynski, & S. Solomon, 1986) posits that people are motivated to pursue positive self-evaluations because self-esteem provides a buffer against the omnipresent potential for anxiety engendered by the uniquely human awareness of mortality. Empirical evidence relevant to the theory is reviewed showing that high levels of self-esteem reduce anxiety and anxiety-related defensive behavior, reminders of one's mortality increase self-esteem striving and defense of self-esteem against threats in a variety of domains, high levels of self-esteem eliminate the effect of reminders of mortality on both self-esteem striving and the accessibility of death-related thoughts, and convincing people of the existence of an afterlife eliminates the effect of mortality salience on self-esteem striving. TMT is compared with other explanations for why people need self-esteem, and a critique of the most prominent of these, sociometer theory, is provided. PMID:15122930

  19. Pharmaceuticals, political money, and public policy: a theoretical and empirical agenda.

    PubMed

    Jorgensen, Paul D

    2013-01-01

    Why, when confronted with policy alternatives that could improve patient care, public health, and the economy, does Congress neglect those goals and tailor legislation to suit the interests of pharmaceutical corporations? In brief, for generations, the pharmaceutical industry has convinced legislators to define policy problems in ways that protect its profit margin. It reinforces this framework by selectively providing information and by targeting campaign contributions to influential legislators and allies. In this way, the industry displaces the public's voice in developing pharmaceutical policy. Unless citizens mobilize to confront the political power of pharmaceutical firms, objectionable industry practices and public policy will not change. Yet we need to refine this analysis. I propose a research agenda to uncover pharmaceutical influence. It develops the theory of dependence corruption to explain how the pharmaceutical industry is able to deflect the broader interests of the general public. It includes empirical studies of lobbying and campaign finance to uncover the means drug firms use to: (1) shape the policy framework adopted and information used to analyze policy; (2) subsidize the work of political allies; and (3) influence congressional voting. PMID:24088146

  20. Predicting child abuse potential: an empirical investigation of two theoretical frameworks.

    PubMed

    Begle, Angela Moreland; Dumas, Jean E; Hanson, Rochelle F

    2010-01-01

    This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a widely accepted and valid risk instrument rather than occurrence rates (e.g., reports to child protective services, observations). Results indicated Belsky's developmental-ecological model, in which risk markers were organized into three separate conceptual domains, provided a poor fit to the data. In contrast, the cumulative risk model, which included the accumulation of risk markers, was significant in predicting child abuse potential. PMID:20390812
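The cumulative risk model the study supports can be sketched as a simple count: the predictor is how many risk markers exceed their threshold, regardless of which conceptual domain each marker belongs to. The marker names and cut-offs below are hypothetical, chosen only to illustrate the scoring logic:

```python
def cumulative_risk_score(markers, thresholds):
    """Cumulative risk model: count risk markers at or above their cut-off.
    The count, not the domain structure, is the predictor of outcome.
    Marker names and thresholds are hypothetical."""
    return sum(1 for name, value in markers.items() if value >= thresholds[name])

thresholds = {"parenting_stress": 75, "depressive_symptoms": 16, "partner_conflict": 10}
caregiver = {"parenting_stress": 80, "depressive_symptoms": 12, "partner_conflict": 14}
score = cumulative_risk_score(caregiver, thresholds)  # two of three markers elevated
```

This contrasts with the developmental-ecological alternative, which organizes the same markers into separate conceptual domains before relating them to abuse potential.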

  1. Empirical, theoretical, and practical advantages of the HEXACO model of personality structure.

    PubMed

    Ashton, Michael C; Lee, Kibeom

    2007-05-01

    The authors argue that a new six-dimensional framework for personality structure--the HEXACO model--constitutes a viable alternative to the well-known Big Five or five-factor model. The new model is consistent with the cross-culturally replicated finding of a common six-dimensional structure containing the factors Honesty-Humility (H), Emotionality (E), Extraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). Also, the HEXACO model predicts several personality phenomena that are not explained within the B5/FFM, including the relations of personality factors with theoretical biologists' constructs of reciprocal and kin altruism and the patterns of sex differences in personality traits. In addition, the HEXACO model accommodates several personality variables that are poorly assimilated within the B5/FFM. PMID:18453460

  2. UVCS Empirical Constraints on Theoretical Models of Solar Wind Source Regions

    NASA Astrophysics Data System (ADS)

    Kohl, J. L.; Cranmer, S. R.; Miralles, M. P.; Panasyuk, A.; Strachan, L.

    2007-12-01

    Spectroscopic observations from the Ultraviolet Coronagraph Spectrometer (UVCS) on the Solar and Heliospheric Observatory (SOHO) have resulted in empirical models of polar coronal holes, polar plumes, coronal jets and streamers. These findings have been used to make significant progress toward identifying and characterizing the physical processes that produce extended heating in the corona and accelerate fast and slow solar wind streams. The UVCS scientific observations, which began in April 1996 and continue at this writing, have provided determinations of proton and minor ion temperatures (including evidence for anisotropic microscopic velocity distributions in coronal holes and quiescent equatorial streamers), outflow velocities, and elemental abundances. The variations in these quantities over the solar cycle also have been determined. For example, observations of large polar coronal holes at different phases of the solar cycle indicate that line width is positively correlated with outflow speed and anti-correlated with electron density. This paper will review these results, and present new results from measurements taken as the current solar activity cycle approaches solar minimum. The results regarding preferential ion heating and acceleration of heavy ions (i.e., O5+) in polar coronal holes have contributed in a major way to the advances in understanding solar wind acceleration that have occurred during the past decade. It is important to verify and confirm the key features of these findings. Hence, the results from a new analysis of an expanded set of UVCS data from polar coronal holes at solar minimum by S. R. Cranmer, A. Panasyuk and J. L. Kohl will be presented. This work has been supported by the National Aeronautics and Space Administration (NASA) under Grants NNG06G188G and NNX07AL72G and NNX06AG95G to the Smithsonian Astrophysical Observatory.

  3. Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence

    PubMed Central

    Pugh, Jonathan; Kahane, Guy; Maslen, Hannah; Savulescu, Julian

    2016-01-01

    Background: There is a lack of empirical data on lay attitudes toward different sorts of deception in medicine. However, lay attitudes toward deception should be taken into account when we consider whether deception is ever permissible in a medical context. The objective of this study was to examine lay attitudes of U.S. citizens toward different sorts of deception across different medical contexts. Methods: A one-time online survey was administered to U.S. users of the Amazon “Mechanical Turk” website. Participants were asked to answer questions regarding a series of vignettes depicting different sorts of deception in medical care, as well as a question regarding their general attitudes toward truth-telling. Results: Of the 200 respondents, the majority found the use of placebos in different contexts to be acceptable following partial disclosure but found it to be unacceptable if it involved outright lying. Also, 55.5% of respondents supported the use of sham surgery in clinical research, although 55% claimed that it would be unacceptable to deceive patients in this research, even if this would improve the quality of the data from the study. Respondents supported fully informing patients about distressing medical information in different contexts, especially when the patient is suffering from a chronic condition. In addition, 42.5% of respondents believed that it is worse to deceive someone by providing the person with false information than it is to do so by giving the person true information that is likely to lead them to form a false belief, without telling them other important information that shows it to be false. However, 41.5% believed that the two methods of deception were morally equivalent. Conclusions: Respondents believed that some forms of deception were acceptable in some circumstances. While the majority of our respondents opposed outright lying in medical contexts, they were prepared to support partial disclosure and the use of

  4. Adult Coping with Childhood Sexual Abuse: A Theoretical and Empirical Review

    PubMed Central

    Walsh, Kate; Fortier, Michelle A.; DiLillo, David

    2009-01-01

    Coping has been suggested as an important element in understanding the long-term functioning of individuals with a history of child sexual abuse (CSA). The present review synthesizes the literature on coping with CSA, first by examining theories of coping with trauma, and, second by examining how these theories have been applied to studies of coping in samples of CSA victims. Thirty-nine studies were reviewed, including eleven descriptive studies of the coping strategies employed by individuals with a history of CSA, eighteen correlational studies of the relationship between coping strategies and long-term functioning of CSA victims, and ten investigations in which coping was examined as a mediational factor in relation to long-term outcomes. These studies provide initial information regarding early sexual abuse and subsequent coping processes. However, this literature is limited by several theoretical and methodological issues, including a failure to specify the process of coping as it occurs, a disparity between theory and research, and limited applicability to clinical practice. Future directions of research are discussed and include the need to understand coping as a process, identification of coping in relation to adaptive outcomes, and considerations of more complex mediational and moderational processes in the study of coping with CSA. PMID:20161502

  5. Swahili women since the nineteenth century: theoretical and empirical considerations on gender and identity construction.

    PubMed

    Gower, R; Salm, S; Falola, T

    1996-01-01

    This paper provides an analysis and update on the theoretical discussion about the link between gender and identity and uses a group of Swahili women in eastern Africa as an example of how this link works in practice. The first part of the study provides a brief overview of gender theory related to the terms "gender" and "identity." It is noted that gender is only one aspect of identity and that the concept of gender has undergone important changes such as the reconceptualization of the terms "sex" and "gender." The second part of the study synthesizes the experiences of Swahili women in the 19th century, when the convergence of gender and class was very important. The status of Muslim women is reviewed, and it is noted that even influential women practiced purdah and that all Swahili women experienced discrimination, which inhibited their opportunities for socioeconomic mobility. Slavery and concubinage were widespread during this period, and the participation of Islamic women in spirit possession cults was a way for women to express themselves culturally. The separation of men and women in Swahili culture led to the development of two distinct subcultures, which excluded women from most aspects of public life. The third part of the study looks at the experiences of Swahili women since the 19th century, both during and after the colonial period. It is shown that continuity exists in trends observed over a period of 200 years. For example, the mobility of Swahili women remains limited by Islam, but women do exert influence behind the scenes. It is concluded that the socioeconomic status of Swahili women has been shaped more by complex forces such as class, ethnicity, religion, and geographic area than by the oppression of Islam and colonialism. This study indicates that gender cannot be studied in isolation from other salient variables affecting identity. PMID:12292423

  6. Innovation in Information Technology: Theoretical and Empirical Study in SMQR Section of Export Import in Automotive Industry

    NASA Astrophysics Data System (ADS)

    Edi Nugroho Soebandrija, Khristian; Pratama, Yogi

    2014-03-01

This paper provides a theoretical and empirical study of innovation in information technology; both aspects relate to Shortage Mispacking Quality Report (SMQR) claims in export and import in the automotive industry. The paper discusses the major aspects of innovation, information technology, performance, and competitive advantage. The empirical study of PT. Astra Honda Motor (AHM) concerns SMQR claims, communication systems, and systems analysis and design. Both the major aspects and the empirical study are introduced briefly in the Introduction section and discussed in more detail in later sections of this paper, in particular in the Literature Review, in terms of classical and current references. The increase in SMQR claims and communication problems at PT. Astra Daihatsu Motor (PT. ADM), which still uses email, lengthens claim settlement times and ultimately causes SMQR claims to be rejected by suppliers. To address this problem, an integrated communication system was designed to manage the SMQR claim communication process between PT. ADM and its suppliers. The system as analyzed and designed is expected to streamline the claim communication process so that it runs according to procedure, meets claim settlement time targets, and eliminates the difficulties of the previous manual email-based communication. The design process followed the system development life cycle approach of Kendall & Kendall (2006), covering the SMQR problem communication process, the supplier judgment process, the claim process, the claim payment process, and the claim monitoring process. After suitable system designs for managing SMQR claims were obtained, the system was implemented and an improvement in claim communication could be observed.

  7. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  8. Theoretical and Empirical Underpinnings of the What Works Clearinghouse Attrition Standard for Randomized Controlled Trials

    ERIC Educational Resources Information Center

    Deke, John; Chiang, Hanley

    2014-01-01

    Meeting the What Works Clearinghouse (WWC) attrition standard (or one of the attrition standards based on the WWC standard) is now an important consideration for researchers conducting studies that could potentially be reviewed by the WWC (or other evidence reviews). Understanding the basis of this standard is valuable for anyone seeking to meet…

  9. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
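As a concrete illustration of the kind of model the article discusses, the sketch below implements a minimal Schelling-style segregation model, a classic sociological agent-based model (not one taken from this article); the grid size, vacancy rate, and tolerance threshold are all hypothetical:

```python
import random

def run_schelling(width=20, height=20, vacancy=0.1, threshold=0.3,
                  n_steps=50, seed=42):
    """Minimal Schelling-style segregation ABM: agents of two types move
    to random vacant cells whenever the fraction of like neighbors falls
    below `threshold`. Returns the mean like-neighbor fraction."""
    rng = random.Random(seed)
    cells = [None] * (width * height)
    for i in range(len(cells)):
        if rng.random() >= vacancy:
            cells[i] = rng.choice([0, 1])

    def neighbors(idx):
        x, y = idx % width, idx // width
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                yield cells[((y + dy) % height) * width + (x + dx) % width]

    def unhappy(idx):
        occ = [n for n in neighbors(idx) if n is not None]
        return bool(occ) and sum(n == cells[idx] for n in occ) / len(occ) < threshold

    for _ in range(n_steps):
        movers = [i for i in range(len(cells))
                  if cells[i] is not None and unhappy(i)]
        vacant = [i for i in range(len(cells)) if cells[i] is None]
        rng.shuffle(movers)
        for i in movers:
            if not vacant:
                break
            j = vacant.pop(rng.randrange(len(vacant)))
            cells[j], cells[i] = cells[i], None
            vacant.append(i)

    # Mean fraction of like neighbors: a simple segregation index.
    sims = []
    for i in range(len(cells)):
        if cells[i] is None:
            continue
        occ = [n for n in neighbors(i) if n is not None]
        if occ:
            sims.append(sum(n == cells[i] for n in occ) / len(occ))
    return sum(sims) / len(sims)
```

Even with a mild tolerance threshold of 0.3, the index rises well above the ~0.5 expected of a random layout, which is the standard illustration of macro-level patterns emerging from micro-level rules.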

  10. An empirically based electrosource horizon lead-acid battery model

    SciTech Connect

    Moore, S.; Eshani, M.

    1996-09-01

An empirically based mathematical model of a lead-acid battery for use in Texas A&M University's Electrically Peaking Hybrid (ELPH) computer simulation is presented. The battery model is intended to overcome intuitive difficulties with currently available models by employing direct relationships between state of charge, voltage, and power demand. The model input is the power demand or load. Model outputs include voltage, an instantaneous battery efficiency coefficient, and a state-of-charge indicator. A time- and current-dependent voltage hysteresis is employed to ensure correct voltage tracking under the highly transient loads of a hybrid electric drivetrain.
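The model itself is not reproduced in this record, but the general shape of such an empirically based model (direct relationships between state of charge, voltage, and power demand, plus a charge/discharge voltage hysteresis) can be sketched as below; every coefficient here is hypothetical, not taken from the paper:

```python
def battery_step(soc, p_demand, dt=1.0, capacity_wh=1200.0,
                 v_charge_offset=0.4):
    """One step of a hypothetical empirical lead-acid model.

    soc       -- state of charge in [0, 1]
    p_demand  -- power demand in W (negative = charging)
    Returns (new soc, terminal voltage, efficiency coefficient).
    """
    # Hypothetical empirical open-circuit voltage fit (V vs. SOC).
    v_oc = 11.6 + 1.3 * soc
    # Simple hysteresis: voltage sits higher while charging.
    v = v_oc + (v_charge_offset if p_demand < 0 else 0.0)
    # Hypothetical efficiency coefficient that falls off at high discharge power.
    eff = 1.0 / (1.0 + 0.0002 * max(p_demand, 0.0))
    # Update state of charge from energy drawn (Wh), penalized by efficiency.
    soc -= (p_demand * dt / 3600.0) / (capacity_wh * eff)
    return min(max(soc, 0.0), 1.0), v, eff
```

Chaining `battery_step` over a drive-cycle load trace gives the voltage, efficiency, and state-of-charge outputs the abstract describes, without any electrochemical internals.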

  11. Coping, acculturation, and psychological adaptation among migrants: a theoretical and empirical review and synthesis of the literature

    PubMed Central

    Kuo, Ben C.H.

    2014-01-01

Given the continuous, dynamic demographic changes internationally due to intensive worldwide migration and globalization, the need to more fully understand how migrants adapt and cope with acculturation experiences in their new host cultural environment is imperative and timely. However, a comprehensive review of what we currently know about the relationship between coping behavior and acculturation experience for individuals undergoing cultural changes has not yet been undertaken. Hence, the current article aims to compile, review, and examine cumulative cross-cultural psychological research that sheds light on the relationships among coping, acculturation, and psychological and mental health outcomes for migrants. To this end, the present article reviews prevailing literature pertaining to: (a) the stress and coping conceptual perspective of acculturation; (b) four theoretical models of coping, acculturation and cultural adaptation; (c) differential coping patterns among diverse acculturating migrant groups; and (d) the relationship between coping variability and acculturation levels among migrants. In terms of theoretical understanding, this review points to the relative strengths and limitations associated with each of the four theoretical models on coping-acculturation-adaptation. These theories and the empirical studies reviewed in this article further highlight the central role of coping behaviors/strategies in the acculturation process and outcome for migrants and ethnic populations, both conceptually and functionally. Moreover, the review shows that across studies culturally preferred coping patterns exist among acculturating migrants and migrant groups and vary with migrants' acculturation levels. Implications and limitations of the existing literature for coping, acculturation, and psychological adaptation research are discussed and recommendations for future research are put forth. PMID:25750766

  12. Dialectical behavior therapy for borderline personality disorder: theoretical and empirical foundations.

    PubMed

    Shearin, E N; Linehan, M M

    1994-01-01

    Dialectical behavior therapy (DBT) is a cognitive-behavioral psychotherapy developed by Linehan for parasuicidal patients with a diagnosis of borderline personality disorder (BPD). DBT is based on a biosocial theory that views BPD as primarily a dysfunction of the emotion regulation system. The treatment is organized around a hierarchy of behavioral goals that vary in different modes of therapy. In two randomized trials, DBT has shown superiority in reducing parasuicide, medical risk of parasuicides, number of hospital days, dropout from treatment and anger while improving social adjustment. Most gains were maintained through a 1-year follow-up. In one process study testing DBT theory, dialectical techniques balancing acceptance and change were more effective than pure change or acceptance techniques in reducing suicidal behavior. PMID:8010153

  13. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances

    PubMed Central

    Rönnberg, Jerker; Lunner, Thomas; Zekveld, Adriana; Sörqvist, Patrik; Danielsson, Henrik; Lyxell, Björn; Dahlström, Örjan; Signoret, Carine; Stenfelt, Stefan; Pichora-Fuller, M. Kathleen; Rudner, Mary

    2013-01-01

    Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made. PMID:23874273

  14. Periodic limb movements of sleep: empirical and theoretical evidence supporting objective at-home monitoring

    PubMed Central

    Moro, Marilyn; Goparaju, Balaji; Castillo, Jelina; Alameddine, Yvonne; Bianchi, Matt T

    2016-01-01

    Introduction Periodic limb movements of sleep (PLMS) may increase cardiovascular and cerebrovascular morbidity. However, most people with PLMS are either asymptomatic or have nonspecific symptoms. Therefore, predicting elevated PLMS in the absence of restless legs syndrome remains an important clinical challenge. Methods We undertook a retrospective analysis of demographic data, subjective symptoms, and objective polysomnography (PSG) findings in a clinical cohort with or without obstructive sleep apnea (OSA) from our laboratory (n=443 with OSA, n=209 without OSA). Correlation analysis and regression modeling were performed to determine predictors of periodic limb movement index (PLMI). Markov decision analysis with TreeAge software compared strategies to detect PLMS: in-laboratory PSG, at-home testing, and a clinical prediction tool based on the regression analysis. Results Elevated PLMI values (>15 per hour) were observed in >25% of patients. PLMI values in No-OSA patients correlated with age, sex, self-reported nocturnal leg jerks, restless legs syndrome symptoms, and hypertension. In OSA patients, PLMI correlated only with age and self-reported psychiatric medications. Regression models indicated only a modest predictive value of demographics, symptoms, and clinical history. Decision modeling suggests that at-home testing is favored as the pretest probability of PLMS increases, given plausible assumptions regarding PLMS morbidity, costs, and assumed benefits of pharmacological therapy. Conclusion Although elevated PLMI values were commonly observed, routinely acquired clinical information had only weak predictive utility. As the clinical importance of elevated PLMI continues to evolve, it is likely that objective measures such as PSG or at-home PLMS monitors will prove increasingly important for clinical and research endeavors. PMID:27540316
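The decision model itself is not given in this record, but the flavor of such a strategy comparison can be sketched as an expected-cost calculation per testing strategy as a function of the pretest probability; all costs, sensitivities, and specificities below are purely hypothetical placeholders, not values from the study:

```python
def expected_costs(p_plms, cost_psg=1000.0, cost_home=300.0,
                   sens_home=0.85, spec_home=0.90,
                   cost_missed=6000.0, cost_false_pos=500.0):
    """Expected cost of two hypothetical strategies for detecting
    elevated PLMS, as a function of the pretest probability p_plms."""
    # In-lab PSG treated here as an error-free (but costly) gold standard.
    cost_lab = cost_psg
    # At-home testing: cheaper per test, but misses some cases and raises
    # some false alarms, each with its own assumed downstream cost.
    cost_athome = (cost_home
                   + p_plms * (1.0 - sens_home) * cost_missed
                   + (1.0 - p_plms) * (1.0 - spec_home) * cost_false_pos)
    return cost_lab, cost_athome
```

Sweeping `p_plms` from 0 to 1 shows where the two cost curves cross; which side of the crossover favors which strategy depends entirely on the assumed morbidity, test, and treatment costs, which is exactly the sensitivity the abstract's decision analysis explores.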

  15. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable. PMID:18643131

  16. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
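The proposed process is easy to simulate; the sketch below (hypothetical sizes and parameters, Laplace body only, no power-law tails) draws growth rates with σ(S) = σ₀·S^(−β) and then recovers β by log-log regression, mirroring the empirical analysis:

```python
import numpy as np

def estimate_beta(n_units=2000, beta=0.14, sigma0=0.2, seed=0):
    """Simulate units whose logarithmic growth rate R is Laplace-distributed
    with standard deviation sigma(S) = sigma0 * S**(-beta), then recover
    beta by log-log regression of the per-bin std on unit size."""
    rng = np.random.default_rng(seed)
    sizes = 10.0 ** rng.uniform(0, 6, n_units)      # sizes over 6 decades
    sigma = sigma0 * sizes ** (-beta)
    # A Laplace distribution with scale b has standard deviation b*sqrt(2).
    rates = rng.laplace(0.0, sigma / np.sqrt(2.0), n_units)

    # Bin by log-size and regress log(std of R) on log(size).
    bins = np.digitize(np.log10(sizes), np.linspace(0, 6, 13))
    xs, ys = [], []
    for b in np.unique(bins):
        mask = bins == b
        if mask.sum() > 20:
            xs.append(np.log(sizes[mask]).mean())
            ys.append(np.log(rates[mask].std()))
    slope = np.polyfit(xs, ys, 1)[0]
    return -slope   # estimate of beta
```

With 2000 units the regression recovers the injected exponent to within a few hundredths, which is the same order of precision as the β ≈ 0.14 and β ≈ −0.08 estimates quoted in the abstract.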

  17. Comparison between empirical and physically based models of atmospheric correction

    NASA Astrophysics Data System (ADS)

    Mandanici, E.; Franci, F.; Bitelli, G.; Agapiou, A.; Alexakis, D.; Hadjimitsis, D. G.

    2015-06-01

A number of methods have been proposed for the atmospheric correction of multispectral satellite images, based either on atmosphere modelling or on the images themselves. Full radiative transfer models require extensive ancillary information about the atmospheric conditions at acquisition time, whereas image-based methods cannot account for all the phenomena involved. The aim of this paper is therefore to compare different atmospheric correction methods for multispectral satellite images. The experimentation was carried out on a study area located in the catchment of the Yialias river, 20 km south of Nicosia, the capital of Cyprus. The following models, both empirical and physically based, were tested: dark object subtraction, QUAC, empirical line, 6SV, and FLAASH. They were applied to a Landsat 8 multispectral image. The spectral signatures of ten different land cover types were measured during a field campaign in 2013, and 15 samples were collected for laboratory measurements in a second campaign in 2014. A GER 1500 spectroradiometer was used; this instrument records electromagnetic radiation from 350 to 1050 nm in 512 channels, each covering about 1.5 nm. The measured spectral signatures were used to simulate the reflectance values for the multispectral sensor bands by applying relative spectral response filters. These data were taken as ground truth to assess the accuracy of the different image correction models. The results do not establish which method is the most accurate overall: the physics-based methods describe the shape of the signatures better, whereas the image-based models perform better with regard to the overall albedo.
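Of the methods listed, dark object subtraction is the simplest to sketch. The toy implementation below (a generic version, not the exact variant used in the paper) subtracts a per-band dark value on the standard assumption that the darkest pixels carry essentially only atmospheric path radiance:

```python
import numpy as np

def dark_object_subtraction(bands, percentile=0.1):
    """Simplest image-based atmospheric correction: in each band, treat the
    signal at the darkest pixels as atmospheric path radiance and subtract
    it. `bands` has shape (n_bands, rows, cols)."""
    corrected = np.empty_like(bands, dtype=float)
    for i, band in enumerate(bands):
        dark = np.percentile(band, percentile)   # robust dark-object value
        corrected[i] = np.clip(band - dark, 0.0, None)
    return corrected
```

Using a low percentile rather than the strict minimum makes the dark-object estimate robust to isolated noisy pixels; physics-based codes such as 6SV or FLAASH replace this single offset with a full radiative transfer calculation.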

  18. Theoretical bases of radar (selected pages)

    NASA Astrophysics Data System (ADS)

    Shirman, Ya. D.; Golikov, V. N.; Busygin, I. N.; Kostin, G. A.; Manshos, V. N.

    1987-06-01

A textbook is presented for radio engineering departments of schools of higher education that prepare specialists in radar. Its distinguishing feature is the use of statistical methods of analysis as its single foundation. The principles of construction and the theory of optimum detection equipment in the presence of interference are given, and methods for obtaining radar information are examined, taking into account advances in the optimum processing of complex radar signals, the laws governing secondary radiation, and radiowave propagation. A large number of examples is given, allowing the reader to master the main questions of the theory and its application more rapidly.

  19. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
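The reduction described (isolated joint torque expressed as second-degree polynomials in position and velocity, fitted by least-squares regression) can be sketched on synthetic data; the coefficient values below are hypothetical, not the paper's measurements:

```python
import numpy as np

def fit_torque_surface(pos, vel, torque):
    """Least-squares fit of torque as a second-degree polynomial in joint
    position and velocity, in the spirit of the reduced strength model."""
    A = np.column_stack([np.ones_like(pos), pos, pos**2, vel, vel**2])
    coeffs, *_ = np.linalg.lstsq(A, torque, rcond=None)
    return coeffs

def predict_torque(coeffs, pos, vel):
    """Evaluate the fitted polynomial torque surface."""
    return (coeffs[0] + coeffs[1] * pos + coeffs[2] * pos**2
            + coeffs[3] * vel + coeffs[4] * vel**2)
```

Once tables of such coefficients exist for each joint and rotational plane, composite motions like the ratchet-wrench push/pull can be predicted by evaluating the fitted surfaces along the motion trajectory.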

  20. Developing an empirical base for clinical nurse specialist education.

    PubMed

    Stahl, Arleen M; Nardi, Deena; Lewandowski, Margaret A

    2008-01-01

    This article reports on the design of a clinical nurse specialist (CNS) education program using National Association of Clinical Nurse Specialists (NACNS) CNS competencies to guide CNS program clinical competency expectations and curriculum outcomes. The purpose is to contribute to the development of an empirical base for education and credentialing of CNSs. The NACNS CNS core competencies and practice competencies in all 3 spheres of influence guided the creation of clinical competency grids for this university's practicum courses. This project describes the development, testing, and application of these clinical competency grids that link the program's CNS clinical courses with the NACNS CNS competencies. These documents guide identification, tracking, measurement, and evaluation of the competencies throughout the clinical practice portion of the CNS program. This ongoing project will continue to provide data necessary to the benchmarking of CNS practice competencies, which is needed to evaluate the effectiveness of direct practice performance and the currency of graduate nursing education. PMID:18438164

  1. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    ERIC Educational Resources Information Center

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  2. Open-circuit sensitivity model based on empirical parameters for a capacitive-type MEMS acoustic sensor

    NASA Astrophysics Data System (ADS)

    Lee, Jaewoo; Jeon, J. H.; Je, C. H.; Lee, S. Q.; Yang, W. S.; Lee, S.-G.

    2016-03-01

An empirical-based open-circuit sensitivity model for a capacitive-type MEMS acoustic sensor is presented. To evaluate the open-circuit sensitivity intuitively, the empirically based model is proposed and analysed using a lumped spring-mass model and a pad test sample without a parallel-plate capacitor for the parasitic capacitance. The model is composed of three different parameter groups: empirical, theoretical, and mixed data. The empirical residual stress, extracted from the measured pull-in voltage of 16.7 V and the measured surface topology of the diaphragm, was +13 MPa, resulting in an effective spring constant of 110.9 N/m. The parasitic capacitance for the two probing pads, including the substrate part, was 0.25 pF. Furthermore, to verify the proposed model, the modelled open-circuit sensitivity was compared with the measured value. The MEMS acoustic sensor had an open-circuit sensitivity of -43.0 dBV/Pa at 1 kHz with a bias of 10 V, while the modelled open-circuit sensitivity was -42.9 dBV/Pa, showing good agreement in the range from 100 Hz to 18 kHz. This validates the empirical-based open-circuit sensitivity model for designing capacitive-type MEMS acoustic sensors.
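The abstract quotes a measured pull-in voltage of 16.7 V and an effective spring constant of 110.9 N/m. The textbook parallel-plate pull-in relation connects such quantities; the sketch below uses that generic relation only (the paper's actual extraction also uses the measured surface topology), and the gap and electrode area are hypothetical:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def spring_constant_from_pull_in(v_pullin, gap, area):
    """Effective spring constant from the textbook parallel-plate pull-in
    relation V_pi = sqrt(8*k*g**3 / (27*eps0*A)), solved for k."""
    return 27.0 * EPS0 * area * v_pullin**2 / (8.0 * gap**3)

# Hypothetical geometry: 3 um gap, 1 mm^2 electrode area.
k_eff = spring_constant_from_pull_in(16.7, gap=3e-6, area=1e-6)
```

With this hypothetical geometry the relation yields a spring constant of a few hundred N/m, i.e. the right order of magnitude; matching the paper's 110.9 N/m would require its actual diaphragm dimensions.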

  3. Noise cancellation in IR video based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio

    2013-05-01

Currently there is a huge demand for simple low-cost IR cameras for both civil and military applications, among the most common of which is the surveillance of restricted-access zones. The design of a low-cost IR camera must avoid several elements present in more sophisticated cameras, such as refrigeration systems, temperature control of the detectors, and a mechanical modulator of the incident radiation (chopper). Consequently, the detection algorithms must reliably separate the target signal from the high noise and drift caused by temporal variations of the background image of the scene and by the thermal instability of the detectors. A very important step towards this goal is the design of a preprocessing stage to eliminate noise, and in this work we propose using the Empirical Mode Decomposition (EMD) method to attain this objective. To evaluate the quality of the reconstructed clean signal, the average-to-peak ratio is used to assess how effectively the waveform of the target signal is reconstructed. We compare the EMD method with a classical noise-cancellation method based on the Discrete Wavelet Transform (DWT). Simulation results show that the proposed EMD-based scheme performs better than traditional ones.
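The paper's full pipeline is not reproduced here, but the core of EMD (sifting a signal against cubic-spline envelopes of its extrema to extract intrinsic mode functions) can be sketched with NumPy/SciPy; subtracting the first, highest-frequency IMF then acts as a crude denoiser. This is a bare sketch without the boundary handling and stopping criteria a production EMD needs:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One sifting pass: subtract the mean of the cubic-spline envelopes
    through the local maxima and minima."""
    t = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None  # too few extrema to build envelopes
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - (upper + lower) / 2.0

def emd_first_imf(x, n_sifts=10):
    """Extract (approximately) the first intrinsic mode function."""
    h = np.asarray(x, dtype=float).copy()
    for _ in range(n_sifts):
        nh = sift_once(h)
        if nh is None:
            break
        h = nh
    return h
```

For a slowly varying target signal buried in wideband noise, `x - emd_first_imf(x)` removes mostly noise, since the fastest oscillatory component is dominated by the noise floor.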

  4. Empirically and theoretically determined spatial and temporal variability of the Late Holocene sea level in the South-Central Pacific (Invited)

    NASA Astrophysics Data System (ADS)

    Eisenhauer, A.; Rashid, R. J.; Hallmann, N.; Stocchi, P.; Fietzke, J.; Camoin, G.; Vella, C.; Samankassou, E.

    2013-12-01

We present U/Th-dated fossil corals collected from reef platforms on three islands (Moorea, Huahine and Bora Bora) of the Society Islands, French Polynesia. In particular, U/Th-dated fossil microatolls precisely constrain the timing and amplitude of sea-level variations at and after the 'Holocene Sea Level Maximum, HSLM', because microatolls grow close to or even directly at the current sea-level position. We found that sea level reached a subsidence-corrected position of at least ~1.5 m above present sea level (apsl) at ~5.4 ka before present (BP) relative to Huahine and a maximum amplitude of at least ~2.0 m apsl at ~2.0 ka BP relative to Moorea. Between 5.4 and 2 ka, minimum sea level oscillated between 1.5 and 2 m apsl for ~3 ka, but then declined to the present position after ~2 ka BP. Based on statistical arguments on the coral age distribution, the HSLM is constrained to an interval of 3.5 ± 0.8 ka. Earlier studies, in general accord with our data, show that sea level in French Polynesia was ~1 m higher than present between 5,000 and 1,250 yrs BP, that a highstand was reached between 2,000 and 1,500 yrs BP (Pirazzoli and Montaggioni, 1988), and that it persisted until 1,200 yrs BP in the Tuamotu Archipelago (Pirazzoli and Montaggioni, 1986). Modeling of the Late Holocene sea-level rise performed in the course of this study, taking glacio-isostatic adjustment and the ocean siphoning effect into account, predicts a Late Holocene sea-level highstand of ~1 m apsl at ~4 ka BP for Bora Bora, in general agreement with the statistical interpretation of our empirical data. However, the modeled HSLM amplitude of ~1 m apsl is considerably smaller than indicated by the empirical data, which suggest amplitudes of more than 2 m. Furthermore, the theoretical model predicts a continuously falling sea level after ~4 ka to the present. This is in contrast to the empirical data, which indicate a sea level remaining above at least ~1 m apsl between 5 ka and 2 ka, then followed by a certain

  5. Painting by numbers: nanoparticle-based colorants in the post-empirical age.

    PubMed

    Klupp Taylor, Robin N; Seifrt, Frantisek; Zhuromskyy, Oleksandr; Peschel, Ulf; Leugering, Günter; Peukert, Wolfgang

    2011-06-17

    The visual appearance of the artificial world is largely governed by films or composites containing particles with at least one dimension smaller than a micron. Over the past century and a half, the optical properties of such materials have been scrutinized and a broad range of colorant products, based mostly on empirical microstructural improvements, developed. With the advent of advanced synthetic approaches capable of tailoring particle shape, size and composition on the nanoscale, the question of what is the optimum particle for a certain optical property can no longer be answered solely by experimentation. Instead, new and improved computational approaches are required to invert the structure-function relationship. This progress report reviews the development in our understanding of this relationship and indicates recent examples of how theoretical design is taking an ever increasingly important role in the search for enhanced or multifunctional colorants. PMID:21538592

  6. Activity Theory as a Theoretical Framework for Health Self-Quantification: A Systematic Review of Empirical Studies

    PubMed Central

    2016-01-01

    Background Self-quantification (SQ) is a way of working in which, by using tracking tools, people aim to collect, manage, and reflect on personal health data to gain a better understanding of their own body, health behavior, and interaction with the world around them. However, health SQ lacks a formal framework for describing the self-quantifiers’ activities and their contextual components or constructs to pursue these health related goals. Establishing such framework is important because it is the first step to operationalize health SQ fully. This may in turn help to achieve the aims of health professionals and researchers who seek to make or study changes in the self-quantifiers’ health systematically. Objective The aim of this study was to review studies on health SQ in order to answer the following questions: What are the general features of the work and the particular activities that self-quantifiers perform to achieve their health objectives? What constructs of health SQ have been identified in the scientific literature? How have these studies described such constructs? How would it be possible to model these constructs theoretically to characterize the work of health SQ? Methods A systematic review of peer-reviewed literature was conducted. A total of 26 empirical studies were included. The content of these studies was thematically analyzed using Activity Theory as an organizing framework. Results The literature provided varying descriptions of health SQ as data-driven and objective-oriented work mediated by SQ tools. From the literature, we identified two types of SQ work: work on data (ie, data management activities) and work with data (ie, health management activities). Using Activity Theory, these activities could be characterized into 6 constructs: users, tracking tools, health objectives, division of work, community or group setting, and SQ plan and rules. We could not find a reference to any single study that accounted for all these activities and

  7. Methods for combining a theoretical and an empirical approach in modelling pressure and flow control valves for CAE-programs for fluid power circuits

    NASA Astrophysics Data System (ADS)

    Handroos, Heikki

An analytical mathematical model of a fluid power valve uses equations based on physical laws. Its parameters consist of physical coefficients, dimensions of the internal elements, spring constants, etc., which are not provided by the component manufacturers; the valve has to be dismantled to determine their values. Moreover, such a model matches only a particular type of valve construction, and it has a large number of parameters. This is a major common problem in computer-aided engineering (CAE) programs for fluid power circuits. Methods for solving this problem by combining a theoretical and an empirical approach are presented. Analytical models for single-stage pressure and flow control valves are brought into forms that contain fewer parameters, whose values can be determined from measured characteristic curves. The least-squares criterion is employed to identify the parameter values describing the steady state of a valve; the steady-state characteristic curves required for this identification are quite often provided by the manufacturers. The parameters describing the dynamics of a valve are determined with a simple noncomputational method based on easily measured dynamic characteristic curves. The importance of identification accuracy for the different parameters of the single-stage pressure relief valve model is compared using a parameter sensitivity analysis, and a new comparison method, called the relative mean value criterion, is used to compare the influence of variations in the different parameters on a nominal dynamic response.
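As a toy version of the steady-state identification step, the sketch below fits a linearized relief-valve characteristic q = K·(p − p_set) above the cracking pressure by least squares; the model form and all numbers are illustrative placeholders, not those derived in the paper:

```python
import numpy as np

def identify_relief_valve(p, q):
    """Least-squares identification of gain K and cracking pressure p_set
    for the linearized steady-state model q = K*(p - p_set), valid for
    p > p_set, from a measured characteristic curve."""
    flowing = q > 0                      # use only the flowing branch
    slope, intercept = np.polyfit(p[flowing], q[flowing], 1)
    return slope, -intercept / slope     # K, p_set

# Hypothetical "measured" curve: cracking pressure 10 MPa, gain 2e-9.
p_meas = np.linspace(0.0, 25e6, 50)
q_meas = np.clip(2e-9 * (p_meas - 10e6), 0.0, None)
K, p_set = identify_relief_valve(p_meas, q_meas)
```

This mirrors the paper's idea: the reduced model has only parameters that a manufacturer's published characteristic curve can pin down, with no need to dismantle the valve.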

  8. Evidence-based ethics? On evidence-based practice and the "empirical turn" from normative bioethics

    PubMed Central

    Goldenberg, Maya J

    2005-01-01

    Background The increase in empirical methods of research in bioethics over the last two decades is typically perceived as a welcome broadening of the discipline, with increased integration of social and life scientists into the field and ethics consultants into the clinical setting; however, it also represents a loss of confidence in the typical normative and analytic methods of bioethics. Discussion The recent incipiency of "Evidence-Based Ethics" attests to this phenomenon and should be rejected as a solution to the current ambivalence toward the normative resolution of moral problems in a pluralistic society. While "evidence-based" is typically read in medicine and other life and social sciences as the empirically adequate standard of reasonable practice and a means for increasing certainty, I propose that the evidence-based movement in fact gains consensus by displacing normative discourse with aggregate or statistically derived empirical evidence as the "bottom line". Therefore, along with wavering on the fact/value distinction, evidence-based ethics threatens bioethics' normative mandate. The appeal of the evidence-based approach is that it offers a means of negotiating the demands of moral pluralism. Rather than appealing to explicit values that are likely not shared by all, "the evidence" is proposed to adjudicate between competing claims. Quantified measures are notably more "neutral" and democratic than liberal markers like "species normal functioning". Yet the positivist notion that claims stand or fall in light of the evidence is untenable; furthermore, the legacy of positivism entails the quieting of empirically non-verifiable (or at least non-falsifiable) considerations like moral claims and judgments. As a result, evidence-based ethics proposes to operate unchecked, with the implicit normativity that accompanies the production and presentation of all biomedical and scientific facts. Summary The "empirical turn" in bioethics signals a need for

  9. A patient-centered pharmacy services model of HIV patient care in community pharmacy settings: a theoretical and empirical framework.

    PubMed

    Kibicho, Jennifer; Owczarzak, Jill

    2012-01-01

    Reflecting trends in health care delivery, pharmacy practice has shifted from a drug-specific to a patient-centered model of care, aimed at improving the quality of patient care and reducing health care costs. In this article, we outline a theoretical model of patient-centered pharmacy services (PCPS), based on in-depth, qualitative interviews with a purposive sample of 28 pharmacists providing care to HIV-infected patients in specialty, semispecialty, and nonspecialty pharmacy settings. Data analysis was an iterative process informed by pharmacists' interviews and a review of the general literature on patient-centered care, including Medication Therapy Management (MTM) services. Our main finding was that the current models of pharmacy services, including MTM, do not capture the range of pharmacy services provided in excess of mandated drug dispensing services. In this article, we propose a theoretical PCPS model that reflects the actual services pharmacists provide. The model includes five elements: (1) addressing patients as whole, contextualized persons; (2) customizing interventions to unique patient circumstances; (3) empowering patients to take responsibility for their own health care; (4) collaborating with clinical and nonclinical providers to address patient needs; and (5) developing sustained relationships with patients. The overarching goal of PCPS is to empower patients to take responsibility for their own health care and self-manage their HIV infection. Our findings provide the foundation for future studies regarding how widespread these practices are in diverse community settings, the validity of the proposed PCPS model, the potential for standardizing pharmacist practices, and the feasibility of a PCPS framework to reimburse pharmacists' services. PMID:22149903

  10. Theoretical bases for conducting certain technological processes in space

    NASA Technical Reports Server (NTRS)

    Okhotin, A. S.

    1979-01-01

    Dimensionless conservation equations are presented, along with the theoretical bases of fluid behavior aboard orbiting satellites, with application to the processes of manufacturing crystals in weightlessness. The small residual gravitational acceleration is shown to increase the separation of bands of varying concentration. Natural convection is shown to have no practical effect on crystallization from a liquid melt. Barodiffusion is also negligibly small in realistic conditions of weightlessness. The effects of surface tension become increasingly large, and suggestions are made for further research.

  11. Segmented Labor Markets: A Review of the Theoretical and Empirical Literature and Its Implication for Educational Planning.

    ERIC Educational Resources Information Center

    Carnoy, Martin

    The study reviews orthodox theories of labor markets, presents new formulations of segmentation theory, and provides empirical tests of segmentation in the United States and several developing nations. Orthodox labor market theory holds that labor is paid for its contribution to production and that investment in education and vocational training…

  12. E-learning in engineering education: a theoretical and empirical study of the Algerian higher education institution

    NASA Astrophysics Data System (ADS)

    Benchicou, Soraya; Aichouni, Mohamed; Nehari, Driss

    2010-06-01

    Technology-mediated education or e-learning is growing globally both in scale and delivery capacity due to the wide diffusion of ubiquitous information and communication technologies (ICT) in general and web technologies in particular. This statement has not yet been fully supported by research, especially in developing countries such as Algeria. The purpose of this paper was to identify directions for addressing the needs of academics in higher education institutions in Algeria in order to adopt the e-learning approach as a strategy to improve the quality of education. The paper reports results of an empirical study that measures the readiness of the Algerian higher education institutions towards the implementation of ICT in the educational process and the attitudes of faculty members towards the application of the e-learning approach in engineering education. Three main objectives were targeted, namely: (a) to provide an initial evaluation of faculty members' attitudes and perceptions towards web-based education; (b) to report on their perceived requirements for implementing e-learning in university courses; (c) to provide an initial input for a collaborative process of developing an institutional strategy for e-learning. Statistical analysis of the survey results indicates that the Algerian higher education institution, which adopted the Licence-Master-Doctorate educational system, is facing a big challenge to take advantage of emerging technological innovations and the advent of e-learning to further develop its teaching programmes and to enhance the quality of education in engineering fields. The successful implementation of this modern approach is shown to depend largely on a set of critical success factors that would include: 1. The extent to which the institution will adopt a formal and official e-learning strategy. 2. The extent to which faculty members will adhere to and adopt this strategy and develop ownership of the various measures in the

  13. A Physically Based Theoretical Model of Spore Deposition for Predicting Spread of Plant Diseases.

    PubMed

    Isard, Scott A; Chamecki, Marcelo

    2016-03-01

    A physically based theory for predicting spore deposition downwind from an area source of inoculum is presented. The modeling framework is based on theories of turbulence dispersion in the atmospheric boundary layer and applies only to spores that escape from plant canopies. A "disease resistance" coefficient is introduced to convert the theoretical spore deposition model into a simple tool for predicting disease spread at the field scale. Results from the model agree well with published measurements of Uromyces phaseoli spore deposition and measurements of wheat leaf rust disease severity. The theoretical model has the advantage over empirical models in that it can be used to assess the influence of source distribution and geometry, spore characteristics, and meteorological conditions on spore deposition and disease spread. The modeling framework is refined to predict the detailed two-dimensional spatial pattern of disease spread from an infection focus. Accounting for the time variations of wind speed and direction in the refined modeling procedure improves predictions, especially near the inoculum source, and enables application of the theoretical modeling framework to field experiment design. PMID:26595112

  14. An empirical/theoretical model with dimensionless numbers to predict the performance of electrodialysis systems on the basis of operating conditions.

    PubMed

    Karimi, Leila; Ghassemi, Abbas

    2016-07-01

    Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. PMID:27108213

  15. A theoretical and empirical investigation into the willingness-to-pay function for new innovative drugs by Germany's health technology assessment agency (IQWiG).

    PubMed

    Gandjour, Afschin

    2013-11-01

    Under the recently enacted pharmaceutical price and reimbursement regulation in Germany, new drugs are subject to a rapid assessment to determine whether there is sufficient evidence of added clinical benefit compared with the existing standard of treatment. If such added benefit is confirmed, manufacturers and representatives of the Statutory Health Insurance (SHI) are expected to negotiate an appropriate reimbursement price. If the parties fail to reach an agreement, a final decision on the reimbursement price will be made by an arbitration body. If one of the parties involved so wishes, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, IQWiG) will be commissioned with a formal evaluation of the costs and benefits of the product in question. IQWiG will make a recommendation for a reimbursement price based on the 'efficiency frontier' in a therapeutic area. The purpose of the assessments is to provide support for decision-making bodies that act on behalf of the SHI insurants. To determine the willingness to pay for new drugs, IQWiG uses the following decision rule: the incremental cost-effectiveness ratio of a new drug compared with the next effective intervention should not be higher than that of the next effective intervention compared with its comparator. The purpose of this paper was to investigate the theoretical and empirical relationship between the willingness to pay for drugs and their health benefits. The analysis shows that across disease areas IQWiG has a curvilinear relationship between willingness to pay and health benefits. Future research may address the validity of the willingness-to-pay function from the viewpoint of the individual SHI insurants. PMID:25595007
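The decision rule quoted in the abstract can be made concrete with a small numeric sketch: the ICER of the new drug over the next most effective intervention must not exceed the ICER of that intervention over its own comparator. All intervention names, costs, and benefit values below are invented for illustration; only the rule itself comes from the abstract:

```python
def icer(a, b):
    """Incremental cost-effectiveness ratio of intervention a over intervention b."""
    return (a["cost"] - b["cost"]) / (a["benefit"] - b["benefit"])

# Hypothetical efficiency-frontier neighbors in one therapeutic area
comparator = {"name": "standard care", "cost": 1000.0, "benefit": 2.0}
current    = {"name": "current drug",  "cost": 3000.0, "benefit": 4.0}
new_drug   = {"name": "new drug",      "cost": 6500.0, "benefit": 7.0}

ceiling = icer(current, comparator)   # willingness to pay implied by the frontier
proposed = icer(new_drug, current)    # ICER of the new drug

print(f"implied ceiling: {ceiling:.0f} per benefit unit, proposed: {proposed:.0f}")
print("acceptable" if proposed <= ceiling else "price above implied willingness to pay")
```

With these numbers the implied ceiling is 1000 per benefit unit while the new drug's ICER is about 1167, so its price would exceed the willingness to pay implied by the frontier.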

  16. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    erroneous assumptions and do not solve the very fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present-day geological world that such fundamental issues are overlooked. The only reason one can point to is the so-called "rationality" of today's society. Simple "common sense" leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial in the traditional exact sciences like physics and mathematics and in applied sciences like engineering. It is not, however, trivial in geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be filled by the theoretical/inductive approach and cannot be filled by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  17. Accuracy of Population Validity and Cross-Validity Estimation: An Empirical Comparison of Formula-Based, Traditional Empirical, and Equal Weights Procedures.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Bilgic, Reyhan; Edwards, Jack E.; Fleer, Paul F.

    1999-01-01

    Performed an empirical Monte Carlo study using predictor and criterion data from 84,808 U.S. Air Force enlistees. Compared formula-based, traditional empirical, and equal-weights procedures. Discusses issues for basic research on validation and cross-validation. (SLD)

  18. Time Domain Strain/Stress Reconstruction Based on Empirical Mode Decomposition: Numerical Study and Experimental Validation.

    PubMed

    He, Jingjing; Zhou, Yibin; Guan, Xuefei; Zhang, Wei; Zhang, Weifang; Liu, Yongming

    2016-01-01

    Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for promptly reconstructing strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured by a usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Two numerical examples (a two-span beam and a 19,956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam of the kind commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided. PMID:27537889
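The extrapolation step of the method can be sketched as follows, with empirical mode decomposition assumed to have already separated the measured signal into modal responses. The mode shapes, transformation factors, and signal values are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Sketch of the extrapolation step only: each modal response measured at a
# remote location is scaled by a finite-element-derived transformation factor
# (the mode-shape ratio between the critical and measured locations), and the
# scaled modes are summed to reconstruct the hot-spot response.

t = np.linspace(0.0, 1.0, 500)
# Two modal responses at the measured location (as EMD would return them)
mode1 = 100.0 * np.sin(2 * np.pi * 5 * t)    # microstrain, first bending mode
mode2 = 30.0 * np.sin(2 * np.pi * 23 * t)    # microstrain, second mode

# Hypothetical FE transformation factors: modal strain at the critical hot
# spot divided by modal strain at the sensor location, one factor per mode.
T = np.array([1.8, 0.6])

strain_hot_spot = T[0] * mode1 + T[1] * mode2
print("peak reconstructed hot-spot strain:", strain_hot_spot.max())
```

The key point is that the transformation is applied per mode, not to the raw signal, which is why the decomposition step matters.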

  19. The Importance of Emotion in Theories of Motivation: Empirical, Methodological, and Theoretical Considerations from a Goal Theory Perspective

    ERIC Educational Resources Information Center

    Turner, Julianne C.; Meyer, Debra K.; Schweinle, Amy

    2003-01-01

    Despite its importance to educational psychology, prominent theories of motivation have mostly ignored emotion. In this paper, we review theoretical conceptions of the relation between motivation and emotion and discuss the role of emotion in understanding student motivation in classrooms. We demonstrate that emotion is one of the best indicators…

  20. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  1. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014
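One way to realize the idea in the abstract is a minimal-spanning-tree estimate in the style of the Friedman-Rafsky statistic: edges joining points from different classes measure the overlap of the two distributions. This is a sketch of my reading of the approach; the exact estimator and the constants in the paper's bounds may differ:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

# Build an MST over the pooled two-class sample, count edges that join points
# from different classes, and convert the count into a divergence estimate
# that bounds the Bayes error rate (equal priors assumed).

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))   # class 0
y = rng.normal(4.0, 1.0, size=(200, 2))   # class 1, well separated

pooled = np.vstack([x, y])
labels = np.array([0] * len(x) + [1] * len(y))

mst = minimum_spanning_tree(cdist(pooled, pooled)).tocoo()
cross = sum(labels[i] != labels[j] for i, j in zip(mst.row, mst.col))

m, n = len(x), len(y)
div = 1.0 - cross * (m + n) / (2.0 * m * n)   # Friedman-Rafsky-style estimate
lower = 0.5 - 0.5 * np.sqrt(max(div, 0.0))    # assumed bound forms, equal priors
upper = 0.5 - 0.5 * max(div, 0.0)
print(f"divergence = {div:.3f}, Bayes error in [{lower:.3f}, {upper:.3f}]")
```

For well-separated classes the MST contains few cross-class edges, the divergence estimate approaches 1, and both error bounds approach 0.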

  2. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience.

    PubMed

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O'Byrne, David

    2015-05-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research. PMID:26601176

  3. Why resilience is unappealing to social science: Theoretical and empirical investigations of the scientific use of resilience

    PubMed Central

    Olsson, Lennart; Jerneck, Anne; Thoren, Henrik; Persson, Johannes; O’Byrne, David

    2015-01-01

    Resilience is often promoted as a boundary concept to integrate the social and natural dimensions of sustainability. However, it is a troubled dialogue from which social scientists may feel detached. To explain this, we first scrutinize the meanings, attributes, and uses of resilience in ecology and elsewhere to construct a typology of definitions. Second, we analyze core concepts and principles in resilience theory that cause disciplinary tensions between the social and natural sciences (system ontology, system boundary, equilibria and thresholds, feedback mechanisms, self-organization, and function). Third, we provide empirical evidence of the asymmetry in the use of resilience theory in ecology and environmental sciences compared to five relevant social science disciplines. Fourth, we contrast the unification ambition in resilience theory with methodological pluralism. Throughout, we develop the argument that incommensurability and unification constrain the interdisciplinary dialogue, whereas pluralism drawing on core social scientific concepts would better facilitate integrated sustainability research. PMID:26601176

  4. Empirical Analysis and Refinement of Expert System Knowledge Bases

    PubMed Central

    Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen

    1986-01-01

    Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.

  5. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
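The surface-fitting step can be illustrated with a small sketch. The stand-in "lookup table" below is a synthetic function rather than the paper's Monte Carlo data, and the quadratic form is an assumed example, not the paper's actual empirical formula:

```python
import numpy as np

# Fit a low-order polynomial surface R(mu_a, mu_s') to tabulated reflectance
# values by linear least squares, mimicking the lookup-table fitting step.

mu_a = np.linspace(0.1, 1.0, 10)    # absorption coefficient, 1/mm (synthetic grid)
mu_s = np.linspace(5.0, 20.0, 10)   # reduced scattering coefficient, 1/mm
A, S = np.meshgrid(mu_a, mu_s)

# Stand-in "lookup table": reflectance rises with scattering, falls with absorption
R = S / (S + 10.0 * A)

# Design matrix for a quadratic surface R = c0 + c1*A + c2*S + c3*A^2 + c4*S^2 + c5*A*S
a, s, r = A.ravel(), S.ravel(), R.ravel()
X = np.column_stack([np.ones_like(a), a, s, a**2, s**2, a * s])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)

resid = r - X @ coef
print("max relative error of fitted surface:", np.max(np.abs(resid) / r))
```

Once the coefficients are fitted, evaluating the closed-form surface replaces interpolation in the full lookup table.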

  6. Fleet Fatality Risk and its Sensitivity to Vehicle Mass Change in Frontal Vehicle-to-Vehicle Crashes, Using a Combined Empirical and Theoretical Model.

    PubMed

    Shi, Yibing; Nusholtz, Guy S

    2015-11-01

    The objective of this study is to analytically model the fatality risk in frontal vehicle-to-vehicle crashes of the current vehicle fleet, and its sensitivity to vehicle mass change. A model is built upon an empirical risk ratio-mass ratio relationship from field data and a theoretical mass ratio-velocity change ratio relationship dictated by conservation of momentum. The fatality risk of each vehicle is averaged over the closing velocity distribution to arrive at the mean fatality risks. The risks of the two vehicles are summed and averaged over all possible crash partners to find the societal mean fatality risk associated with a subject vehicle of a given mass from a fleet specified by a mass distribution function. Based on the risk exponent and mass distribution of a recent fleet, the subject vehicle's mean fatality risk is shown to increase, while that for the partner vehicles decreases, as the mass of the subject vehicle decreases. The societal mean fatality risk, the sum of these, incurs a penalty with respect to a fleet with complete mass equality. This penalty reaches its minimum (~8% for the example fleet) for crashes with a subject vehicle whose mass is close to the fleet mean mass. The sensitivity, i.e., the rate of change of the societal mean fatality risk with respect to the mass of the subject vehicle, is assessed. Results from two sets of fully regression-based analyses, Kahane (2012) and Van Auken and Zellner (2013), are compared approximately with the current result. The general magnitudes of the results are comparable, but differences exist at a more detailed level. The subject vehicle-oriented societal mean fatality risk is averaged over all possible subject vehicle masses of a given fleet to obtain the overall mean fatality risk of the fleet. It is found to increase approximately linearly at a rate of about 0.8% for each 100 lb decrease in mass of all vehicles in the fleet. PMID:26660748
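The skeleton of the combined model can be sketched as follows. The power-law risk-ratio form follows the abstract's description, but the exponent, the fleet mass distribution, and the omission of the closing-velocity averaging are all illustrative simplifications:

```python
import numpy as np

# Conservation of momentum gives dv1/dv2 = m2/m1 for a two-vehicle crash, and
# the empirical part of the model maps the mass ratio to a fatality-risk
# ratio via a power law, risk1/risk2 = (m2/m1)**k. The exponent k and the
# fleet mass distribution below are illustrative, not the paper's values.

k = 3.0
fleet = np.random.default_rng(1).normal(1600.0, 250.0, 10_000)  # partner masses, kg

def societal_risk(m_subject, partners, k):
    """Mean of (subject risk + partner risk) over all crash partners,
    in arbitrary units: each vehicle's risk scales with the mass ratio."""
    ratio = partners / m_subject          # m2/m1
    return np.mean(ratio**k + ratio**(-k))

for m in (1300.0, 1600.0, 1900.0):
    print(f"subject mass {m:.0f} kg -> societal risk index {societal_risk(m, fleet, k):.2f}")
```

Consistent with the abstract, the societal risk index in this sketch is smallest when the subject vehicle's mass is near the fleet mean.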

  7. Deep in Data. Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    This paper describes progress toward developing a usable, standardized, empirical data-based software accuracy test suite using home energy consumption and building description data. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This could allow for modeling many homes expediently, and thus implementing software accuracy test cases by applying the translated data.

  8. Are prejudices against disabled persons determined by personality characteristics? Reviewing a theoretical approach on the basis of empirical research findings.

    PubMed

    Cloerkes, G

    1981-01-01

    Taking as a point of departure the results obtained from research on prejudice, many authors believe that the quality of attitudes toward disabled persons is influenced by the personality structure of the nondisabled. In order to verify this assumption, a secondary analysis of 67 empirical studies was undertaken. These studies referred to different personality variables such as authoritarianism, ethnocentrism, dogmatism, rigidity, intolerance of ambiguity, cognitive simplicity, anxiety, ego-weakness, self-concept, body-concept, aggressiveness, empathy, intelligence, etc. The results can be summarized as follows: Statistical criteria show that single personality traits have relatively little influence on attitudes toward disabled persons. An adequate evaluation of the research findings is complicated by, at times, considerable methodological problems which arise when applying the proper test instruments to non-clinical populations. Marked correlations are to be found in particular in the case of authoritarianism, ethnocentrism, intolerance of ambiguity, anxiety, and ego-weakness. The intercorrelations between most of the personality variables are, however, rather high, which by cumulation of "extreme" factors may, in fact, sometimes result in particularly unfavorable attitudes toward the disabled. Thus, personality-related research findings do provide certain valuable explanations. Special attention should be devoted to the multiple connections between personality structure and social structure. PMID:6452419

  9. The Demand for Cigarettes as Derived from the Demand for Weight Loss: A Theoretical and Empirical Investigation.

    PubMed

    Cawley, John; Dragone, Davide; Von Hinke Kessler Scholder, Stephanie

    2016-01-01

    This paper offers an economic model of smoking and body weight and provides new empirical evidence on the extent to which the demand for cigarettes is derived from the demand for weight loss. In the model, smoking causes weight loss in addition to having direct utility benefits and direct health consequences. It predicts that some individuals smoke for weight loss and that the practice is more common among those who consider themselves overweight and those who experience greater disutility from excess weight. We test these hypotheses using nationally representative data in which adolescents are directly asked whether they smoke to control their weight. We find that, among teenagers who smoke frequently, 46% of girls and 30% of boys are smoking in part to control their weight. As predicted by the model, this practice is significantly more common among those who describe themselves as too fat and among groups that tend to experience greater disutility from obesity. We conclude by discussing the implications of these findings for tax policy; specifically, the demand for cigarettes is less price elastic among those who smoke for weight loss, all else being equal. Public health efforts to reduce smoking initiation and encourage cessation may wish to design campaigns to alter the derived nature of cigarette demand, especially among adolescent girls. PMID:25346511

  10. Theoretical investigation of graphene-based photonic modulators

    PubMed Central

    Gosciniak, Jacek; Tan, Dawn T. H.

    2013-01-01

    Integration of electronics and photonics for future applications requires an efficient conversion of electrical to optical signals. The excellent electronic and photonic properties of graphene make it a suitable material for integrated systems with extremely wide operational bandwidth. In this paper, we analyze a novel modulator geometry based on the rib photonic waveguide configuration, with double-layer graphene placed between the slab and the ridge. A theoretical analysis of the graphene-based electro-absorption modulator was performed, showing that 3 dB modulation with a ~600 nm-long waveguide is possible, resulting in an energy cost below 1 fJ/bit. The optical bandwidth of such modulators exceeds 12 THz, with an operation speed ranging from 160 GHz to 850 GHz, limited only by graphene resistance. The performance of the modulators was evaluated based on a figure of merit defined as the ratio between extinction ratio and insertion losses, which was found to exceed 220. PMID:23719514

  11. Theoretical detection ranges for acoustic based manatee avoidance technology.

    PubMed

    Phillips, Richard; Niezrecki, Christopher; Beusse, Diedrich O

    2006-07-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. To reduce the number of collisions, warning systems based upon detecting manatee vocalizations have been proposed. One aspect of the feasibility of an acoustically based warning system relies upon the distance at which a manatee vocalization is detectable. Assuming a mixed spreading model, this paper presents a theoretical analysis of the system detection capabilities operating within various background and watercraft noise conditions. This study combines measured source levels of manatee vocalizations with the modeled acoustic properties of manatee habitats to develop a method for determining the detection range and hydrophone spacing requirements for acoustic-based manatee avoidance technologies. In quiet environments (background noise approximately 70 dB), it was estimated that manatee vocalizations are detectable at approximately 250 m with a 6 dB detection threshold. In louder environments (background noise approximately 100 dB), the detection range drops to 2.5 m. In a habitat with 90 dB of background noise, a passing boat with a maximum noise floor of 120 dB would be the limiting factor when it is within approximately 100 m of a hydrophone. The detection range was also found to be strongly dependent on the manatee vocalization source level. PMID:16875213
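
    The sonar-equation reasoning behind these detection ranges can be sketched numerically. The snippet below assumes a mixed-spreading transmission loss of 15 log10(r) and a hypothetical source level of 112 dB (the paper's measured source levels are not reproduced here); under those assumptions it recovers ranges close to the ~250 m and ~2.5 m figures quoted above.

```python
def detection_range(source_level_db, noise_level_db, threshold_db, k=15.0):
    """Largest range r (metres) at which a call is detectable, assuming a
    mixed-spreading transmission loss TL = k * log10(r) and the passive
    sonar condition SL - TL >= NL + DT."""
    excess_db = source_level_db - noise_level_db - threshold_db
    if excess_db <= 0:
        return 0.0  # undetectable even at close range
    return 10 ** (excess_db / k)

# Illustrative numbers only: SL = 112 dB is an assumed source level.
quiet = detection_range(source_level_db=112, noise_level_db=70, threshold_db=6)
loud = detection_range(source_level_db=112, noise_level_db=100, threshold_db=6)
# quiet comes out near 250 m, loud near 2.5 m, matching the abstract's figures.
```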

  12. Genetic load, inbreeding depression, and hybrid vigor covary with population size: An empirical evaluation of theoretical predictions.

    PubMed

    Lohr, Jennifer N; Haag, Christoph R

    2015-12-01

    Reduced population size is thought to have strong consequences for evolutionary processes as it enhances the strength of genetic drift. In its interaction with selection, this is predicted to increase the genetic load, reduce inbreeding depression, and increase hybrid vigor, and in turn affect phenotypic evolution. Several of these predictions have been tested, but comprehensive studies controlling for confounding factors are scarce. Here, we show that populations of Daphnia magna, which vary strongly in genetic diversity, also differ in genetic load, inbreeding depression, and hybrid vigor in a way that strongly supports theoretical predictions. Inbreeding depression is positively correlated with genetic diversity (a proxy for Ne), and genetic load and hybrid vigor are negatively correlated with genetic diversity. These patterns remain significant after accounting for potential confounding factors and indicate that, in small populations, a large proportion of the segregation load is converted into fixed load. Overall, the results suggest that the nature of genetic variation for fitness-related traits differs strongly between large and small populations. This has large consequences for evolutionary processes in natural populations, such as selection on dispersal, breeding systems, ageing, and local adaptation. PMID:26497949

  13. Empirically Based School Interventions Targeted at Academic and Mental Health Functioning

    ERIC Educational Resources Information Center

    Hoagwood, Kimberly E.; Olin, S. Serene; Kerker, Bonnie D.; Kratochwill, Thomas R.; Crowe, Maura; Saka, Noa

    2007-01-01

    This review examines empirically based studies of school-based mental health interventions. The review identified 64 out of more than 2,000 articles published between 1990 and 2006 that met methodologically rigorous criteria for inclusion. Of these 64 articles, only 24 examined both mental health "and" educational outcomes. The majority of…

  14. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    ERIC Educational Resources Information Center

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops a 67-item instrument for measuring performance-based service quality, with a particular focus on the higher education sector. Scale reliability is confirmed using Cronbach's alpha.…

  15. Empirical and theoretical dosimetry in support of whole body radio frequency (RF) exposure in seated human volunteers at 220 MHz.

    PubMed

    Allen, Stewart J; Adair, Eleanor R; Mylacraine, Kevin S; Hurt, William; Ziriax, John

    2005-09-01

    This study reports the dosimetry performed to support an experiment that measured physiological responses of seated volunteer human subjects exposed to 220 MHz fields. Exposures were performed in an anechoic chamber which was designed to provide uniform fields for frequencies of 100 MHz or greater. A vertical half-wave dipole with a 90 degrees reflector was used to optimize the field at the subject's location. The vertically polarized E field was incident on the dorsal side of the phantoms and human volunteers. The dosimetry plan required measurement of stationary probe drift, field strengths as a function of distance, electric and magnetic field maps at 200, 225, and 250 cm from the dipole antenna, and specific absorption rate (SAR) measurements using a human phantom, as well as theoretical predictions of SAR with the finite difference time domain (FDTD) method. An NBS (National Bureau of Standards, now NIST, National Institute of Standards and Technology, Boulder, CO) 10 cm loop antenna was positioned 150 cm to the right, 100 cm above and 60 cm behind the subject (toward the transmitting antenna) and was read prior to each subject's exposure and at 5 min intervals during all RF exposures. Transmitter stability was determined by measuring plate voltage, plate current, screen voltage and grid voltage for the driver and final amplifiers before and at 5 min intervals throughout the RF exposures. These dosimetry measurements assured accurate and consistent exposures. FDTD calculations were used to determine SAR distribution in a seated human subject. This study reports the necessary dosimetry to precisely control exposure levels for studies of the physiological consequences of human volunteer exposures to 220 MHz. PMID:15931686

  16. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1996-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique.

  17. Organizing the public health-clinical health interface: theoretical bases.

    PubMed

    St-Pierre, Michèle; Reinharz, Daniel; Gauthier, Jacques-Bernard

    2006-01-01

    This article addresses the issue of the interface between public health and clinical health within the context of the search for networking approaches geared to a more integrated delivery of health services. The articulation of an operative interface is complicated by the fact that the definition of networking modalities involves complex intra- and interdisciplinary and intra- and interorganizational systems across which a new transversal dynamics of intervention practices and exchanges between service structures must be established. A better understanding of the situation is reached by shedding light on the rationale underlying the organizational methods that form the bases of the interface between these two sectors of activity. The Quebec experience demonstrates that neither the structural-functionalist approach, which emphasizes remodelling establishment structures and functions as determinants of integration, nor the structural-constructivist approach, which prioritizes distinct fields of practice in public health and clinical health, adequately serves the purpose of networking and integration. Consequently, a theoretical reframing is imperative. In this regard, structuration theory, which fosters the simultaneous study of methods of inter-structure coordination and inter-actor cooperation, paves the way for a better understanding of the situation and, in turn, to the emergence of new integration possibilities. PMID:16645802

  18. A new entropy based on a group-theoretical structure

    NASA Astrophysics Data System (ADS)

    Curado, Evaldo M. F.; Tempesta, Piergiulio; Tsallis, Constantino

    2016-03-01

    A multi-parametric version of the nonadditive entropy S_q is introduced. This new entropic form, denoted by S_{a,b,r}, possesses many interesting statistical properties, and it reduces to the entropy S_q for b = 0, a = r := 1 - q (hence to the Boltzmann-Gibbs entropy S_BG for b = 0, a = r → 0). The construction of the entropy S_{a,b,r} is based on a general group-theoretical approach recently proposed by one of us, Tempesta (2016). Indeed, essentially all the properties of this new entropy are obtained as a consequence of the existence of a rational group law, which expresses the structure of S_{a,b,r} with respect to the composition of statistically independent subsystems. Depending on the choice of the parameters, the entropy S_{a,b,r} can be used to cover a wide range of physical situations, in which the measure of the accessible phase space increases, say, exponentially with the number of particles N of the system, or even stabilizes, with increasing N, at a limiting value. This paves the way to the use of this entropy in contexts where the size of the phase space does not increase as fast as the number of its constituting particles (or subsystems) increases.
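
    For orientation, the limiting cases quoted above can be made explicit with the standard Tsallis form of S_q (a sketch only; the full multi-parametric expression for S_{a,b,r} is defined in the paper and not reproduced here):

```latex
S_q = k \, \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i = S_{BG}.
```

    Per the abstract, S_{a,b,r} reduces to S_q when b = 0 and a = r = 1 - q, so the further limit a = r → 0 (i.e., q → 1) recovers S_BG, consistently with the display above.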

  19. Polymer electrolyte membrane fuel cell fault diagnosis based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Damour, Cédric; Benne, Michel; Grondin-Perez, Brigitte; Bessafi, Miloud; Hissel, Daniel; Chabriat, Jean-Pierre

    2015-12-01

    A diagnosis tool for water management is relevant to improving the reliability and lifetime of polymer electrolyte membrane fuel cells (PEMFCs). This paper presents a novel signal-based diagnosis approach, based on Empirical Mode Decomposition (EMD), dedicated to PEMFCs. EMD is an empirical, intuitive, direct and adaptive signal processing method, without pre-determined basis functions. The proposed diagnosis approach relies on the decomposition of FC output voltage to detect and isolate flooding and drying faults. The low computational cost of EMD, the reduced number of required measurements, and the high accuracy of flooding and drying fault diagnosis make this approach a promising online diagnosis tool for PEMFC degraded-mode management.
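
    As a concrete illustration of the sifting idea behind EMD, here is a deliberately simplified sketch: it uses piecewise-linear envelopes and a fixed sifting count instead of the cubic-spline envelopes and stopping criteria of standard EMD, and it is not the paper's diagnosis pipeline. By construction, the extracted modes plus the residue sum back to the original signal.

```python
def _extrema(x):
    """Indices of strict local maxima and minima of a sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            maxima.append(i)
        elif x[i] < x[i - 1] and x[i] < x[i + 1]:
            minima.append(i)
    return maxima, minima

def _envelope(x, idx):
    """Piecewise-linear envelope through (idx, x[idx]), extended flat to the
    signal ends (a crude stand-in for the cubic splines of standard EMD)."""
    pts = [0] + idx + [len(x) - 1]
    vals = [x[idx[0]]] + [x[i] for i in idx] + [x[idx[-1]]]
    env = []
    for i in range(len(x)):
        for j in range(len(pts) - 1):
            if pts[j] <= i <= pts[j + 1]:
                span = pts[j + 1] - pts[j]
                t = (i - pts[j]) / span if span > 0 else 0.0
                env.append(vals[j] + t * (vals[j + 1] - vals[j]))
                break
    return env

def emd(x, max_imfs=4, max_sift=20):
    """Decompose x into intrinsic mode functions (IMFs) plus a residue."""
    imfs, residue = [], list(x)
    for _ in range(max_imfs):
        h = list(residue)
        for _ in range(max_sift):  # sifting: subtract the local envelope mean
            maxima, minima = _extrema(h)
            if len(maxima) < 2 or len(minima) < 2:
                break
            upper = _envelope(h, maxima)
            lower = _envelope(h, minima)
            h = [hi - (u + l) / 2 for hi, u, l in zip(h, upper, lower)]
        maxima, minima = _extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break  # residue is (near-)monotone: stop
        imfs.append(h)
        residue = [r - hi for r, hi in zip(residue, h)]
    return imfs, residue
```

    In a diagnosis setting, the fast IMFs would carry high-frequency voltage fluctuations while the residue tracks the slow drift associated with a degraded mode.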

  20. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.

    PubMed

    Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956
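
    A minimal sketch of the recognition half of this setup (the states, symbols, and strategy names below are hypothetical; in the paper the PFAs are learned from training traces rather than written by hand):

```python
import math

class PFA:
    """A probabilistic finite automaton with deterministic structure: from
    each state, each symbol has one successor and an emission probability."""
    def __init__(self, start, transitions):
        # transitions: {(state, symbol): (prob, next_state)}
        self.start = start
        self.transitions = transitions

    def log_likelihood(self, sequence):
        state, ll = self.start, 0.0
        for symbol in sequence:
            if (state, symbol) not in self.transitions:
                return float("-inf")  # trace impossible under this model
            prob, state = self.transitions[(state, symbol)]
            ll += math.log(prob)
        return ll

def recognize(models, sequence):
    """Behavioral recognition: pick the strategy whose PFA makes the
    observed action trace most probable."""
    return max(models, key=lambda name: models[name].log_likelihood(sequence))

# Two toy one-state strategies for a vacuuming agent:
# 'spiral' mostly turns (T), 'wall-follow' mostly goes straight (S).
spiral = PFA("q0", {("q0", "T"): (0.8, "q0"), ("q0", "S"): (0.2, "q0")})
wall = PFA("q0", {("q0", "T"): (0.2, "q0"), ("q0", "S"): (0.8, "q0")})
models = {"spiral": spiral, "wall-follow": wall}
```

    With these toy models, a turn-heavy trace such as "TTSTT" is attributed to the spiral strategy.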

  1. Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    2005-01-01

    A ground-based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength and polarity. Determination of the location of the lightning strike uses algorithms based on long-used techniques of triangulation. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general, the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids and with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers. Much of that area is covered by rain forests. Thus knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.
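
    The extrapolation step described above can be sketched as follows, assuming a generic 1/r geometric-dispersion term and an exponential attenuation length (illustrative values only; the network's actual calibration curve is not given here):

```python
import math

def source_strength(measured, distance_km, attenuation_length_km=1000.0):
    """Extrapolate the original signal strength from one site's measurement,
    inverting received = S0 * (1/r) * exp(-r/L): 1/r geometric dispersion
    plus exponential environmental attenuation with length scale L."""
    return measured * distance_km * math.exp(distance_km / attenuation_length_km)

def combine(estimates):
    """Average the per-site extrapolations into one event solution."""
    return sum(estimates) / len(estimates)
```

    If the propagation model is right, every site's extrapolation returns the same original strength; in practice the spread among sites indicates how well the assumed radial behavior fits.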

  2. Towards an Empirically Based Parametric Explosion Spectral Model

    SciTech Connect

    Ford, S R; Walter, W R; Ruppert, S; Matzel, E; Hauk, T; Gok, R

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.

  3. The Empirical Investigation of Perspective-Based Reading

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Green, Scott; Laitenberger, Oliver; Shull, Forrest; Sorumgard, Sivert; Zelkowitz, Marvin V.

    1995-01-01

    We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with Perspective-Based Reading (PBR), a particular reading technique for requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective (e.g., tester, developer, user). Our assumption is that the combination of different perspectives provides better coverage of the document than the same number of readers using their usual technique. To test the efficacy of PBR, we conducted two runs of a controlled experiment in the environment of the NASA GSFC Software Engineering Laboratory (SEL), using developers from that environment. The subjects read two types of documents, one generic in nature and the other from the NASA domain, using two reading techniques: PBR and their usual technique. The results from these experiments, as well as the experimental design, are presented and analyzed. Where there is a statistically significant distinction, PBR performs better than the subjects' usual technique. However, PBR appears to be more effective on the generic documents than on the NASA documents.

  4. An Empirically Based Error-Model for Radar Rainfall Estimates

    NASA Astrophysics Data System (ADS)

    Ciach, G. J.

    2004-05-01

    Mathematical modeling of the way radar rainfall (RR) approximates the physical truth is a prospective method to quantify the RR uncertainties. In this approach one can represent RR in the form of an "observation equation," that is, as a function of the corresponding true rainfall and a random error process. The error process describes the cumulative effect of all the sources of RR uncertainties. We present the results of our work on the identification and estimation of this relationship. They are based on the Level II reflectivity data from the WSR-88D radar in Tulsa, Oklahoma, and rainfall measurements from 23 surrounding Oklahoma Mesonet raingauges. Accumulation intervals from one hour to one day were analyzed using this sample. The raingauge accumulations were used as an approximation of the true rainfall in this study. The RR error-model that we explored is factorized into a deterministic distortion, which is a function of the true rainfall, and a multiplicative random error factor that is a positive-valued random variable. The distribution of the error factor depends on the true rainfall; however, its expectation in this representation is always equal to one (all the biases are modeled by the deterministic component). With this constraint, the deterministic distortion function can be defined as the conditional mean of RR conditioned on the true rainfall. We use nonparametric regression to estimate the deterministic distortion, and the variance and quantiles of the random error factor, as functions of the true rainfall. The results show that the deterministic distortion is a nonlinear function of the true rainfall that indicates systematic overestimation of weak rainfall and underestimation of strong rainfall (conditional bias). The standard deviation of the error factor is a decreasing function of the true rainfall that ranges from about 0.8 for weak rainfall to about 0.3 for strong rainfall.
For larger time-scales, both the deterministic distortion and the
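
    The observation-equation structure described above can be written down directly. The shapes of the distortion and of the error-factor standard deviation below are illustrative stand-ins chosen to mimic the reported behavior (overestimation of weak rain, underestimation of strong rain, error std falling from about 0.8 to about 0.3), not the fitted Tulsa curves:

```python
import random

def distortion(r):
    """Deterministic distortion h(r): the conditional mean of radar rainfall
    given true rainfall r (illustrative shape exhibiting conditional bias:
    h(r) > r for weak rain, h(r) < r for strong rain)."""
    return 1.4 * r ** 0.85

def error_std(r):
    """Std. dev. of the multiplicative error factor, decreasing with rain
    rate from 0.8 toward 0.3 (illustrative)."""
    return 0.8 - 0.5 * min(r / 20.0, 1.0)

def radar_rainfall(r, rng):
    """Observation equation RR = h(r) * eps, with eps approximately mean-one
    (Gaussian, clipped below at zero to keep the factor non-negative)."""
    eps = max(rng.gauss(1.0, error_std(r)), 0.0)
    return distortion(r) * eps

# Example draw from the synthetic model:
rng = random.Random(0)
sample = radar_rainfall(10.0, rng)
```

    Simulating many draws at a fixed true rain rate and averaging recovers h(r), which is exactly how the conditional-mean definition of the deterministic distortion operates.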

  5. Implementing community-based provider participation in research: an empirical study

    PubMed Central

    2012-01-01

    Background Since 2003, the United States National Institutes of Health (NIH) has sought to restructure the clinical research enterprise in the United States by promoting collaborative research partnerships between academically-based investigators and community-based physicians. By increasing community-based provider participation in research (CBPPR), the NIH seeks to advance the science of discovery by conducting research in clinical settings where most people get their care, and accelerate the translation of research results into everyday clinical practice. Although CBPPR is seen as a promising strategy for promoting the use of evidence-based clinical services in community practice settings, few empirical studies have examined the organizational factors that facilitate or hinder the implementation of CBPPR. The purpose of this study is to explore the organizational start-up and early implementation of CBPPR in community-based practice. Methods We used longitudinal, case study research methods and an organizational model of innovation implementation to theoretically guide our study. Our sample consisted of three community practice settings that recently joined the National Cancer Institute’s (NCI) Community Clinical Oncology Program (CCOP) in the United States. Data were gathered through site visits, telephone interviews, and archival documents from January 2008 to May 2011. Results The organizational model for innovation implementation was useful in identifying and investigating the organizational factors influencing start-up and early implementation of CBPPR in CCOP organizations. In general, the three CCOP organizations varied in the extent to which they achieved consistency in CBPPR over time and across physicians. All three CCOP organizations demonstrated mixed levels of organizational readiness for change. Hospital management support and resource availability were limited across CCOP organizations early on, although they improved in one CCOP organization

  6. An Empirical Investigation of a Theoretically Based Measure of Perceived Wellness

    ERIC Educational Resources Information Center

    Harari, Marc J.; Waehler, Charles A.; Rogers, James R.

    2005-01-01

    The Perceived Wellness Survey (PWS; T. Adams, 1995; T. Adams, J. Bezner, & M. Steinhardt, 1997) is a recently developed instrument intended to operationalize the comprehensive Perceived Wellness Model (T. Adams, J. Bezner, & M. Steinhardt, 1997), an innovative model that attempts to include the balance of multiple life activities in its evaluation…

  7. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant-area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. Emphasis is placed upon the latter case, since the range of experimental measurements of pressure, temperature, and void fraction collected in this study falls in the "slug-churn" to "annular" flow regimes. The core is turbulent, whereas the liquid film may be laminar or turbulent. Turbulent stresses are modeled by using Prandtl's mixing-length theory. The working fluid is dichlorotetrafluoroethane (CClF2-CClF2), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single-phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. The compressibility is accounted for through the acceleration pressure gradient of the core and not directly through the Mach number. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single-phase flow in a rough pipe. Finally, an actual steam-water geothermal well is simulated; it is based on actual field data from New Zealand. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114.

  8. Theoretical and empirical study of single-substance, upward two-phase flow in a constant-diameter adiabatic pipe

    SciTech Connect

    Laoulache, R.N.; Maeder, P.F.; DiPippo, R.

    1987-05-01

    A scheme is developed to describe the upward flow of a two-phase mixture of a single substance in a vertical adiabatic constant area pipe. The scheme is based on dividing the mixture into a homogeneous core surrounded by a liquid film. This core may be a mixture of bubbles in a contiguous liquid phase, or a mixture of droplets in a contiguous vapor phase. The core is turbulent, whereas the liquid film may be laminar or turbulent. The working fluid is dichlorotetrafluoroethane (CClF2-CClF2), known as refrigerant 114 (R-114); the two-phase mixture is generated from the single-phase substance by the process of flashing. In this study, the effect of the Froude and Reynolds numbers on the liquid film characteristics is examined. An expression for an interfacial friction coefficient between the turbulent core and the liquid film is developed; it is similar to Darcy's friction coefficient for a single-phase flow in a rough pipe. Results indicate that for the range of Reynolds and Froude numbers considered, the liquid film is likely to be turbulent rather than laminar. The study also shows that two-dimensional effects are important, and the flow is never fully developed either in the film or the core. In addition, the new approach for the turbulent film is capable of predicting a local net flow rate that may be upward, downward, stationary, or stalled. An actual steam-water geothermal well is simulated. A similarity theory is used to predict the steam-water mixture pressure and temperature starting with laboratory measurements on the flow of R-114. Results indicate that the theory can be used to predict the pressure gradient in the two-phase region based on laboratory measurements.

  9. An Empirically Based Method of Q-Matrix Validation for the DINA Model: Development and Applications

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2008-01-01

    Most model fit analyses in cognitive diagnosis assume that a Q matrix is correct after it has been constructed, without verifying its appropriateness. Consequently, any model misfit attributable to the Q matrix cannot be addressed and remedied. To address this concern, this paper proposes an empirically based method of validating a Q matrix used…

  10. An Empirically-Based Statewide System for Identifying Quality Pre-Kindergarten Programs

    ERIC Educational Resources Information Center

    Williams, Jeffrey M.; Landry, Susan H.; Anthony, Jason L.; Swank, Paul R.; Crawford, April D.

    2012-01-01

    This study presents an empirically-based statewide system that links information about pre-kindergarten programs with children's school readiness scores to certify pre-kindergarten classrooms as promoting school readiness. Over 8,000 children from 1,255 pre-kindergarten classrooms were followed longitudinally for one year. Pre-kindergarten quality…

  11. Development of an Empirically Based Questionnaire to Investigate Young Students' Ideas about Nature of Science

    ERIC Educational Resources Information Center

    Chen, Sufen; Chang, Wen-Hua; Lieu, Sang-Chong; Kao, Huey-Lien; Huang, Mao-Tsai; Lin, Shu-Fen

    2013-01-01

    This study developed an empirically based questionnaire to monitor young learners' conceptions of nature of science (NOS). The questionnaire, entitled Students' Ideas about Nature of Science (SINOS), measured views on theory-ladenness, use of creativity and imagination, tentativeness of scientific knowledge, durability of scientific knowledge,…

  12. Untangling the Evidence: Introducing an Empirical Model for Evidence-Based Library and Information Practice

    ERIC Educational Resources Information Center

    Gillespie, Ann

    2014-01-01

    Introduction: This research is the first to investigate the experiences of teacher-librarians as evidence-based practice. An empirically derived model is presented in this paper. Method: This qualitative study utilised the expanded critical incident approach, and investigated the real-life experiences of fifteen Australian teacher-librarians,…

  13. Use of an Empirically Based Marriage Education Program by Religious Organizations: Results of a Dissemination Trial

    ERIC Educational Resources Information Center

    Markman, Howard J.; Whitton, Sarah W.; Kline, Galena H.; Stanley, Scott M.; Thompson, Huette; St. Peters, Michelle; Leber, Douglas B.; Olmos-Gallo, P. Antonio; Prado, Lydia; Williams, Tamara; Gilbert, Katy; Tonelli, Laurie; Bobulinski, Michelle; Cordova, Allen

    2004-01-01

    We present an evaluation of the extent to which an empirically based couples' intervention program was successfully disseminated in the community. Clergy and lay leaders from 27 religious organizations who were trained to deliver the Prevention and Relationship Enhancement Program (PREP) were contacted approximately yearly for 5 years following…

  14. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via a MST model) and non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  15. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    EPA Science Inventory

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L⁻¹) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  16. Implementing Evidence-Based Practice: A Review of the Empirical Research Literature

    ERIC Educational Resources Information Center

    Gray, Mel; Joy, Elyssa; Plath, Debbie; Webb, Stephen A.

    2013-01-01

    The article reports on the findings of a review of empirical studies examining the implementation of evidence-based practice (EBP) in the human services. Eleven studies were located that defined EBP as a research-informed, clinical decision-making process and identified barriers and facilitators to EBP implementation. A thematic analysis of the…

  17. School-Based Management and Paradigm Shift in Education an Empirical Study

    ERIC Educational Resources Information Center

    Cheng, Yin Cheong; Mok, Magdalena Mo Ching

    2007-01-01

    Purpose: This paper aims to report empirical research investigating how school-based management (SBM) and paradigm shift (PS) in education are closely related to teachers' student-centered teaching and students' active learning in a sample of Hong Kong secondary schools. Design/methodology/approach: It is a cross-sectional survey research…

  18. Feasibility of an Empirically Based Program for Parents of Preschoolers with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, "The Incredible Years," tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the…

  19. Sexual functioning and partner relationships in women with Turner syndrome: some empirical data and theoretical considerations regarding sexual desire.

    PubMed

    Rolstad, Susanna Göthlin; Möller, Anders; Bryman, Inger; Boman, Ulla Wide

    2007-01-01

    The aim of this study was to describe marital status, sexual history, and sexual functioning in a group of women with Turner syndrome, and to compare the results with general Swedish population data. The sample consists of 57 women over 18 years of age. Data were collected through an interview and two self-report questionnaires: the McCoy Sexual Rating Scale and the Relationship Rating Scale (RS). Compared to population data, the women with Turner syndrome were less likely to have a partner and had had their sexual debut later. Single women differed more from the general population than did women with a partner regarding sexual desire and sexual activity. Several women with a partner reported sexual problems, but unanimously reported being satisfied with their sex life and partner relationship. The level of sexual desire in women with Turner syndrome is discussed in relation to Levine's model of human sexual desire, in which psychological and social motivational factors are considered in addition to a biologically based sexual drive (Levine, 1992). PMID:17454521

  20. Teaching the Rhythms of English: A New Theoretical Base.

    ERIC Educational Resources Information Center

    Faber, David

    1986-01-01

    Presents some reasons why more emphasis should be placed on the mastery of the rhythmic features of the target language in foreign language teaching. An account of an important recent theoretical contribution to the description of the principles underlying English speech rhythm is included. (SED)

  1. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator etc. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
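
    The non-linear diffusion approach surveyed in this record can be sketched in a few lines. Below is a minimal explicit-scheme Perona-Malik filter with the exponential diffusivity g(s) = exp(-(s/kappa)^2) and zero-flux borders; the parameter values are illustrative and not those used in the study:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, dt=0.2):
    """Explicit-scheme Perona-Malik diffusion with the exponential
    diffusivity g(s) = exp(-(s/kappa)^2); dt <= 0.25 for stability."""
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours, zero-flux at the borders
        dn = np.roll(u, 1, axis=0) - u;  dn[0, :] = 0.0
        ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0.0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0.0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0.0
        u += dt * (g(dn) * dn + g(ds) * ds + g(dw) * dw + g(de) * de)
    return u
```

    Small gradients (noise) see g near 1 and diffuse away, while gradients large relative to kappa see g near 0 and survive. This edge-preserving behaviour is exactly the property the abstract contrasts with linear filtering.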

  2. [Comorbidity of substance use and other psychiatric disorders--theoretical foundation and evidence based therapy].

    PubMed

    Gouzoulis-Mayfrank, E

    2008-05-01

    The coincidence of two or more psychiatric disorders in the same person (comorbidity or dual diagnosis) is no rare exception. It is rather common and therapeutically highly relevant. Comorbid patients frequently exhibit severe manifestations of their disorders and require intensive treatment that meets their special needs and the interdependencies of their disorders. The present overview deals with the theoretical foundations of comorbidity of substance use and other psychiatric disorders. We present data on the prevalence of different comorbidities and discuss the models that have been proposed to explain how substance use and other disorders relate to each other. Furthermore, we describe the clinical characteristics and long-term course of comorbid patients, as well as some general therapeutic principles, including the advantages of integrated therapeutic programmes. In addition, we carried out a systematic literature search on specific pharmaco- and psychotherapies for common comorbidities using the databases MEDLINE, EMBASE and PsycInfo (up to December 2007), and assessed the methodological quality of the identified trials. Based on this search, we present the empirical evidence for the effectiveness of specific treatments and make therapeutic recommendations graded according to the strength of the existing evidence. In conclusion, integrated treatment programmes are more effective, provided they take into account the multiple deficits of comorbid patients, adjust and adapt the different therapeutic components to each other, and set realistic goals. The next step should be a broader application of integrated treatment programmes and their adoption as standard treatment within national health systems. PMID:18557218

  3. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    NASA Technical Reports Server (NTRS)

    Chen, C.-K.; Wolf, R. A.; Karty, J. L.; Harel, M.

    1982-01-01

    Substorm currents derived from the Rice University computer simulation of the September 19, 1976 substorm event are used to compute theoretical magnetograms as a function of universal time for various stations, integrating the Biot-Savart law over a maze of about 2700 wires and bands that carry the ring, Birkeland and horizontal ionospheric currents. A comparison of theoretical results with corresponding observations leads to a claim of general agreement, especially for stations at high and middle magnetic latitudes. Model results suggest that the ground magnetic field perturbations arise from complicated combinations of different kinds of currents, and that magnetic field disturbances due to different but related currents cancel each other out despite the inapplicability of Fukushima's (1973) theorem. It is also found that the dawn-dusk asymmetry in the horizontal magnetic field disturbance component at low latitudes is due to a net downward Birkeland current at noon, a net upward current at midnight, and, generally, antisunward-flowing electrojets.

  4. 78 FR 54464 - Premier Empire Energy, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Premier Empire Energy, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding, of Premier Empire Energy, LLC's application for market-based rate...

  5. Theoretical Investigations of Plasma-Based Accelerators and Other Advanced Accelerator Concepts

    SciTech Connect

    Shuets, G.

    2004-05-21

    The focus of the work was on the development of plasma-based and structure-based accelerating concepts, including laser-plasma, plasma-channel, and microwave-driven plasma accelerators.

  6. Theoretical magnetograms based on quantitative simulation of a magnetospheric substorm

    SciTech Connect

    Chen, C.; Wolf, R.A.; Harel, M.; Karty, J.L.

    1982-08-01

    Using substorm currents derived from the Rice computer simulation of the substorm event of September 19, 1976, we have computed theoretical magnetograms as a function of universal time for various stations. A theoretical Dst has also been computed. Our computed magnetograms were obtained by integrating the Biot-Savart law over a maze of approximately 2700 wires and bands that carry the ring currents, the Birkeland currents, and the horizontal ionospheric currents. Ground currents and dynamo currents were neglected. Computed contributions to the magnetic field perturbation from eleven different kinds of currents are displayed (e.g., ring currents, northern hemisphere Birkeland currents). First, overall agreement of theory and data is generally satisfactory, especially for stations at high and mid-magnetic latitudes. Second, model results suggest that the ground magnetic field perturbations arise from very complicated combinations of different kinds of currents and that the magnetic field disturbances due to different but related currents often cancel each other, despite the fact that complicated inhomogeneous conductivities in our model prevent rigorous application of Fukushima's theorem. Third, both the theoretical and observed Dst decrease during the expansion phase of the substorm, but data indicate that Dst relaxes back toward its initial value within about an hour after the peak of the substorm. Fourth, the dawn-dusk asymmetry in the horizontal component of magnetic field disturbance at low latitudes in a substorm is essentially due to a net downward Birkeland current at noon, net upward current at midnight, and generally antisunward flowing electrojets; it is not due to a physical partial ring current injected into the duskside of the inner magnetosphere.
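
    The Biot-Savart integration over discrete current-carrying wires described in both records has a closed form per straight segment, which makes even a ~2700-element sum cheap. A minimal sketch in SI units, with hypothetical geometry rather than the Rice model:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def segment_field(r, a, b, current):
    """Magnetic field at point r from a straight segment a->b carrying
    `current`, via the standard analytic Biot-Savart result."""
    r, a, b = map(np.asarray, (r, a, b))
    ab = b - a
    t = ab / np.linalg.norm(ab)        # unit vector along the segment
    ra, rb = r - a, r - b
    cross = np.cross(t, ra)
    d = np.linalg.norm(cross)          # perpendicular distance to the wire line
    if d < 1e-12:
        return np.zeros(3)             # on the wire axis: contribution undefined/zero
    u = cross / d                      # direction of the field
    sin1 = np.dot(t, ra) / np.linalg.norm(ra)
    sin2 = np.dot(t, rb) / np.linalg.norm(rb)
    return MU0 * current / (4 * np.pi * d) * (sin1 - sin2) * u

def total_field(r, segments):
    """Sum contributions from a list of (a, b, I) wire segments."""
    return sum((segment_field(r, a, b, I) for a, b, I in segments), np.zeros(3))
```

    For a very long segment the result approaches the infinite-wire field mu0*I/(2*pi*d), a convenient sanity check on any wire-maze implementation.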

  7. Empirically Based Psychosocial Therapies for Schizophrenia: The Disconnection between Science and Practice

    PubMed Central

    Shean, Glenn D.

    2013-01-01

    Empirically validated psychosocial therapies for individuals diagnosed with schizophrenia were described in the report of the Schizophrenia Patient Outcomes Research Team (PORT, 2009). The PORT team identified eight psychosocial treatments: assertive community treatment, supported employment, cognitive behavioral therapy, family-based services, token economy, skills training, psychosocial interventions for alcohol and substance use disorders, and psychosocial interventions for weight management. PORT listings of empirically validated psychosocial therapies provide a useful template for the design of effective recovery-oriented mental health care systems. Unfortunately, surveys indicate that PORT listings have not been implemented in clinical settings. Obstacles to the implementation of PORT psychosocial therapy listings and suggestions for changes needed to foster implementation are discussed. Limitations of PORT therapy listings that are based on therapy outcome efficacy studies are discussed, and cross-cultural and course and outcome studies of correlates of recovery are summarized. PMID:23738068

  9. Effectiveness of a Theoretically-Based Judgment and Decision Making Intervention for Adolescents

    PubMed Central

    Knight, Danica K.; Dansereau, Donald F.; Becan, Jennifer E.; Rowan, Grace A.; Flynn, Patrick M.

    2014-01-01

    Although adolescents demonstrate capacity for rational decision making, their tendency to be impulsive, place emphasis on peers, and ignore potential consequences of their actions often translates into higher risk-taking including drug use, illegal activity, and physical harm. Problems with judgment and decision making contribute to risky behavior and are core issues for youth in treatment. Based on theoretical and empirical advances in cognitive science, the Treatment Readiness and Induction Program (TRIP) represents a curriculum-based decision making intervention that can be easily inserted into a variety of content-oriented modalities as well as administered as a separate therapeutic course. The current study examined the effectiveness of TRIP for promoting better judgment among 519 adolescents (37% female; primarily Hispanic and Caucasian) in residential substance abuse treatment. Change over time in decision making and premeditation (i.e., thinking before acting) was compared among youth receiving standard operating practice (n = 281) versus those receiving standard practice plus TRIP (n = 238). Change in TRIP-specific content knowledge was examined among clients receiving TRIP. Premeditation improved among youth in both groups; TRIP clients showed greater improvement in decision making. TRIP clients also reported significant increases over time in self-awareness, positive-focused thinking (e.g., positive self-talk, goal setting), and recognition of the negative effects of drug use. While both genders showed significant improvement, males showed greater gains in metacognitive strategies (i.e., awareness of one’s own cognitive process) and recognition of the negative effects of drug use. These results suggest that efforts to teach core thinking strategies and apply/practice them through independent intervention modules may benefit adolescents when used in conjunction with content-based programs designed to change problematic behaviors. PMID:24760288

  10. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  11. Population forecasts and confidence intervals for Sweden: a comparison of model-based and empirical approaches.

    PubMed

    Cohen, J E

    1986-02-01

    This paper compares several methods of generating confidence intervals for forecasts of population size. Two rest on a demographic model for age-structured populations with stochastic fluctuations in vital rates. Two rest on empirical analyses of past forecasts of population sizes of Sweden at five-year intervals from 1780 to 1980 inclusive. Confidence intervals produced by the different methods vary substantially. The relative sizes differ in the various historical periods. The narrowest intervals offer a lower bound on uncertainty about the future. Procedures for estimating a range of confidence intervals are tentatively recommended. A major lesson is that finitely many observations of the past and incomplete theoretical understanding of the present and future can justify at best a range of confidence intervals for population projections. Uncertainty attaches not only to the point forecasts of future population, but also to the estimates of those forecasts' uncertainty. PMID:3484356
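
    Cohen's empirical approach builds intervals from the distribution of errors in past forecasts at a fixed horizon. A hedged sketch of that idea (the relative errors below are invented, not the Swedish series):

```python
import numpy as np

def empirical_interval(point_forecast, past_rel_errors, level=0.95):
    """Confidence interval for a forecast from the empirical distribution
    of past relative errors, defined as (actual / forecast) - 1."""
    errs = np.asarray(past_rel_errors, dtype=float)
    lo, hi = np.percentile(errs, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return point_forecast * (1 + lo), point_forecast * (1 + hi)

# Hypothetical relative errors of past fixed-horizon forecasts
errors = [-0.08, -0.05, -0.02, 0.0, 0.01, 0.03, 0.04, 0.06, 0.09, 0.12]
low, high = empirical_interval(10_000_000, errors, level=0.8)
```

    The paper's main caution applies directly: with finitely many past forecasts, the percentile estimates themselves are uncertain, so a range of intervals is more honest than a single one.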

  12. Measuring microscopic evolution processes of complex networks based on empirical data

    NASA Astrophysics Data System (ADS)

    Chi, Liping

    2015-04-01

    Aiming at understanding the microscopic mechanism of complex systems in the real world, we perform measurements that characterize the evolution properties of two empirical data sets. In the Autonomous Systems Internet data, the network size keeps growing although the system suffers a high rate of node deletion (r = 0.4) and link deletion (q = 0.81). However, the average degree remains almost unchanged over the whole time range. At each time step the external links attached to a new node are about c = 1.1 and the internal links added between existing nodes are approximately m = 8. The Scientific Collaboration data are a cumulative record of all authors from 1893 up to the considered year. There is no deletion of nodes or links, r = q = 0. The external and internal links at each time step are c = 1.04 and m = 0, correspondingly. The exponents γ_data of the degree distributions p(k) ∼ k^(−γ) of these two empirical datasets are in good agreement with those obtained theoretically, γ_theory. The results indicate that these evolution quantities may provide an insight into capturing the microscopic dynamical processes that govern the network topology.
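
    Comparing γ_data against γ_theory requires fitting p(k) ∼ k^(−γ) to an observed degree sequence. A minimal sketch using the standard continuous maximum-likelihood estimator γ̂ = 1 + n / Σ ln(k_i / k_min) (the Clauset-Shalizi-Newman form), applied here to synthetic degrees rather than the paper's data:

```python
import numpy as np

def powerlaw_mle(degrees, k_min=1.0):
    """Continuous MLE for the exponent of p(k) ~ k^(-gamma), k >= k_min."""
    k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
    return 1.0 + len(k) / np.sum(np.log(k / k_min))

# Synthetic degrees from a Pareto tail with gamma = 3, via inverse-CDF sampling
rng = np.random.default_rng(0)
k = 1.0 / rng.random(50_000) ** 0.5   # k = U^(-1/(gamma-1)) with gamma = 3
gamma_hat = powerlaw_mle(k, k_min=1.0)
```

    The MLE avoids the well-known bias of fitting a straight line to a log-log histogram, which matters when comparing a measured exponent against a theoretical prediction.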

  13. Theoretic base of Edge Local Mode triggering by vertical displacements

    SciTech Connect

    Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.

    2015-05-15

    Vertical instability is studied with an R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. The equilibrium parallel current density j_∥ at the plasma boundary drives the vertical instability, similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides a theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.

  14. [Theoretical analysis of recompression-based therapies of decompression illness].

    PubMed

    Nikolaev, V P; Sokolov, G M; Komarevtsev, V N

    2011-01-01

    This theoretical analysis concerns the benefits of oxygen, air and nitrogen-helium-oxygen recompression schedules used to treat decompression illness in divers. Mathematical modeling of tissue bubble dynamics during diving shows that one-hour oxygen recompression to 200 kPa does not essentially diminish the size of a bubble enclosed in a layer that reduces tenfold the intensity of gas diffusion from bubbles. However, these bubbles dissolve fully in all body tissues equally after a 2-hour air compression to 800 kPa and an ensuing 2-day decompression by the Russian navy tables, or a 1.5-hour N-He-O2 compression to this pressure followed by a 5-day decompression. The overriding advantage of the gas mixture recompression is that it obviates the narcotic action of nitrogen at the peak of chamber pressure and does not create dangerous tissue supersaturation or conditions for the emergence of large bubbles at the end of decompression. PMID:21970044

  15. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    ERIC Educational Resources Information Center

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  16. A theoretical drought classification method for the multivariate drought index based on distribution properties of standardized drought indices

    NASA Astrophysics Data System (ADS)

    Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Xia, Youlong; Ouyang, Wei; Shen, Xinyi

    2016-06-01

    Drought indices have been commonly used to characterize different properties of drought, and the need to combine multiple drought indices for accurate drought monitoring has been well recognized. Based on linear combinations of multiple drought indices, a variety of multivariate drought indices have recently been developed for comprehensive drought monitoring to integrate drought information from various sources. For operational drought management, it is generally required to determine thresholds of drought severity for drought classification to trigger a mitigation response during a drought event to aid stakeholders and policy makers in decision making. Though the classification of drought categories based on univariate drought indices has been well studied, drought classification methods for multivariate drought indices have been less explored, mainly due to the lack of information about their distribution properties. In this study, a theoretical drought classification method is proposed for the multivariate drought index based on a linear combination of multiple indices. From the distribution property of the standardized drought index, a theoretical distribution of the linear combined index (LDI) is derived, which can be used for classifying drought with the percentile approach. Application of the proposed method for drought classification of an LDI based on the standardized precipitation index (SPI), standardized soil moisture index (SSI), and standardized runoff index (SRI) is illustrated with climate division data from California, United States. Results from comparison with empirical methods show a satisfactory performance of the proposed method for drought classification.
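
    The theoretical distribution the abstract relies on follows from a basic fact: each standardized index is marginally standard normal, so a linear combination w·X has mean 0 and variance wᵀΣw for inter-index correlation matrix Σ, and dividing by that standard deviation gives a standardized LDI whose percentiles classify drought. A hedged sketch; the category cutoffs below follow the common SPI / U.S. Drought Monitor convention and are assumed for illustration, not taken from the paper:

```python
import numpy as np

def standardized_ldi(indices, weights, corr):
    """Standardize a linear combination of standardized drought indices.
    indices: (n_times, n_indices); weights: (n_indices,); corr: (n, n)."""
    w = np.asarray(weights, dtype=float)
    var = w @ np.asarray(corr) @ w          # Var(w.X) = w' Sigma w
    return np.asarray(indices) @ w / np.sqrt(var)

def classify(z):
    """Map a standardized index value to a drought category
    (SPI-style cutoffs, assumed for illustration)."""
    if z <= -2.0: return "D4 (exceptional)"
    if z <= -1.6: return "D3 (extreme)"
    if z <= -1.3: return "D2 (severe)"
    if z <= -0.8: return "D1 (moderate)"
    if z <= -0.5: return "D0 (abnormally dry)"
    return "none"
```

    With equal weights over SPI, SSI, and SRI and an identity correlation matrix, a month with all three indices at -1 standardizes to about -1.73, i.e. an extreme-drought classification under these cutoffs.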

  17. The Theoretical Astrophysical Observatory: Cloud-based Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Bernyk, Maksym; Croton, Darren J.; Tonini, Chiara; Hodkinson, Luke; Hassan, Amr H.; Garel, Thibault; Duffy, Alan R.; Mutch, Simon J.; Poole, Gregory B.; Hegarty, Sarah

    2016-03-01

    We introduce the Theoretical Astrophysical Observatory (TAO), an online virtual laboratory that houses mock observations of galaxy survey data. Such mocks have become an integral part of the modern analysis pipeline. However, building them requires expert knowledge of galaxy modeling and simulation techniques, significant investment in software development, and access to high performance computing. These requirements make it difficult for a small research team or individual to quickly build a mock catalog suited to their needs. To address this, TAO offers access to multiple cosmological simulations and semi-analytic galaxy formation models from an intuitive and clean web interface. Results can be funnelled through science modules and sent to a dedicated supercomputer for further processing and manipulation. These modules include the ability to (1) construct custom observer light cones from the simulation data cubes; (2) generate the stellar emission from star formation histories, apply dust extinction, and compute absolute and/or apparent magnitudes; and (3) produce mock images of the sky. All of TAO's features can be accessed without any programming requirements. The modular nature of TAO opens it up for further expansion in the future.

  18. Theoretical performance analysis for CMOS based high resolution detectors.

    PubMed

    Jain, Amit; Bednarek, Daniel R; Rudin, Stephen

    2013-03-01

    High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive. PMID:24353390
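
    The linear cascade model referenced above propagates the mean signal and the noise power spectrum (NPS) through serial elementary stages, then forms DQE(f) = q² MTF² / (q0 · NPS). A compact sketch using the standard gain and stochastic-blur transfer relations; the stage parameters are invented for illustration, not the MAF-CMOS values:

```python
import numpy as np

def cascade_dqe(q0, f, stages, add_noise=0.0):
    """Propagate mean quanta q and NPS through serial stages
    (standard cascaded-systems relations), then return DQE(f)."""
    q = float(q0)
    nps = np.full_like(f, float(q0))   # input x-ray quanta: Poisson NPS = q0
    mtf = np.ones_like(f)
    for kind, p in stages:
        if kind == "gain":             # p = (mean gain, gain variance)
            g, var_g = p
            nps = g * g * nps + var_g * q
            q = g * q
        elif kind == "blur":           # p = stage MTF array (stochastic blur)
            nps = p * p * (nps - q) + q
            mtf = mtf * p
    nps = nps + add_noise              # additive electronic noise power
    return (q * mtf) ** 2 / (q0 * nps)
```

    Two classic sanity checks: a single Poisson gain stage with mean gain g gives the frequency-flat DQE = g/(g+1), and a purely stochastic blur with MTF T gives DQE = T². The additive term models the instrumentation noise the abstract identifies as the fluoroscopic-range limitation of the MAF-CMOS.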

  19. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.
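
    The restoration step described above is a forward-model inversion: repeatedly apply the trail-adding readout model to the current estimate and nudge the estimate by the residual against the observed image. A toy sketch with a made-up single-trap trail model; the real ACS model with its 70-pixel trails is far more detailed:

```python
import numpy as np

def forward_readout(col, loss=0.02):
    """Toy forward model: each pixel loses a fraction of its charge,
    which reappears in the trailing pixel (a crude stand-in for CTE trails)."""
    col = np.asarray(col, dtype=float)
    out = col.copy()
    trapped = loss * col
    out -= trapped
    out[1:] += trapped[:-1]   # trapped charge released one pixel downstream
    return out

def invert_readout(observed, n_iter=30, loss=0.02):
    """Fixed-point inversion: find the column whose forward readout
    reproduces the observation."""
    est = np.asarray(observed, dtype=float).copy()
    for _ in range(n_iter):
        est += observed - forward_readout(est, loss)
    return est
```

    Because the toy trail merely relocates charge, the observed column conserves flux, which mirrors the paper's finding that the trails contain essentially all of the flux lost to inefficient CTE.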

  20. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  1. Exploring multi/full polarised SAR imagery for understanding surface soil moisture and roughness by using semi-empirical and theoretical models and field experiments

    NASA Astrophysics Data System (ADS)

    Dong, Lu; Marzahn, Philip; Ludwig, Ralf

    2010-05-01

    -range digital photogrammetry for surface roughness retrieval. A semi-empirical model is tested and the theoretical AIEM model is utilised for further understanding. Results demonstrate that the semi-empirical soil moisture retrieval algorithm, which was developed in studies under humid climate conditions, must be carefully adapted to the drier Mediterranean environment. Modifying the approach by incorporating regional field data led to a considerable improvement of the algorithm's performance. In addition, it is found that the current representation of soil surface roughness in the AIEM is insufficient to account for the specific heterogeneities on the field scale. The findings of this study indicate the necessity for future research, which must be extended to a more integrated combination of current sensors, e.g. ENVISAT/ASAR, ALOS/PALSAR and Radarsat-2 imagery, and advanced development of soil moisture retrieval models for multi/full polarised radar imagery.

  2. Theoretical Foundations of "Competitive Team-Based Learning"

    ERIC Educational Resources Information Center

    Hosseini, Seyed Mohammad Hassan

    2010-01-01

    This paper serves as a platform to precisely substantiate the success of "Competitive Team-Based Learning" (CTBL) as an effective and rational educational approach. To that end, it brings to the fore part of the (didactic) theories and hypotheses which in one way or another delineate and confirm the mechanisms under which successful…

  3. EXPERIMENTAL AND THEORETICAL EVALUATIONS OF OBSERVATIONAL-BASED TECHNIQUES

    EPA Science Inventory

    Observational Based Methods (OBMs) can be used by EPA and the States to develop reliable ozone controls approaches. OBMs use actual measured concentrations of ozone, its precursors, and other indicators to determine the most appropriate strategy for ozone control. The usual app...

  4. Why Problem-Based Learning Works: Theoretical Foundations

    ERIC Educational Resources Information Center

    Marra, Rose M.; Jonassen, David H.; Palmer, Betsy; Luft, Steve

    2014-01-01

    Problem-based learning (PBL) is an instructional method where student learning occurs in the context of solving an authentic problem. PBL was initially developed out of an instructional need to help medical school students learn their basic sciences knowledge in a way that would be more lasting while helping to develop clinical skills…

  5. Flavor symmetry based MSSM: Theoretical models and phenomenological analysis

    NASA Astrophysics Data System (ADS)

    Babu, K. S.; Gogoladze, Ilia; Raza, Shabbar; Shafi, Qaisar

    2014-09-01

    We present a class of supersymmetric models in which symmetry considerations alone dictate the form of the soft SUSY breaking Lagrangian. We develop a class of minimal models, denoted as sMSSM—for flavor symmetry-based minimal supersymmetric standard model—that respect a grand unified symmetry such as SO(10) and a non-Abelian flavor symmetry H which suppresses SUSY-induced flavor violation. Explicit examples are constructed with the flavor symmetry being gauged SU(2)H and SO(3)H with the three families transforming as 2+1 and 3 representations, respectively. A simple solution is found in the case of SU(2)H for suppressing the flavor violating D-terms based on an exchange symmetry. Explicit models based on SO(3)H without the D-term problem are developed. In addition, models based on discrete non-Abelian flavor groups are presented which are automatically free from D-term issues. The permutation group S3 with a 2+1 family assignment, as well as the tetrahedral group A4 with a 3 assignment are studied. In all cases, a simple solution to the SUSY CP problem is found, based on spontaneous CP violation leading to a complex quark mixing matrix. We develop the phenomenology of the resulting sMSSM, which is controlled by seven soft SUSY breaking parameters for both the 2+1 assignment and the 3 assignment of fermion families. These models are special cases of the phenomenological MSSM (pMSSM), but with symmetry restrictions. We discuss the parameter space of sMSSM compatible with LHC searches, B-physics constraints and dark matter relic abundance. Fine-tuning in these models is relatively mild, since all SUSY particles can have masses below about 3 TeV.

  6. Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties.

    SciTech Connect

    Salloum, Maher N.; Gharagozloo, Patricia E.

    2013-10-01

    Metal particle beds have recently become a major approach to hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
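The empirical-model step described above, fitting scattered rate data and quantifying the resulting uncertainty, can be sketched with a simple Arrhenius-type regression. This is an illustrative stand-in, not the authors' actual model or data; the synthetic rate constants, the prefactor, and the activation energy of 50 kJ/mol are assumptions made up for the demo.

```python
import numpy as np

def fit_arrhenius(T, k):
    """Least-squares fit of ln k = ln A - Ea/(R*T) to scattered rate data.
    Returns the prefactor A, activation energy Ea, and the 1-sigma
    uncertainty on Ea from the fit covariance."""
    R = 8.314  # J/(mol K)
    x = 1.0 / np.asarray(T, float)
    y = np.log(np.asarray(k, float))
    # Linear model y = b + m*x with slope m = -Ea/R
    (m, b), cov = np.polyfit(x, y, 1, cov=True)
    Ea = -m * R
    Ea_sigma = np.sqrt(cov[0, 0]) * R
    return np.exp(b), Ea, Ea_sigma

# Synthetic "scattered literature data" with a known Ea of 50 kJ/mol
rng = np.random.default_rng(0)
T = np.linspace(500.0, 900.0, 30)
k_true = 1e6 * np.exp(-50e3 / (8.314 * T))
k_obs = k_true * np.exp(rng.normal(0.0, 0.1, T.size))
A, Ea, sig = fit_arrhenius(T, k_obs)
```

The fitted Ea should recover the value used to generate the data, with the covariance term quantifying the spread contributed by the scatter.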

  7. HIRS-AMTS satellite sounding system test - Theoretical and empirical vertical resolving power. [High resolution Infrared Radiation Sounder - Advanced Moisture and Temperature Sounder

    NASA Technical Reports Server (NTRS)

    Thompson, O. E.

    1982-01-01

    The present investigation is concerned with the vertical resolving power of satellite-borne temperature sounding instruments. Information is presented on the capabilities of the High Resolution Infrared Radiation Sounder (HIRS) and a proposed sounding instrument called the Advanced Moisture and Temperature Sounder (AMTS). Two quite different methods for assessing the vertical resolving power of satellite sounders are discussed. The first is the theoretical method of Conrath (1972), which was patterned after the work of Backus and Gilbert (1968). The Backus-Gilbert-Conrath (BGC) approach includes a formalism for deriving a retrieval algorithm for optimizing the vertical resolving power. However, a retrieval algorithm constructed in the BGC optimal fashion is not necessarily optimal as far as actual temperature retrievals are concerned. Thus, an independent criterion for vertical resolving power is discussed. The criterion is based on actual retrievals of signal structure in the temperature field.

  8. Theoretically predicted Fox-7 based new high energy density molecules

    NASA Astrophysics Data System (ADS)

    Ghanta, Susanta

    2016-08-01

    Computationally designed CHNO-based high energy density molecules (HEDMs) built on the FOX-7 (1,1-dinitro-2,2-diaminoethylene) skeleton are investigated. We report the structures, stability and detonation properties of these new molecules. A systematic analysis is presented of the crystal density, the activation energy for nitro-to-nitrite isomerisation and the C-NO2 bond dissociation energy of these molecules. Atoms-in-molecules (AIM) calculations have been performed to interpret the intramolecular weak H-bonding interactions and the stability of the C-NO2 bonds. The structure optimization, frequency and bond dissociation energy calculations have been performed at the B3LYP level of theory using the G03 quantum chemistry package. Some of the designed molecules are found to be more promising HEDMs than the FOX-7 molecule itself, and are proposed as candidates for synthesis.

  9. Empirical and theoretical investigation of the noise performance of indirect detection, active matrix flat-panel imagers (AMFPIs) for diagnostic radiology.

    PubMed

    Siewerdsen, J H; Antonuk, L E; el-Mohri, Y; Yorkston, J; Huang, W; Boudry, J M; Cunningham, I A

    1997-01-01

    Noise properties of active matrix, flat-panel imagers under conditions relevant to diagnostic radiology are investigated. These studies focus on imagers based upon arrays with pixels incorporating a discrete photodiode coupled to a thin-film transistor, both fabricated from hydrogenated amorphous silicon. These optically sensitive arrays are operated with an overlying x-ray converter to allow indirect detection of incident x rays. External electronics, including gate driver circuits and preamplification circuits, are also required to operate the arrays. A theoretical model describing the signal and noise transfer properties of the imagers under conditions relevant to diagnostic radiography, fluoroscopy, and mammography is developed. This frequency-dependent model is based upon a cascaded systems analysis wherein the imager is conceptually divided into a series of stages having intrinsic gain and spreading properties. Predictions from the model are compared with x-ray sensitivity and noise measurements obtained from individual pixels from an imager with a format of 1536 × 1920 pixels at a pixel pitch of 127 microns. The model is shown to be in excellent agreement with measurements obtained with diagnostic x rays using various phosphor screens. The model is used to explore the potential performance of existing and hypothetical imagers for application in radiography, fluoroscopy, and mammography as a function of exposure, additive noise, and fill factor. These theoretical predictions suggest that imagers of this general design incorporating a CsI:Tl intensifying screen can be optimized to provide detective quantum efficiency (DQE) superior to existing screen-film and storage phosphor systems for general radiography and mammography. For fluoroscopy, the model predicts that with further optimization of a-Si:H imagers, DQE performance approaching that of the best x-ray image intensifier systems may be possible. The results of this analysis suggest strategies for …

  10. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi R.

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
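The readout-model-and-inversion idea described above can be illustrated with a toy example: a forward model in which each pixel sheds a small fraction of its charge into an exponential trail of trailing pixels, and a fixed-point iteration that inverts it. This is only a sketch of the invert-the-readout concept; the published ACS algorithm uses a trap model calibrated on warm-pixel profiles, and the `frac`, `scale`, and `length` parameters here are invented for illustration.

```python
import numpy as np

def cte_trail(column, frac=0.02, scale=10.0, length=70):
    """Toy forward readout model: each pixel loses `frac` of its charge
    into an exponential trail over the following `length` pixels."""
    kernel = np.zeros(length + 1)
    t = np.arange(1, length + 1)
    trail = np.exp(-t / scale)
    trail *= frac / trail.sum()       # trail carries the lost charge
    kernel[0] = 1.0 - frac            # charge retained in the pixel
    kernel[1:] = trail
    return np.convolve(column, kernel)[: column.size]

def restore(observed, n_iter=20, **kw):
    """Invert the readout model by fixed-point iteration, mimicking the
    'invert the model to estimate original pixel values' step."""
    x = observed.copy()
    for _ in range(n_iter):
        x = x + (observed - cte_trail(x, **kw))
    return x
```

Because the trail contains essentially all of the lost flux, iterating the residual back into the estimate recovers the original column to numerical precision.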

  11. Advances on Empirical Mode Decomposition-based Time-Frequency Analysis Methods in Hydrocarbon Detection

    NASA Astrophysics Data System (ADS)

    Chen, H. X.; Xue, Y. J.; Cao, J.

    2015-12-01

    Empirical mode decomposition (EMD), a data-driven adaptive decomposition method that is not limited by time-frequency uncertainty spreading, has proved well suited to seismic signals, which are nonlinear and non-stationary. Compared with Fourier-based and wavelet-based time-frequency methods, EMD-based time-frequency methods have higher temporal and spatial resolution and yield hydrocarbon interpretations with more statistical significance. The empirical mode decomposition algorithm has evolved from EMD to ensemble EMD (EEMD) to complete ensemble EMD (CEEMD). Even though EMD-based time-frequency methods offer many promising features for analyzing and processing geophysical data, they also have limitations. This presentation will present a comparative study of hydrocarbon detection using seven EMD-based time-frequency analysis methods: (1) EMD combined with the Hilbert transform (HT); (2) the normalized Hilbert transform (NHT) and the Hu method, each combined with HT as improved time-frequency analysis methods; (3) EMD combined with Teager-Kaiser energy (EMD/TK); (4) EMD combined with the wavelet transform (EMDWave) as a seismic attenuation estimation method; and (5) EEMD- and CEEMD-based time-frequency analysis methods used as highlight-volume technology. The differences between these methods in hydrocarbon detection will be discussed, along with the question of obtaining a meaningful instantaneous frequency via HT and the mode-mixing issues in EMD. The work was supported by NSFC under grant Nos. 41430323, 41404102 and 41274128.
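The Hilbert-transform step common to these methods, extracting an instantaneous frequency from a (mono-component) signal such as an IMF, can be sketched with a numpy-only analytic-signal computation. This is a generic illustration of the HT step, not the presenters' code; a full EMD sifting stage would normally precede it.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (a numpy-only stand-in for
    scipy.signal.hilbert)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1 : n // 2] = 2.0
    else:
        h[1 : (n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the unwrapped analytic phase --
    the quantity a Hilbert spectrum is built from for each IMF."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

For a pure tone the recovered instantaneous frequency is flat at the tone's frequency, which is the sanity check usually run before applying the HT to real IMFs.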

  12. Experimental and Theoretical Study of Microturbine-Based BCHP System

    SciTech Connect

    Fairchild, P.D.

    2001-07-12

    On-site and near-site distributed power generation (DG), as part of a Buildings Cooling, Heating and Power (BCHP) system, brings both electricity and waste heat from the DG sources closer to the end user's electric and thermal loads. Consequently, the waste heat can be used as input power for heat-activated air conditioners, chillers, and desiccant dehumidification systems; to generate steam for space heating; or to provide hot water for laundry, kitchen, cleaning services and/or rest rooms. By making use of what is normally waste heat, BCHP systems meet a building's electrical and thermal loads with a lower input of fossil fuel, yielding resource efficiencies of 40 to 70% or more. To ensure the success of BCHP systems, interactions of a DG system, such as a microturbine with thermal heat recovery units under steady-state modes of operation with various exhaust back pressures, must be considered. This article studies the performance and emissions of a 30-kW microturbine over a range of design and off-design conditions in steady-state operating mode with various back pressures. In parallel with the experimental part of the project, a BCHP mathematical model was developed describing the basic thermodynamic and hydraulic processes in the system, the heat and material balances, and the relationship of the balances to the system configuration. The model can determine the efficiency of energy conversion both for an individual microturbine unit and for the entire BCHP system for various system configurations and external loads. Based on actual data from a 30-kW microturbine, linear analysis was used to obtain an analytical relationship between the changes in the thermodynamic and hydraulic parameters of the system. The actual data show that, when the back pressure at the microturbine exhaust outlet is increased to the maximum of 7 in. WC (0.017 atm), the microturbine's useful power output decreases by 3.5% at a full power setting of 30 kW and by 5.5% at a one-third power setting.

  13. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children

    ERIC Educational Resources Information Center

    Courtright, John A.; Courtright, Illene C.

    1976-01-01

    A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)

  14. Theoretical study of impurity effects in iron-based superconductors

    NASA Astrophysics Data System (ADS)

    Navarro Gastiasoro, Maria; Hirschfeld, Peter; Andersen, Brian

    2013-03-01

    Several open questions remain unanswered for the iron-based superconductors (FeSC), including the importance of electronic correlations and the symmetry of the superconducting order parameter. Motivated by recent STM experiments which show a fascinating variety of resonant defect states in FeSC, we adopt a realistic five-band model including electronic Coulomb correlations to study local effects of disorder in the FeSC. In order to minimize the number of free parameters, we use the pairing interactions obtained from spin-fluctuation exchange to determine the homogeneous superconducting state. The ability of local impurity potentials to induce resonant states depends on their scattering strength Vimp; in addition, for appropriate Vimp, such states are associated with local orbital- and magnetic order. We investigate the density of states near such impurities and show how tunneling experiments may be used to probe local induced order. In the SDW phase, we show how C2 symmetry-breaking dimers are naturally formed around impurities which also form cigar-like (π,π) structures embedded in the (π,0) magnetic bulk phase. Such electronic dimers have been shown to be candidates for explaining the so-called nematogens observed previously by QPI in Co-doped CaFe2As2.

  15. Awareness-based game-theoretic space resource management

    NASA Astrophysics Data System (ADS)

    Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.

    2009-05-01

    Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, posing great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations by (a) accommodating awareness modeling and updating and (b) collaboratively searching for and tracking space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risks. Fourth, if all explicitly specified requirements are satisfied and sensing resources remain, we assign the additional resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
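The fourth step above, handing leftover sensing resources to objects by expected information gain, can be sketched as a greedy loop with diminishing returns. This is a loose illustrative stand-in for the paper's hierarchical scheme; the `info_gain` values and the 0.5 discount factor are invented assumptions.

```python
import numpy as np

def assign_sensors(info_gain, capacity=1):
    """Greedy information-based assignment: each sensor slot goes to the
    cell with the highest remaining expected information gain, which is
    then discounted to model diminishing returns from repeated looks."""
    gain = np.asarray(info_gain, float).copy()
    schedule = []
    for _ in range(capacity):
        j = int(np.argmax(gain))
        schedule.append(j)
        gain[j] *= 0.5  # assumed discount after each observation
    return schedule
```

With gains [1, 3, 2] and three slots, the loop picks cell 1, then cell 2, then cell 1 again once its discounted gain still exceeds the rest.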

  16. A theoretically based determination of bowen-ratio fetch requirements

    USGS Publications Warehouse

    Stannard, D.I.

    1997-01-01

    Determination of fetch requirements for accurate Bowen-ratio measurements of latent- and sensible-heat fluxes is more involved than for eddy-correlation measurements because Bowen-ratio sensors are located at two heights, rather than just one. A simple solution to the diffusion equation is used to derive an expression for Bowen-ratio fetch requirements, downwind of a step change in surface fluxes. These requirements are then compared to eddy-correlation fetch requirements based on the same diffusion equation solution. When the eddy-correlation and upper Bowen-ratio sensor heights are equal, and the available energy upwind and downwind of the step change is constant, the Bowen-ratio method requires less fetch than does eddy correlation. Differences in fetch requirements between the two methods are greatest over relatively smooth surfaces. Bowen-ratio fetch can be reduced significantly by lowering the lower sensor, as well as the upper sensor. The Bowen-ratio fetch model was tested using data from a field experiment where multiple Bowen-ratio systems were deployed simultaneously at various fetches and heights above a field of bermudagrass. Initial comparisons were poor, but improved greatly when the model was modified (and operated numerically) to account for the large roughness of the upwind cotton field.
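The Bowen-ratio energy-balance partition underlying the fetch analysis above is compact enough to state in code. This sketch uses the standard textbook relations (Bowen ratio β = γΔT/Δe, LE = (Rn − G)/(1 + β)), not the authors' fetch model; the input values in the test are invented.

```python
def bowen_fluxes(Rn, G, dT, de, gamma=0.066):
    """Partition available energy (Rn - G) into latent (LE) and sensible
    (H) heat fluxes from the temperature and vapor-pressure differences
    measured between the two sensor heights.
    Units: Rn, G in W/m^2; dT in K; de and gamma in kPa (gamma in kPa/K)."""
    beta = gamma * dT / de        # Bowen ratio
    LE = (Rn - G) / (1.0 + beta)  # latent-heat flux
    H = beta * LE                 # sensible-heat flux
    return beta, LE, H
```

By construction LE + H equals the available energy, which is why errors in the two-height gradients (and hence fetch) propagate directly into the flux split.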

  17. Rare-earth element based permanent magnets: a theoretical investigation

    NASA Astrophysics Data System (ADS)

    Chouhan, Rajiv K.; Paudyal, Durga

    Permanent magnetic materials with large magnetization and high magnetocrystalline anisotropy are important for technical applications. In this context, rare-earth (R) element based materials are good candidates because of their localized 4f electrons. The 4f crystal-field splitting provides a large part of the magnetic anisotropy, depending upon the crystal environment. The spin-orbit coupling of the d states of the alloyed transition-metal component provides additional anisotropy. RCo5 and its derivative R2Co17 are compounds known for large magnetic anisotropy. Here we have performed electronic structure calculations to predict new materials in this class by employing site substitutions. In these investigations, we have performed density functional theory calculations including on-site electron correlation (DFT+U) and L-S coupling. The results show that abundant Ce substitution on the R sites and Ti/Zr substitution on some of the Co sites helps reduce criticality without substantially affecting the magnetic moment and magnetic anisotropy of these materials. This work is supported by the Critical Materials Institute, an Energy Innovation Hub funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office.

  18. Implementation of an empirically based drug and violence prevention and intervention program in public school settings.

    PubMed

    Cunningham, P B; Henggeler, S W

    2001-06-01

    Describes the implementation of a collaborative preventive intervention project (Healthy Schools) designed to reduce levels of bullying and related antisocial behaviors in children attending two urban middle schools serving primarily African American students. These schools have high rates of juvenile violence, as reflected by suspensions and expulsions for behavioral problems. Using a quasi-experimental design, empirically based drug and violence prevention programs, Bullying Prevention and Project ALERT, are being implemented at each middle school. In addition, an intensive evidence-based intervention, multisystemic therapy, is being used to target students at high risk of expulsion and court referral. Hence, the proposed project integrates both universal approaches to prevention and a model that focuses on indicated cases. Targeted outcomes, by which the effectiveness of this comprehensive school-based program will be measured, are reduced youth violence, reduced drug use, and improved psychosocial functioning of participating youth. PMID:11393922

  19. Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2016-07-01

    The seislet transform has been demonstrated to have better compression performance for seismic data than other well-known sparsity-promoting transforms, so random noise can be removed by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses the seismic data along the local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on a precise local slope estimation, the seislet transform usually suffers from a low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. In order to remove the limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive; one only needs to specify the number of dip components to be separated. Both complicated synthetic and field data examples show the superior performance of my proposed approach over the traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can also be extended to all those methods that require slope information.
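The thresholding operator at the heart of this strategy is generic: transform, soft-threshold the coefficients, inverse-transform. Since the seislet transform itself is not publicly packaged here, the sketch below uses an FFT as an illustrative stand-in for the sparsifying transform; the threshold value and test signal are invented.

```python
import numpy as np

def soft_threshold(c, thresh):
    """Soft-thresholding operator applied to transform coefficients; in
    the paper this acts on seislet coefficients of each dip band."""
    return np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)

def threshold_denoise(data, thresh, fwd=np.fft.rfft, inv=np.fft.irfft):
    """Generic transform-threshold-inverse denoising. An FFT stands in
    for the seislet transform purely for illustration."""
    c = fwd(data)
    c = soft_threshold(c.real, thresh) + 1j * soft_threshold(c.imag, thresh)
    return inv(c, n=len(data))
```

Because a sparsifying transform concentrates signal into a few large coefficients while spreading noise thinly, thresholding suppresses the noise floor while mostly preserving the signal, which is exactly why compression performance governs denoising quality.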

  20. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints to improve the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibilities of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from combustion chamber and the vibration signal measured from cylinder head are investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and vibration signal, even in initial stage of knock. Finally, by comparing the application results with those obtained by short-time Fourier transform (STFT), Wigner-Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.
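The noise-assisted ensemble idea behind EEMD, decompose many noise-added copies of the signal and average the components so the added white noise cancels, can be sketched independently of any particular sifting routine. The skeleton below takes the decomposition as a callable; a true EMD sifting routine would be used in practice, and the `two_band_split` demo function is a deliberately simple stand-in, not EMD.

```python
import numpy as np

def eemd(x, decompose, n_ensemble=50, noise_std=0.2, seed=0):
    """Ensemble EMD skeleton: average the decompositions of many
    noise-perturbed copies of x. The added finite white Gaussian noise
    populates all scales, which is what alleviates mode mixing in EMD;
    averaging over the ensemble then cancels the noise itself."""
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_ensemble):
        comps = decompose(x + rng.normal(0.0, noise_std * x.std(), x.size))
        acc = comps if acc is None else acc + comps
    return acc / n_ensemble

def two_band_split(x, width=25):
    """Toy stand-in for EMD: a moving-average 'slow' component and the
    residual 'fast' component."""
    kernel = np.ones(width) / width
    slow = np.convolve(x, kernel, mode="same")
    return np.stack([x - slow, slow])
```

Because the demo decomposition is linear, the ensemble average converges back to the noise-free decomposition at a rate of roughly 1/sqrt(n_ensemble), which is the cancellation property EEMD relies on.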

  2. Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling

    NASA Astrophysics Data System (ADS)

    Mitrović, Marija; Tadić, Bosiljka

    2012-11-01

    We present an analysis of empirical data and agent-based modeling of the emotional behavior of users on Web portals where user interaction is mediated by posted comments, such as blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine the positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent's action on a post, its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent's action. The simulations are performed for the case of a constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure that are comparable with the ones in the empirical system of popular posts. In view of purely emotion-driven agents' actions, this type of comparison provides a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative …

  3. Consistent climate-driven spatial patterns of terrestrial ecosystem carbon fluxes in the northern hemisphere: a theoretical framework and synthesis of empirical evidence

    NASA Astrophysics Data System (ADS)

    Yu, G.; Niu, S.; Chen, Z.; Zhu, X.

    2013-12-01

    A predictive understanding of terrestrial ecosystem carbon fluxes has developed slowly, largely owing to the lack of broad generalizations and of a theoretical framework with clearly defined hypotheses. We synthesized eddy-flux data from different regions of the northern hemisphere and previously published papers, developed a framework for the climate controls on the geoecological patterns of terrestrial ecosystem C fluxes, and proposed the underlying mechanisms. Based on the case studies and synthesis, we found that the spatial patterns of ecosystem C fluxes in China, Asia, and three continents of the northern hemisphere all follow a general pattern: they are predominantly controlled by temperature and precipitation, supporting and further developing the traditional theory of climate controls on the spatial patterns of ecosystem productivity embodied in the Miami and other models. Five hypotheses are proposed to explain the ecological mechanisms and processes that give rise to the climate-driven spatial patterns of C fluxes. (1) The two key processes determining gross primary productivity (GPP), growing season length and carbon uptake capacity, are jointly controlled by temperature and precipitation. (2) Ecosystem respiration (ER) is likewise predominantly determined by temperature and precipitation, together with substrate supply. (3) Components of ecosystem C fluxes are closely coupled with each other in response to climate change. (4) Vegetation types and soil nutrients in a particular area are fundamentally determined by environmental factors, which may affect C fluxes within a certain range but cannot change the climate-driven pattern of C fluxes at large scale. (5) Land use changes only the magnitude of C fluxes, not the spatial patterns or their climate dependence. All of these hypotheses were well supported by the evidence of the data synthesis, which could provide the foundation for a theoretical framework for better understanding and predicting geoecological …

  4. Empirical calibrations of optical absorption-line indices based on the stellar library MILES

    NASA Astrophysics Data System (ADS)

    Johansson, Jonas; Thomas, Daniel; Maraston, Claudia

    2010-07-01

    Stellar population models of absorption-line indices are an important tool for the analysis of stellar population spectra. They are most accurately modelled through empirical calibrations of absorption-line indices against stellar parameters such as effective temperature, metallicity and surface gravity, the so-called fitting functions. Here we present new empirical fitting functions for the 25 optical Lick absorption-line indices based on the new stellar library Medium resolution INT Library of Empirical Spectra (MILES). The major improvements with respect to the Lick/IDS library are the better sampling of stellar parameter space, a generally higher signal-to-noise ratio and a careful flux calibration. In fact, we find that errors on individual index measurements in MILES are considerably smaller than in Lick/IDS. Instead, we find the rms of the residuals between the final fitting functions and the data to be dominated by errors in the stellar parameters. We provide fitting functions for both Lick/IDS and MILES spectral resolutions and compare our results with other fitting functions in the literature. A FORTRAN 90 code is available online in order to simplify the implementation in stellar population models. We further calculate the offsets in index measurements from the Lick/IDS system to a flux-calibrated system. For this purpose, we use the three libraries MILES, ELODIE and STELIB. We find that offsets are negligible in some cases, most notably for the widely used indices Hβ, Mgb, Fe5270 and Fe5335. In a number of cases, however, the difference between the flux-calibrated library and Lick/IDS is significant, with the offsets depending on index strengths. Interestingly, there is no general agreement between the three libraries for a large number of indices, which hampers the derivation of a universal offset between the Lick/IDS and flux-calibrated systems.
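A Lick-style fitting function is, in essence, a low-order polynomial in the stellar parameters fitted by least squares. The sketch below shows that generic form with θ = 5040/Teff; the particular term selection (and the single cross term) is illustrative, not the published MILES choice, and the test data are synthetic.

```python
import numpy as np

def fit_index(theta, feh, logg, index, deg=2):
    """Fit an absorption-line index as a polynomial in (theta, [Fe/H],
    log g), theta = 5040/Teff -- the generic shape of Lick-style fitting
    functions. Returns the design matrix and fitted coefficients."""
    cols = [np.ones_like(theta)]
    for p in (theta, feh, logg):
        for d in range(1, deg + 1):
            cols.append(p ** d)
    cols.append(theta * feh)  # one example cross term (assumed, not MILES's)
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, index, rcond=None)
    return X, coef
```

When the true index depends on terms contained in the design matrix, the fit is exact up to numerical precision; with real library data the residual rms instead reflects the stellar-parameter errors noted in the abstract.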

  5. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  6. A theoretical model of drumlin formation based on observations at Múlajökull, Iceland

    NASA Astrophysics Data System (ADS)

    Iverson, Neal R.; McCracken, Reba; Zoet, Lucas; Benediktsson, Ívar; Schomacker, Anders; Johnson, Mark; Finlayson, Andrew; Phillips, Emrys; Everest, Jeremy

    2016-04-01

    Theoretical models of drumlin formation have generally been developed in isolation from observations in modern drumlin forming environments - a major limitation on the empiricism necessary to confidently formulate models and test them. Observations at a rare modern drumlin field exposed by the recession of the Icelandic surge-type glacier, Múlajökull, allow an empirically-grounded and physically-based model of drumlin formation to be formulated and tested. Till fabrics based on anisotropy of magnetic susceptibility and clast orientations, along with stratigraphic observations and results of ground penetrating radar, indicate that drumlin relief results from basal till deposition on drumlins and erosion between them. These data also indicate that surges cause till deposition both on and between drumlins and provide no evidence of the longitudinally compressive or extensional strain in till that would be expected if flux divergence in a deforming bed were significant. Over 2000 measurements of till density, together with consolidation tests on the till, indicate that effective stresses on the bed were higher between drumlins than within them. This observation agrees with evidence that subglacial water drainage during normal flow of the glacier is through channels in low areas between drumlins and that crevasse swarms, which reduce total normal stresses on the bed, are coincident with drumlins. In the new model slip of ice over a bed with a sinusoidal perturbation, crevasse swarms, and flow of subglacial water toward R-channels that bound the bed undulation during periods of normal flow result in effective stresses that increase toward channels and decrease from the stoss to the lee sides of the undulation. This effective-stress pattern causes till entrainment and erosion by regelation infiltration (Rempel, 2008, JGR, 113) that peaks at the heads of incipient drumlins and near R-channels, while bed shear is inhibited by effective stresses too high to allow

  7. Polarizable Empirical Force Field for Hexopyranose Monosaccharides Based on the Classical Drude Oscillator

    PubMed Central

    2015-01-01

    A polarizable empirical force field based on the classical Drude oscillator is presented for the hexopyranose form of selected monosaccharides. Parameter optimization targeted quantum mechanical (QM) dipole moments, solute–water interaction energies, vibrational frequencies, and conformational energies. Validation of the model was based on experimental data on crystals, densities of aqueous-sugar solutions, diffusion constants of glucose, and rotational preferences of the exocyclic hydroxymethyl of d-glucose and d-galactose in aqueous solution as well as additional QM data. Notably, the final model involves a single electrostatic model for all sixteen diastereomers of the monosaccharides, indicating the transferability of the polarizable model. The presented parameters are anticipated to lay the foundation for a comprehensive polarizable force field for saccharides that will be compatible with the polarizable Drude parameters for lipids and proteins, allowing for simulations of glycolipids and glycoproteins. PMID:24564643

  8. Empirical analysis of web-based user-object bipartite networks

    NASA Astrophysics Data System (ADS)

    Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao

    2010-05-01

    Understanding the structure and evolution of web-based user-object networks is a significant task since they play a crucial role in e-commerce nowadays. This letter reports an empirical analysis of two large-scale websites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on the collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena well characterizing the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.
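    The collaborative similarity index can be sketched on a toy bipartite network. The definition below (mean pairwise cosine similarity among a user's selected objects, each object represented by its column of users) is one plausible reading of the abstract, not the authors' exact formula.

```python
import numpy as np

# Toy bipartite adjacency: rows = users, cols = objects (1 = user selected object)
A = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def collaborative_similarity(A, user):
    """Mean pairwise cosine similarity among the objects a user selected,
    where each object is represented by its column (the users who chose it)."""
    objs = np.flatnonzero(A[user])
    if len(objs) < 2:
        return 0.0
    sims = [cosine(A[:, i], A[:, j])
            for k, i in enumerate(objs) for j in objs[k + 1:]]
    return float(np.mean(sims))

cs = [collaborative_similarity(A, u) for u in range(A.shape[0])]
```

    A low value signals eclectic tastes (the chosen objects share few other users); a value near 1 signals a narrow, internally similar selection.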

  9. STEAM: a software tool based on empirical analysis for micro electro mechanical systems

    NASA Astrophysics Data System (ADS)

    Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat

    2006-03-01

    In this research a generalized software framework that enables accurate computer-aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and the behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin-film materials. This survey indicates that the present-day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers as it would develop empirical models that relate the loading parameters, material properties, and the geometry of the microstructures with its performance characteristics. This process involves learning the relationship between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimation of the desired property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.

  10. Polarizable Empirical Force Field for Acyclic Poly-Alcohols Based on the Classical Drude Oscillator

    PubMed Central

    He, Xibing; Lopes, Pedro E. M.; MacKerell, Alexander D.

    2014-01-01

    A polarizable empirical force field for acyclic polyalcohols based on the classical Drude oscillator is presented. The model is optimized with an emphasis on the transferability of the developed parameters among molecules of different sizes in this series and on the condensed-phase properties validated against experimental data. The importance of the explicit treatment of electronic polarizability in empirical force fields is demonstrated in the cases of this series of molecules with vicinal hydroxyl groups that can form cooperative intra- and intermolecular hydrogen bonds. Compared to the CHARMM additive force field, improved treatment of the electrostatic interactions avoids overestimation of the gas-phase dipole moments, results in significant improvement in the treatment of the conformational energies, and leads to the correct balance of intra- and intermolecular hydrogen bonding of glycerol, as evidenced by the calculated heat of vaporization being in excellent agreement with experiment. Computed condensed-phase data, including crystal lattice parameters and the volumes and densities of aqueous solutions, are in better agreement with experimental data as compared to the corresponding additive model. Such improvements are anticipated to significantly improve the treatment of polymers in general, including biological macromolecules. PMID:23703219

  11. An Empirical Evaluation of Puzzle-Based Learning as an Interest Approach for Teaching Introductory Computer Science

    ERIC Educational Resources Information Center

    Merrick, K. E.

    2010-01-01

    This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course…

  12. Conformational polymorphism in a Schiff-base macrocyclic organic ligand: an experimental and theoretical study.

    PubMed

    Lo Presti, Leonardo; Soave, Raffaella; Longhi, Mariangela; Ortoleva, Emanuele

    2010-10-01

    Polymorphism in the highly flexible organic Schiff-base macrocycle ligand 3,6,9,17,20,23-hexa-azapentacyclo[23.3.1.1(11,15).0(2,6).0(16,20)]triaconta-1(29),9,11,13,15(30),23,25,27-octaene (DIEN, C24H30N6) has been studied by single-crystal X-ray diffraction and both solid-state and gas-phase density functional theory (DFT) calculations. In the literature, only solvated structures of the title compound are known. Two new polymorphs and a new solvated form of DIEN, all obtained from the same solvent with different crystallization conditions, are presented for the first time. They all have P-1 symmetry, with the macrocycle positioned on inversion centres. The two unsolvated polymorphic forms differ in the number of molecules in the asymmetric unit Z', density and cohesive energy. Theoretical results confirm that the most stable form is (II°), with Z' = 1.5. Two distinct molecular conformations have been found, named 'endo' or 'exo' according to the orientation of the imine N atoms, which can be directed towards the interior or the exterior of the macrocycle. The endo arrangement is ubiquitous in the solid state and is shared by two independent molecules which constitute an invariant supramolecular synthon in all the known crystal forms of DIEN. It is also the most stable arrangement in the gas phase. The exo form, on the other hand, appears only in phase (II°), which contains both the conformers. Similarities and differences among the occurring packing motifs, as well as solvent effects, are discussed with the aid of Hirshfeld surface fingerprint plots and correlated to the results of the energy analysis. A possible interconversion path in the gas phase between the endo and the exo conformers has been found by DFT calculations; it consists of a two-step mechanism with activation energies of the order of 30-40 kJ mol^(-1). These findings have been related to the empirical evidence that the most stable phase (II°) is also the last appearing one, in

  13. Developing Empirically Based, Culturally Grounded Drug Prevention Interventions for Indigenous Youth Populations

    PubMed Central

    Okamoto, Scott K.; Helm, Susana; Pel, Suzanne; McClain, Latoya L.; Hill, Amber P.; Hayashida, Janai K. P.

    2012-01-01

    This article describes the relevance of a culturally grounded approach toward drug prevention development for indigenous youth populations. This approach builds drug prevention from the “ground up” (ie, from the values, beliefs, and worldviews of the youth that are the intended consumers of the program), and is contrasted with efforts that focus on adapting existing drug prevention interventions to fit the norms of different youth ethnocultural groups. The development of an empirically based drug prevention program focused on rural Native Hawaiian youth is described as a case example of culturally grounded drug prevention development for indigenous youth, and the impact of this effort on the validity of the intervention and on community engagement and investment in the development of the program are discussed. Finally, implications of this approach for behavioral health services and the development of an indigenous prevention science are discussed. PMID:23188485

  14. A Human ECG Identification System Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhao, Zhidong; Yang, Lei; Chen, Diandian; Luo, Yi

    2013-01-01

    In this paper, a human electrocardiogram (ECG) identification system based on ensemble empirical mode decomposition (EEMD) is designed. A robust preprocessing method comprising noise elimination, heartbeat normalization and quality measurement is proposed to eliminate the effects of noise and heart rate variability. The system is independent of the heart rate. The ECG signal is decomposed into a number of intrinsic mode functions (IMFs) and Welch spectral analysis is used to extract the significant heartbeat signal features. Principal component analysis is used to reduce the dimensionality of the feature space, and the K-nearest neighbors (K-NN) method is applied as the classifier tool. The proposed human ECG identification system was tested on standard MIT-BIH ECG databases: the ST change database, the long-term ST database, and the PTB database. The system achieved an identification accuracy of 95% for 90 subjects, demonstrating the effectiveness of the proposed method in terms of accuracy and robustness. PMID:23698274
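    A minimal sketch of the classification stages (Welch spectra as features, PCA by SVD, leave-one-out K-NN) is given below on synthetic signals. The EEMD/IMF step is omitted for brevity; in the actual system each IMF, rather than the raw segment, would be analysed this way.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250.0
t = np.arange(0, 4, 1 / fs)

def beats(freq, n):
    """Synthetic stand-in for heartbeat segments of one subject."""
    return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
            for _ in range(n)]

# Two "subjects" with different spectral signatures
signals = beats(8.0, 10) + beats(13.0, 10)
labels = np.array([0] * 10 + [1] * 10)

# Welch power spectra as features
feats = np.array([welch(s, fs=fs, nperseg=256)[1] for s in signals])

# PCA by SVD: project onto the top 3 principal components
mu = feats.mean(axis=0)
U, S, Vt = np.linalg.svd(feats - mu, full_matrices=False)
Z = (feats - mu) @ Vt[:3].T

def knn_predict(train, y, x, k=3):
    d = np.linalg.norm(train - x, axis=1)
    nearest = y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Leave-one-out evaluation of the K-NN identifier
pred = [knn_predict(np.delete(Z, i, 0), np.delete(labels, i), Z[i])
        for i in range(len(Z))]
acc = float(np.mean(np.array(pred) == labels))
```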

  15. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify some factors such as age and gender that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for the changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26936529

  16. Conformational studies of (2'-5') polynucleotides: theoretical computations of energy, base morphology, helical structure, and duplex formation.

    PubMed Central

    Srinivasan, A R; Olson, W K

    1986-01-01

    A detailed theoretical analysis has been carried out to probe the conformational characteristics of (2'-5') polynucleotide chains. Semi-empirical energy calculations are used to estimate the preferred torsional combinations of the monomeric repeating unit. The resulting morphology of adjacent bases and the tendency to form regular single-stranded structures are determined by standard computational procedures. The torsional preferences are in agreement with available NMR measurements on model compounds. The tendencies to adopt base stacked and intercalative geometries are markedly depressed compared to those in (3'-5') chains. Very limited families of regular monomerically repeating single-stranded (2'-5') helices are found. Base stacking, however, can be enhanced (but helix formation is at the same time depressed) in mixed puckered chains. Constrained (2'-5') duplex structures have been constructed from a search of all intervening glycosyl and sugar conformations that form geometrically feasible phosphodiester linkages. Both A- and B-type base stacking are found to generate non-standard backbone torsions and mixed glycosyl/sugar combinations. The 2'- and 5'-residues are locked in totally different arrangements and are thereby prevented from generating long helical structures. PMID:2426656

  17. An Empirical Study on Washback Effects of the Internet-Based College English Test Band 4 in China

    ERIC Educational Resources Information Center

    Wang, Chao; Yan, Jiaolan; Liu, Bao

    2014-01-01

    Based on Bailey's washback model, in respect of participants, process and products, the present empirical study was conducted to find the actual washback effects of the internet-based College English Test Band 4 (IB CET-4). The methods adopted are questionnaires, class observation, interview and the analysis of both the CET-4 teaching and testing…

  18. How "Does" the Comforting Process Work? An Empirical Test of an Appraisal-Based Model of Comforting

    ERIC Educational Resources Information Center

    Jones, Susanne M.; Wirtz, John G.

    2006-01-01

    Burleson and Goldsmith's (1998) comforting model suggests an appraisal-based mechanism through which comforting messages can bring about a positive change in emotional states. This study is a first empirical test of three causal linkages implied by the appraisal-based comforting model. Participants (N=258) talked about an upsetting event with a…

  19. Written institutional ethics policies on euthanasia: an empirical-based organizational-ethical framework.

    PubMed

    Lemiengre, Joke; Dierckx de Casterlé, Bernadette; Schotsmans, Paul; Gastmans, Chris

    2014-05-01

    As euthanasia has become a widely debated issue in many Western countries, hospitals and nursing homes especially are increasingly being confronted with this ethically sensitive societal issue. The focus of this paper is how healthcare institutions can deal with euthanasia requests on an organizational level by means of a written institutional ethics policy. The general aim is to make a critical analysis of whether these policies can be considered as organizational-ethical instruments that support healthcare institutions to take their institutional responsibility for dealing with euthanasia requests. By means of an interpretative analysis, we conducted a process of reinterpretation of results of former Belgian empirical studies on written institutional ethics policies on euthanasia in dialogue with the existing international literature. The study findings revealed that legal regulations, ethical and care-oriented aspects strongly affected the development, the content, and the impact of written institutional ethics policies on euthanasia. Hence, these three cornerstones (law, care, and ethics) constituted the basis for the empirical-based organizational-ethical framework for written institutional ethics policies on euthanasia that is presented in this paper. However, having a euthanasia policy does not automatically lead to more legal transparency, or to a more professional and ethical care practice. The study findings suggest that the development and implementation of an ethics policy on euthanasia as an organizational-ethical instrument should be considered as a dynamic process. Administrators and ethics committees must take responsibility to actively create an ethical climate supporting care providers who have to deal with ethical dilemmas in their practice. PMID:24420744

  20. Empirical mode decomposition based background removal and de-noising in polarization interference imaging spectrometer.

    PubMed

    Zhang, Chunmin; Ren, Wenyi; Mu, Tingkui; Fu, Lili; Jia, Chenling

    2013-02-11

    Based on empirical mode decomposition (EMD), the background removal and de-noising procedures of the data taken by a polarization interference imaging spectrometer (PIIS) are implemented. Through numerical simulation, it is shown that the data processing methods are effective. The assumption that the noise mostly exists in the first intrinsic mode function is verified, and the parameters in the EMD thresholding de-noising methods are determined. For comparison, the wavelet and windowed Fourier transform based thresholding de-noising methods are introduced. The de-noised results are evaluated by the SNR, spectral resolution and peak value of the de-noised spectra. All the methods are used to suppress the effect of Gaussian and Poisson noise. The de-noising efficiency is higher for spectra contaminated by Gaussian noise. The interferogram obtained by the PIIS is processed by the proposed methods. Both the interferogram without background and the noise-free spectrum are obtained effectively. The adaptive and robust EMD-based methods are effective for background removal and de-noising in PIIS. PMID:23481716
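    The thresholding step can be sketched as follows, assuming the IMFs have already been computed by some EMD implementation (for example the PyEMD package). The universal threshold with a MAD noise estimate is a common choice, not necessarily the parameterization determined in the paper.

```python
import numpy as np

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def emd_threshold_denoise(imfs, residue, noisy_modes=1):
    """Denoise a signal given its EMD modes: threshold the first
    (noise-dominated) IMFs and sum the rest. The threshold follows the
    common universal rule sigma*sqrt(2 ln N), with sigma estimated from
    the median absolute deviation of the first IMF."""
    n = imfs[0].size
    sigma = np.median(np.abs(imfs[0])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(n))
    out = residue.copy()
    for k, imf in enumerate(imfs):
        out += soft_threshold(imf, thr) if k < noisy_modes else imf
    return out

# Toy example with hand-made "IMFs" (a real pipeline would obtain them
# from an EMD implementation)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(2)
noise = 0.2 * rng.normal(size=t.size)
imfs = [noise, clean]          # pretend decomposition: IMF1 = noise
residue = np.zeros_like(t)
den = emd_threshold_denoise(imfs, residue)
err_before = np.sqrt(np.mean(noise ** 2))
err_after = np.sqrt(np.mean((clean - den) ** 2))
```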

  1. Target detection for low cost uncooled MWIR cameras based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio

    2014-03-01

    In this work, a novel method for detecting low intensity fast moving objects with low cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability, along with the lack of specific electronics and mechanical devices for canceling the effect of distortion, makes background image identification very difficult. As a result, the identification of targets is performed in low signal to noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detection, a preliminary denoising step is designed to enable better drift estimation. Comparisons are conducted against a denoising technique based on the wavelet transform and also with traditional drift estimation methods such as Kalman filtering and running average. The results reported by the simulations show that the proposed scheme has superior performance.
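    One of the baseline drift estimators mentioned above, the running average, can be sketched as an exponential background model; the frame data, drift rate and the injected "target" below are synthetic.

```python
import numpy as np

def running_average_background(frames, alpha=0.05):
    """Exponential running-average background model: B_t = (1-a)B_{t-1} + a*F_t.
    Returns per-frame foreground (frame minus current background estimate)."""
    bg = frames[0].astype(float).copy()
    fgs = []
    for f in frames:
        fg = f - bg            # foreground against the previous estimate
        bg = (1 - alpha) * bg + alpha * f
        fgs.append(fg)
    return np.array(fgs), bg

# Toy sequence: slowly drifting background plus a bright target in one frame
rng = np.random.default_rng(3)
frames = [100 + 0.1 * k + rng.normal(0, 0.5, (8, 8)) for k in range(50)]
frames[30][4, 4] += 20.0          # fast-moving object in frame 30
fgs, bg = running_average_background(np.array(frames))
peak_frame = int(np.argmax(fgs.reshape(50, -1).max(axis=1)))
```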

  2. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768
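    The geometric core of choosing a viewing angle from measured head movement reduces to simple trigonometry. The numbers below (20 cm lateral head range at 60 cm camera distance) are hypothetical, not measurements from the study.

```python
import math

def required_viewing_angle(movement_range_cm, camera_distance_cm):
    """Full viewing angle (degrees) needed so that a head moving laterally
    +/- movement_range_cm/2 stays inside the camera's field of view."""
    half = (movement_range_cm / 2.0) / camera_distance_cm
    return math.degrees(2.0 * math.atan(half))

# Hypothetical numbers: 20 cm of lateral head movement at 60 cm distance
angle = required_viewing_angle(movement_range_cm=20.0, camera_distance_cm=60.0)
```

    A real design would add margin for head rotation and for the DOF needed to keep the eye region in focus across the same movement range.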

  3. Empirical Evaluation Indicators in Thai Higher Education: Theory-Based Multidimensional Learners' Assessment

    ERIC Educational Resources Information Center

    Sritanyarat, Dawisa; Russ-Eft, Darlene

    2016-01-01

    This study proposed empirical indicators which can be validated and adopted in higher education institutions to evaluate the quality of teaching and learning, and to serve as evaluation criteria for human resource management and development of higher institutions in Thailand. The main purpose of this study was to develop empirical indicators of a…

  4. Biological clues on neuronal degeneration based on theoretical fits of decay patterns: towards a mathematical neuropathology.

    PubMed

    Triarhou, Lazaros C

    2010-01-01

    The application of the best mathematical fit to quantitative data on cell death over time in models of nervous abiotrophies can yield useful clues as to the cellular properties of degenerative processes. We review data obtained in two neurogenetic models of movement disorders in the laboratory mouse, the 'Purkinje cell degeneration' (pcd) mutant, a model of cerebellar ataxia, and the 'weaver' (wv) mutant, a combined degeneration of multiple systems including the mesostriatal dopaminergic pathway. In the cerebellum of pcd mice, analyses of transsynaptic granule cell death subsequent to the genetically-determined degeneration of Purkinje cells show that granule neuron fallout follows a typical pattern of exponential decay. In the midbrain of weaver mice, regression fits show that dopaminergic neuron fallout combines two independent components, an initial exponential decay, superseded by a linear regression, with a threshold around 100 days. The biological connotations of such analyses are discussed in light of the empirical observations and the theoretical simulation models. The theoretical connotations may link neuron loss to specific cellular idiosyncrasies in elucidating the pathogenesis of chronic neurodegenerative disorders, including Parkinson's disease. PMID:20383806
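    Fitting an exponential-decay model to cell counts over time, as described for the pcd granule cells, can be sketched with scipy's curve_fit; the counts below are synthetic stand-ins, not the published data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic cell counts following exponential decay (illustrative values,
# not the data from the pcd or weaver mice)
rng = np.random.default_rng(4)
days = np.linspace(0, 300, 30)
true_n0, true_tau = 1.0e5, 90.0
counts = true_n0 * np.exp(-days / true_tau) * (1 + 0.02 * rng.normal(size=days.size))

def exp_decay(t, n0, tau):
    return n0 * np.exp(-t / tau)

# Least-squares fit; a two-phase (exponential + linear) model with a
# breakpoint would be fitted analogously for the weaver data
popt, pcov = curve_fit(exp_decay, days, counts, p0=(9e4, 60.0))
n0_fit, tau_fit = popt
```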

  5. Evaluation of Physically and Empirically Based Models for the Estimation of Green Roof Evapotranspiration

    NASA Astrophysics Data System (ADS)

    Digiovanni, K. A.; Montalto, F. A.; Gaffin, S.; Rosenzweig, C.

    2010-12-01

    Green roofs and other urban green spaces can provide a variety of valuable benefits including reduction of the urban heat island effect, reduction of stormwater runoff, carbon sequestration, oxygen generation, air pollution mitigation etc. As many of these benefits are directly linked to the processes of evaporation and transpiration, accurate and representative estimation of urban evapotranspiration (ET) is a necessary tool for predicting and quantifying such benefits. However, many common ET estimation procedures were developed for agricultural applications, and thus carry inherent assumptions that may only be rarely applicable to urban green spaces. Various researchers have identified the estimation of expected urban ET rates as critical, yet poorly studied components of urban green space performance prediction and cite that further evaluation is needed to reconcile differences in predictions from varying ET modeling approaches. A small scale green roof lysimeter setup situated on the green roof of the Ethical Culture Fieldston School in the Bronx, NY has been the focus of ongoing monitoring initiated in June 2009. The experimental setup includes a 0.6 m by 1.2 m lysimeter replicating the anatomy of the 500 m2 green roof of the building, with a roof membrane, drainage layer, 10 cm media depth, and planted with a variety of Sedum species. Soil moisture sensors and qualitative runoff measurements are also recorded in the lysimeter, while a weather station situated on the rooftop records climatologic data. Direct quantification of actual evapotranspiration (AET) from the green roof weighing lysimeter was achieved through a mass balance approach during periods without precipitation and drainage. A comparison of AET to estimates of potential evapotranspiration (PET) calculated from empirically and physically based ET models was performed in order to evaluate the applicability of conventional ET equations for the estimation of ET from green roofs. Results have
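    As an example of an empirically based PET model of the kind being compared, the Hargreaves equation needs only temperature and extraterrestrial radiation. It is one common agricultural formulation and is not necessarily among the specific models evaluated in the study.

```python
def hargreaves_pet(tmean_c, tmax_c, tmin_c, ra_mj):
    """Hargreaves reference evapotranspiration (mm/day).
    ra_mj: extraterrestrial radiation in MJ m-2 day-1
    (the factor 0.408 converts MJ m-2 day-1 to mm/day of water)."""
    return 0.0023 * 0.408 * ra_mj * (tmean_c + 17.8) * (tmax_c - tmin_c) ** 0.5

# Illustrative summer day (hypothetical inputs, not site measurements)
pet = hargreaves_pet(tmean_c=25.0, tmax_c=30.0, tmin_c=20.0, ra_mj=40.0)
```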

  6. Impact of Inadequate Empirical Therapy on the Mortality of Patients with Bloodstream Infections: a Propensity Score-Based Analysis

    PubMed Central

    Retamar, Pilar; Portillo, María M.; López-Prieto, María Dolores; Rodríguez-López, Fernando; de Cueto, Marina; García, María V.; Gómez, María J.; del Arco, Alfonso; Muñoz, Angel; Sánchez-Porto, Antonio; Torres-Tortosa, Manuel; Martín-Aspas, Andrés; Arroyo, Ascensión; García-Figueras, Carolina; Acosta, Federico; Corzo, Juan E.; León-Ruiz, Laura; Escobar-Lara, Trinidad

    2012-01-01

    The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort including all BSI episodes in adult patients was performed in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented. PMID:22005999
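    The matching step can be sketched with plain numpy: 1:1 nearest-neighbour matching on a propensity score followed by an odds ratio in the matched sample. The score model and cohort below are synthetic; the study's actual covariates and estimation differ.

```python
import numpy as np

def match_and_odds_ratio(score, treated, died):
    """1:1 nearest-neighbour matching on the propensity score without
    replacement, then the odds ratio for the outcome in the matched sample."""
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    used, pairs = set(), []
    for i in t_idx:
        order = c_idx[np.argsort(np.abs(score[c_idx] - score[i]))]
        for j in order:
            if j not in used:
                used.add(j)
                pairs.append((i, j))
                break
    t_sel = np.array([i for i, _ in pairs])
    c_sel = np.array([j for _, j in pairs])
    a = died[t_sel].sum(); b = len(t_sel) - a   # treated: deaths / survivors
    c = died[c_sel].sum(); d = len(c_sel) - c   # matched controls
    return (a * d) / (b * c)

# Synthetic cohort: "treated" = inadequate empirical therapy (illustrative)
rng = np.random.default_rng(5)
n = 400
sev = rng.normal(size=n)                       # severity confounder
score = 1 / (1 + np.exp(-sev))                 # stand-in propensity score
treated = (rng.random(n) < score).astype(int)  # sicker patients more exposed
p_death = 1 / (1 + np.exp(-(sev - 1.0 + 0.8 * treated)))
died = (rng.random(n) < p_death).astype(int)
or_matched = match_and_odds_ratio(score, treated, died)
```

    In practice the score would come from a logistic model on the measured confounders (age, Charlson index, Pitt score, source, etc.), and confidence intervals would accompany the point estimate.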

  7. Is Project Based Learning More Effective than Direct Instruction in School Science Classrooms? An Analysis of the Empirical Research Evidence

    NASA Astrophysics Data System (ADS)

    Dann, Clifford

    An increasingly loud call by parents, school administrators, teachers, and even business leaders for "authentic learning", emphasizing both group-work and problem solving, has led to growing enthusiasm for inquiry-based learning over the past decade. Although "inquiry" can be defined in many ways, a curriculum called "project-based learning" has recently emerged as the inquiry practice-of-choice with roots in the educational constructivism that emerged in the mid-twentieth century. Often, project-based learning is framed as an alternative instructional strategy to direct instruction for maximizing student content knowledge. This study investigates the empirical evidence for such a comparison while also evaluating the overall quality of the available studies in the light of accepted standards for educational research. Specifically, this thesis investigates what the body of quantitative research says about the efficacy of project-based learning vs. direct instruction when considering student acquisition of content knowledge in science classrooms. Further, existing limitations of the research pertaining to project based learning and secondary school education are explored. The thesis concludes with a discussion of where and how we should focus our empirical efforts in the future. The research revealed that the available empirical research contains flaws in both design and instrumentation. In particular, randomization is poor amongst all the studies considered. The empirical evidence indicates that project-based learning curricula improved student content knowledge but that, while the results were statistically significant, increases in raw test scores were marginal.

  8. Feasibility of an empirically based program for parents of preschoolers with autism spectrum disorder.

    PubMed

    Dababnah, Sarah; Parish, Susan L

    2016-01-01

    This article reports on the feasibility of implementing an existing empirically based program, The Incredible Years, tailored to parents of young children with autism spectrum disorder. Parents raising preschool-aged children (aged 3-6 years) with autism spectrum disorder (N = 17) participated in a 15-week pilot trial of the intervention. Quantitative assessments of the program revealed fidelity was generally maintained, with the exception of program-specific videos. Qualitative data from individual post-intervention interviews reported parents benefited most from child emotion regulation strategies, play-based child behavior skills, parent stress management, social support, and visual resources. More work is needed to further refine the program to address parent self-care, partner relationships, and the diverse behavioral and communication challenges of children across the autism spectrum. Furthermore, parent access and retention could potentially be increased by providing in-home childcare vouchers and a range of times and locations in which to offer the program. The findings suggest The Incredible Years is a feasible intervention for parents seeking additional support for child- and family-related challenges and offers guidance to those communities currently using The Incredible Years or other related parenting programs with families of children with autism spectrum disorder. PMID:25717131

  9. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    NASA Astrophysics Data System (ADS)

    Li, Chengwei; Zhan, Liwei

    2015-12-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.
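
    A minimal numpy sketch of the NIMF idea described above, assuming the IMFs have already been computed by an EMD variant (e.g., via the PyEMD package, not required here) and substituting plain Pearson correlation for the paper's modified Hausdorff distance (an illustrative stand-in, not the authors' metric):

```python
import numpy as np

def nimfs(x, imfs):
    """NIMF_k = noisy signal minus the k-th IMF."""
    return [x - imf for imf in imfs]

def noise_like_modes(x, imfs, threshold=0.95):
    """Indices of IMFs whose NIMF closely matches the first NIMF.

    The paper scores similarity with a modified Hausdorff distance;
    correlation is used here only as an illustrative stand-in.
    """
    n = nimfs(x, imfs)
    ref = n[0]  # first NIMF: signal with the (noisiest) first IMF removed
    return [k for k, nk in enumerate(n)
            if np.corrcoef(ref, nk)[0, 1] >= threshold]

def hybrid_filter(x, imfs, threshold=0.95):
    """Reconstruct the signal from the IMFs not flagged as noise-like."""
    noisy = set(noise_like_modes(x, imfs, threshold))
    return sum(imf for k, imf in enumerate(imfs) if k not in noisy)
```

    In practice the `imfs` list would come from EMD, EEMD, or CEEMDAN; the sketch only shows how the NIMF-based mode selection and reconstruction fit together.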

  10. Empirical mode decomposition-based motion artifact correction method for functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Gu, Yue; Han, Junxia; Liang, Zhenhu; Yan, Jiaqing; Li, Zheng; Li, Xiaoli

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a promising technique for monitoring brain activity. However, it is sensitive to motion artifacts. Many methods have been developed for motion correction, such as spline interpolation, wavelet filtering, and kurtosis-based wavelet filtering. We propose a motion correction method based on empirical mode decomposition (EMD), which is applied to segments of data identified as having motion artifacts. The EMD method is adaptive, data-driven, and well suited for nonstationary data. To test the performance of the proposed EMD method and to compare it with other motion correction methods, we used simulated hemodynamic responses added to real resting-state fNIRS data. The EMD method reduced mean squared error in 79% of channels and increased signal-to-noise ratio in 78% of channels. Moreover, it produced the highest Pearson's correlation coefficient between the recovered signal and the original signal, significantly better than the comparison methods (p<0.01, paired t-test). These results indicate that the proposed EMD method is a first choice method for motion artifact correction in fNIRS.

  11. An empirically based steady state friction law and implications for fault stability

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.

    2016-04-01

    Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
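
    For orientation, the classical Dieterich-Ruina rate-and-state formulation that the proposed MFL modifies can be sketched as follows (the standard textbook form, not the authors' new law; here \(\mu_0\), \(a\), \(b\), and \(D_c\) are empirical constants, \(\theta\) a state variable, and \(\sigma_n\) the normal stress):

```latex
\mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
\qquad
\dot{\theta} = 1 - \frac{V\theta}{D_c}.
```

    At steady state \(\theta_{ss} = D_c/V\), so \(\mu_{ss} = \mu_0 + (a-b)\ln(V/V_0)\): velocity weakening occurs for \(a - b < 0\), and slip becomes potentially unstable once the loading stiffness falls below the critical stiffness \(k_c = (b-a)\sigma_n/D_c\). The MFL revises this steady-state dependence at seismic slip rates.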

  12. Study on the Theoretical Foundation of Business English Curriculum Design Based on ESP and Needs Analysis

    ERIC Educational Resources Information Center

    Zhu, Wenzhong; Liu, Dan

    2014-01-01

    Based on a review of the literature on ESP and needs analysis, this paper is intended to offer some theoretical supports and inspirations for BE instructors to develop BE curricula for business contexts. It discusses how the theory of need analysis can be used in Business English curriculum design, and proposes some principles of BE curriculum…

  13. Evaluating the compatibility of physics-based deterministic synthetic ground motion with empirical GMPE

    NASA Astrophysics Data System (ADS)

    Baumann, C.; Dalguer, L. A.

    2012-12-01

    Recent development of deterministic physics-based numerical simulations of earthquakes has contributed to substantial advances in our understanding of different aspects of the earthquake mechanism and near-source ground motion. These models have great potential for identifying and predicting the variability of near-source ground motions dominated by source and/or geological effects. These advances have led to increased interest in using suites of physics-based models for reliable prediction of the ground motion of future earthquakes for seismic hazard assessment and risk mitigation, particularly in areas with few recorded ground motions. But before using synthetic ground motion, it is important to evaluate its reliability, particularly the upper frequency limit. Current engineering practice usually uses ground motion quantities estimated from empirical Ground Motion Prediction Equations (GMPEs), such as peak ground acceleration (PGA), peak ground velocity (PGV), peak ground displacement (PGD), and spectral ordinates, as input to assess building response for the seismic safety of future and existing structures. It is therefore natural to verify the compatibility of synthetic ground motions with current empirical GMPEs. In this study we attempt to do so for a suite of deterministic ground motion simulations generated by earthquake dynamic rupture models. We focus mainly on determining the upper frequency limit up to which the synthetic ground motions are compatible with GMPEs. For that purpose we have generated a suite of earthquake dynamic rupture models in a layered 1D velocity structure. The simulations include 360 dynamic rupture models with moment magnitudes in the range 5.5-7, for three styles of faulting (reverse, normal, and strike slip), for both buried and surface-rupturing faults. Normal stress and frictional strength are depth and non-depth dependent. Initial stress distribution follows

  14. The Role of Social Network Technologies in Online Health Promotion: A Narrative Review of Theoretical and Empirical Factors Influencing Intervention Effectiveness

    PubMed Central

    Kennedy, Catriona M; Buchan, Iain; Powell, John; Ainsworth, John

    2015-01-01

    Background Social network technologies have become part of health education and wider health promotion—either by design or happenstance. Social support, peer pressure, and information sharing in online communities may affect health behaviors. If there are positive and sustained effects, then social network technologies could increase the effectiveness and efficiency of many public health campaigns. Social media alone, however, may be insufficient to promote health. Furthermore, there may be unintended and potentially harmful consequences of inaccurate or misleading health information. Given these uncertainties, there is a need to understand and synthesize the evidence base for the use of online social networking as part of health promoting interventions to inform future research and practice. Objective Our aim was to review the research on the integration of expert-led health promotion interventions with online social networking in order to determine the extent to which the complementary benefits of each are understood and used. We asked, in particular, (1) How is effectiveness being measured and what are the specific problems in effecting health behavior change?, and (2) To what extent is the designated role of social networking grounded in theory? Methods The narrative synthesis approach to literature review was used to analyze the existing evidence. We searched the indexed scientific literature using keywords associated with health promotion and social networking. The papers included were only those making substantial study of both social networking and health promotion—either reporting the results of the intervention or detailing evidence-based plans. General papers about social networking and health were not included. Results The search identified 162 potentially relevant documents after review of titles and abstracts. Of these, 42 satisfied the inclusion criteria after full-text review. Six studies described randomized controlled trials (RCTs) evaluating

  15. Synthesis, characterization, theoretical prediction of activities and evaluation of biological activities of some sulfacetamide based hydroxytriazenes.

    PubMed

    Agarwal, Shilpa; Baroliya, Prabhat K; Bhargava, Amit; Tripathi, I P; Goswami, A K

    2016-06-15

    Six new N-[(4-aminophenyl)sulfonyl]acetamide-based hydroxytriazenes have been synthesized and characterized using elemental analysis, IR, 1H NMR, 13C NMR, and mass spectral analysis. Their probable activities were then predicted theoretically using PASS (Prediction of Activity Spectra for Substances). Although a number of activities were predicted, the anti-inflammatory, antiradical, and anti-diabetic activities in particular have been experimentally validated, showing that the theoretical predictions agree with the experimental results. The object of the Letter is to establish computer-aided drug design (CADD) using these compounds. PMID:27136718

  16. PowerPoint-Based Lectures in Business Education: An Empirical Investigation of Student-Perceived Novelty and Effectiveness

    ERIC Educational Resources Information Center

    Burke, Lisa A.; James, Karen E.

    2008-01-01

    The use of PowerPoint (PPT)-based lectures in business classes is prevalent, yet it remains empirically understudied in business education research. The authors investigate whether students in the contemporary business classroom view PPT as a novel stimulus and whether these perceptions of novelty are related to students' self-assessment of…

  17. Empirical Differences in Omission Tendency and Reading Ability in PISA: An Application of Tree-Based Item Response Models

    ERIC Educational Resources Information Center

    Okumura, Taichi

    2014-01-01

    This study examined the empirical differences between the tendency to omit items and reading ability by applying tree-based item response (IRTree) models to the Japanese data of the Programme for International Student Assessment (PISA) held in 2009. For this purpose, existing IRTree models were expanded to contain predictors and to handle…

  18. An Empirical Introduction to the Concept of Chemical Element Based on Van Hiele's Theory of Level Transitions

    ERIC Educational Resources Information Center

    Vogelezang, Michiel; Van Berkel, Berry; Verdonk, Adri

    2015-01-01

    Between 1970 and 1990, the Dutch working group "Empirical Introduction to Chemistry" developed a secondary school chemistry education curriculum based on the educational vision of the mathematicians van Hiele and van Hiele-Geldof. This approach viewed learning as a process in which students must go through discontinuous level transitions…

  19. Empirically Based Phenotypic Profiles of Children with Pervasive Developmental Disorders: Interpretation in the Light of the DSM-5

    ERIC Educational Resources Information Center

    Greaves-Lord, Kirstin; Eussen, Mart L. J. M.; Verhulst, Frank C.; Minderaa, Ruud B.; Mandy, William; Hudziak, James J.; Steenhuis, Mark Peter; de Nijs, Pieter F.; Hartman, Catharina A.

    2013-01-01

    This study aimed to contribute to the Diagnostic and Statistical Manual (DSM) debates on the conceptualization of autism by investigating (1) whether empirically based distinct phenotypic profiles could be distinguished within a sample of mainly cognitively able children with pervasive developmental disorder (PDD), and (2) how profiles related to…

  20. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    NASA Astrophysics Data System (ADS)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.
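
    For reference, a commonly cited closed form for the random (SD) error of subset-based DIC, quoted here from the general DIC literature rather than reproduced from this paper (\(\sigma\) is the gray-level noise standard deviation and \(f_x\) the local intensity gradients summed over the subset):

```latex
\operatorname{SD}(u) \;\approx\; \sqrt{\frac{2}{\sum_{i} f_x(\mathbf{x}_i)^{2}}}\;\sigma
```

    The factors of 1.2-4 reported above scale this baseline for the element-based local and global Q4/Q8 formulations.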

  1. Acute traumatic brain injury: is current management evidence based? An empirical analysis of systematic reviews.

    PubMed

    Lei, Jin; Gao, Guoyi; Jiang, Jiyao

    2013-04-01

    Traumatic brain injury (TBI) is a major health and socioeconomic problem worldwide, with a high rate of death and long-term disability. Previous studies have summarized evidence from large-scale randomized trials, finding no intervention with convincing efficacy for acute TBI management. The present empirical study set out to assess another crucial component of the evidence base, the systematic review, which contributes substantially to evidence-based health care, in terms of clinical issues, methodological aspects, and implications for practice and research. A total of 44 systematic reviews pertaining to therapeutic interventions for acute TBI were identified through electronic database searching, clinical guideline retrieval, and expert consultation, of which 21 were published in the Cochrane Library and 23 in peer-reviewed journals. Their methodological quality was generally satisfactory, with a median Overview Quality Assessment Questionnaire score of 5.5 (interquartile range 2-7). Cochrane reviews were of better quality than regular journal reviews. Twenty-nine high-quality reviews provided no conclusive evidence for the 22 investigated interventions, except for an adverse effect of corticosteroids. Fewer than one-third of the component trials reported adequate allocation concealment. Other methodological flaws in design, for example, ignoring heterogeneity among the TBI population, also contributed to the failure of past clinical research. Based on these findings, evidence from both systematic reviews and clinical trials does not fully support the current management of acute TBI. Translating laboratory success into clinical effect remains a unique challenge. Accordingly, it may be time to rethink the approach to future practice and clinical research in TBI. PMID:23151044

  2. Polarizable Empirical Force Field for Aromatic Compounds Based on the Classical Drude Oscillator

    PubMed Central

    Lopes, Pedro E. M.; Lamoureux, Guillaume; Roux, Benoit; MacKerell, Alexander D.

    2008-01-01

    The polarizable empirical CHARMM force field based on the classical Drude oscillator has been extended to the aromatic compounds benzene and toluene. Parameters were optimized for benzene and then transferred directly to toluene, with parameters for the methyl moiety of toluene taken from the previously published work on the alkanes. Optimization of all parameters was performed against an extensive set of quantum mechanical and experimental data. Ab initio data was used for determination of the electrostatic parameters, the vibrational analysis, and in the optimization of the relative magnitudes of the Lennard-Jones parameters. The absolute values of the Lennard-Jones parameters were determined by comparing computed and experimental heats of vaporization, molecular volumes, free energies of hydration and dielectric constants. The newly developed parameter set was extensively tested against additional experimental data such as vibrational spectra in the condensed phase, diffusion constants, heat capacities at constant pressure and isothermal compressibilities including data as a function of temperature. Moreover, the structure of liquid benzene, liquid toluene and of solutions of each in water were studied. In the case of benzene, the computed and experimental total distribution function were compared, with the developed model shown to be in excellent agreement with experiment. PMID:17388420

  3. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  4. Ship classification using nonlinear features of radiated sound: an approach based on empirical mode decomposition.

    PubMed

    Bao, Fei; Li, Chen; Wang, Xinlong; Wang, Qingfu; Du, Shuanping

    2010-07-01

    Classification of ship-radiated underwater sound is one of the most important and challenging subjects in underwater acoustic signal processing. An approach to ship classification is proposed in this work based on analysis of ship-radiated acoustic noise in subspaces of intrinsic mode functions obtained via ensemble empirical mode decomposition. It is shown that detection and acquisition of stable and reliable nonlinear features become practically feasible through nonlinear analysis of the time series of the individual decomposed components, each of which is simple enough and well represents an oscillatory mode of ship dynamics. Surrogate and nonlinear predictability analyses are conducted to probe and measure the nonlinearity and regularity. The results of both methods, which verify each other, substantiate that ship-radiated noise contains components with deterministic nonlinear features that serve well for efficient classification of ships. The approach perhaps opens an alternative avenue toward object classification and identification. It may also offer a new view of signals as complex as ship-radiated sound. PMID:20649216

  5. Percentile-based Empirical Distribution Function Estimates for Performance Evaluation of Healthcare Providers

    PubMed Central

    Paddock, Susan M.; Louis, Thomas A.

    2010-01-01

    Summary Hierarchical models are widely used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a squared-error loss minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates. PMID:21918583
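
    A minimal numpy sketch of the percentile-based idea described above, assuming posterior draws of the provider-specific parameters are already available (e.g., from an MCMC fit of the hierarchical model; all names are illustrative):

```python
import numpy as np

def percentile_edf(draws, band=(0.025, 0.975)):
    """Percentile-based EDF estimate from posterior draws.

    draws : (S, K) array of S posterior samples for K provider effects.
    Sorting each draw gives the order statistics of the provider
    ensemble; averaging them across draws yields the EDF estimate, and
    pointwise quantiles of the order statistics give uncertainty bands.
    """
    order = np.sort(draws, axis=1)              # order statistics per draw
    edf_est = order.mean(axis=0)                # K support points of the EDF
    lo, hi = np.quantile(order, band, axis=0)   # pointwise uncertainty band
    return edf_est, lo, hi

def edf_percentile(edf_est, p):
    """A feature of the estimated provider distribution: its p-th percentile."""
    return np.quantile(edf_est, p)
```

    Features such as the top-10% threshold then come with uncertainty attached, addressing the misclassification risk the abstract warns about.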

  6. Pseudo-empirical Likelihood-Based Method Using Calibration for Longitudinal Data with Drop-Out

    PubMed Central

    Chen, Baojiang; Zhou, Xiao-Hua; Chan, Kwun Chuen Gary

    2014-01-01

    Summary In observational studies, interest mainly lies in estimation of the population-level relationship between the explanatory variables and dependent variables, and the estimation is often undertaken using a sample of longitudinal data. In some situations, the longitudinal data sample features biases and loss of estimation efficiency due to non-random drop-out. However, inclusion of population-level information can increase estimation efficiency. In this paper we propose an empirical likelihood-based method to incorporate population-level information in a longitudinal study with drop-out. The population-level information is incorporated via constraints on functions of the parameters, and non-random drop-out bias is corrected by using a weighted generalized estimating equations method. We provide a three-step estimation procedure that makes computation easier. Some commonly used methods are compared in simulation studies, which demonstrate that our proposed method can correct the non-random drop-out bias and increase the estimation efficiency, especially for small sample size or when the missing proportion is high. In some situations, the efficiency improvement is substantial. Finally, we apply this method to an Alzheimer’s disease study. PMID:25587200

  8. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    NASA Astrophysics Data System (ADS)

    Le, Chengfeng; Lehrter, John C.; Hu, Chuanmin; Obenour, Daniel R.

    2016-03-01

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L-1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and volume were related to Moderate Resolution Imaging Spectroradiometer-derived monthly estimates of river plume area (km2) and average, inner shelf chlorophyll a concentration (Chl a, mg m-3). River plume area in June was negatively related with midsummer hypoxic area (km2) and volume (km3), while July inner shelf Chl a was positively related to hypoxic area and volume. Multiple regression models using river plume area and Chl a as independent variables accounted for most of the variability in hypoxic area (R2 = 0.92) or volume (R2 = 0.89). These models explain more variation in hypoxic area than models using Mississippi River nutrient loads as independent variables. The results here also support a hypothesis that confinement of the river plume to the inner shelf is an important mechanism controlling hypoxia area and volume in this region.
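
    The two-predictor model can be sketched with ordinary least squares in numpy; the data values below are entirely hypothetical, chosen only to mimic the reported signs (negative for June plume area, positive for July Chl a):

```python
import numpy as np

def fit_hypoxia_model(plume_area, chla, hypoxic_area):
    """OLS fit of: hypoxic area ~ intercept + plume area + Chl a.

    Returns [intercept, plume coefficient (expected negative),
    Chl a coefficient (expected positive)].
    """
    X = np.column_stack([np.ones_like(plume_area), plume_area, chla])
    beta, *_ = np.linalg.lstsq(X, hypoxic_area, rcond=None)
    return beta
```

    With real data, the R2 values quoted in the abstract would come from comparing the fitted and observed midsummer hypoxic areas.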

  9. Empirical prediction of Indian summer monsoon rainfall with different lead periods based on global SST anomalies

    NASA Astrophysics Data System (ADS)

    Pai, D. S.; Rajeevan, M.

    2006-02-01

    The main objective of this study was to develop empirical models with different seasonal lead times for the long-range prediction of seasonal (June to September) Indian summer monsoon rainfall (ISMR). For this purpose, 13 predictors having significant and stable relationships with ISMR were derived by correlation analysis of global grid-point seasonal sea-surface temperature (SST) anomalies and the tendency in the SST anomalies. The time lags of the seasonal SST anomalies were varied from 1 season to 4 years behind the reference monsoon season. The basic SST data set used was the monthly NOAA Extended Reconstructed Global SST (ERSST) data on a 2° × 2° spatial grid for the period 1951-2003. The time lags of the 13 predictors derived from various areas of all three tropical ocean basins (Indian, Pacific, and Atlantic) varied from 1 season to 3 years. Based on these inter-correlated predictors, three predictor subsets A, B, and C were formed with prediction lead times of 0, 1, and 2 seasons, respectively, from the beginning of the monsoon season. The selected principal components (PCs) of these predictor sets were used as the input parameters for models A, B, and C, respectively. The model development period was 1955-1984. The correct model size was derived using the all-possible-regressions procedure and Mallows' Cp statistic.

  10. The mature minor: some critical psychological reflections on the empirical bases.

    PubMed

    Partridge, Brian C

    2013-06-01

    Moral and legal notions engaged in clinical ethics should possess not only analytic clarity but also a sound basis in empirical findings. The latter condition brings into question the expansion of the mature minor exception. The mature minor exception in the healthcare law of the United States has served to enable those under the legal age to consent to medical treatment. Although originally developed primarily for minors in emergency or quasi-emergency need of health care, it was expanded, especially from the 1970s, to cover unemancipated minors older than 14 years. This expansion initially appeared plausible, given psychological data showing the intellectual capacity of minors over 14 to recognize the causal connection between their choices and the consequences of those choices. However, subsequent psychological studies have shown that minors generally fail to have realistic affective and evaluative appreciations of the consequences of their decisions, because they tend to over-emphasize short-term benefits and underestimate long-term risks. Also, unlike most decision makers over 21, the decisions of minors are more often marked by a lack of adequate impulse control, all of which is reflected in the far higher involvement of adolescents in acts of violence, intentional injury, and serious automobile accidents. These effects are more evident in circumstances that elicit elevated affective responses. The advent of brain imaging has allowed the actual visualization of qualitative differences in how minors versus persons over the age of 21 generally assess risks and benefits and make decisions. In the case of most under the age of 21, subcortical systems are not adequately checked by the prefrontal systems involved in adult executive decisions. The neuroanatomical and psychological model developed by Casey, Jones, and Somerville offers an empirical insight into the qualitative differences in the neuroanatomical and neuropsychological bases

  11. How much does participatory flood management contribute to stakeholders' social capacity building? Empirical findings based on a triangulation of three evaluation approaches

    NASA Astrophysics Data System (ADS)

    Buchecker, M.; Menzel, S.; Home, R.

    2013-06-01

    Recent literature suggests that dialogic forms of risk communication are more effective in building stakeholders' hazard-related social capacities. In spite of the high theoretical expectations, there is a lack of univocal empirical evidence on the relevance of these effects. This is mainly due to the methodological limitations of the existing evaluation approaches. In our paper we aim to elicit the contribution of participatory river revitalisation projects to stakeholders' social capacity building by triangulating the findings of three evaluation studies that were based on different approaches: a field-experimental, a qualitative long-term ex-post and a cross-sectional household survey approach. The results revealed that social learning and avoiding the loss of trust were more relevant benefits of participatory flood management than acceptance building. The results suggest that stakeholder involvements should be more explicitly designed as tools for long-term social learning.

  12. Study of network resource allocation based on market and game theoretic mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Yingmei; Wang, Hongwei; Wang, Gang

    2004-04-01

    We address the network resource allocation problem in network management systems using a market-oriented mechanism. The scheme models telecommunication network resources as trading goods, in which the various network components could be owned by different competitive, real-world entities. This is a multidisciplinary framework that focuses on the similarity between resource allocation in a network environment and the market mechanism in economic theory. By taking an economic (market-based and game-theoretic) approach to routing in communication networks, we study the dynamic behavior of network resource allocation under a game-theoretic framework. Building on the prior work of Gibney and Jennings, we apply the concepts of utility and fitness to the market mechanism with the intention of closing the gap between the experimental environment and real-world conditions.

  13. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1 MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).

  14. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method.
Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
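    The relative root-mean-square error index used above to compare decomposition performances can be sketched as follows; a moving-average split stands in for the EEMD step, since only the selection index is illustrated here, not EEMD itself:

```python
import numpy as np

def relative_rmse(x, x_rec):
    """Relative root-mean-square error: RMS of the reconstruction error
    normalised by the RMS of the original signal."""
    return np.sqrt(np.mean((x - x_rec) ** 2)) / np.sqrt(np.mean(x ** 2))

def toy_extract(x, width):
    """Stand-in for the decomposition at one parameter setting: a moving
    average of `width` samples plays the role of the retained component."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)

# Rank candidate parameter settings by the index and keep the best one,
# mirroring how the paper selects the added-noise level for EEMD.
candidates = [3, 11, 31, 101]
errors = {w: relative_rmse(clean, toy_extract(noisy, w)) for w in candidates}
best_width = min(errors, key=errors.get)
```

Here the moving-average width plays the role that the added-noise level plays in the paper: the setting with the smallest index is retained.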

  15. Pseudopotential-based electron quantum transport: Theoretical formulation and application to nanometer-scale silicon nanowire transistors

    NASA Astrophysics Data System (ADS)

    Fang, Jingtian; Vandenberghe, William G.; Fu, Bo; Fischetti, Massimo V.

    2016-01-01

    We present a formalism to treat quantum electronic transport at the nanometer scale based on empirical pseudopotentials. This formalism offers explicit atomistic wavefunctions and an accurate band structure, enabling a detailed study of the characteristics of devices with a nanometer-scale channel and body. Assuming externally applied potentials that change slowly along the electron-transport direction, we invoke the envelope-wavefunction approximation to apply the open boundary conditions and to develop the transport equations. We construct the full-band open boundary conditions (self-energies of device contacts) from the complex band structure of the contacts. We solve the transport equations and present the expressions required to calculate the device characteristics, such as device current and charge density. We apply this formalism to study ballistic transport in a gate-all-around (GAA) silicon nanowire field-effect transistor with a body-size of 0.39 nm, a gate length of 6.52 nm, and an effective oxide thickness of 0.43 nm. Simulation results show that this device exhibits a subthreshold slope (SS) of ˜66 mV/decade and a drain-induced barrier-lowering of ˜2.5 mV/V. Our theoretical calculations predict that low-dimensionality channels in a 3D GAA architecture are able to meet the performance requirements of future devices in terms of SS swing and electrostatic control.

  16. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    PubMed Central

    Henggeler, Scott W.; Sheidow, Ashli J.

    2011-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief strategic family therapy. In addition to summarizing the theoretical and clinical bases of these treatments, their results in efficacy and effectiveness trials are examined with particular emphasis on any demonstrated capacity to achieve favorable outcomes when implemented by real world practitioners in community practice settings. Special attention is also devoted to research on purported mechanisms of change as well as the long-term sustainability of outcomes achieved by these treatment models. Importantly, we note that the developers of each of the models have developed quality assurance systems to support treatment fidelity and youth and family outcomes; and the developers have formed purveyor organizations to facilitate the large scale transport of their respective treatments to community settings nationally and internationally. PMID:22283380

  17. Theoretical studies of organotin(IV) complexes derived from ONO-donor type Schiff base ligands.

    PubMed

    Şirikci, Gökhan; Ancın, Nilgün Ataünal; Öztaş, Selma Gül

    2015-09-01

    In this work a molecular modeling study was carried out on a series of organotin(IV) derivatives complexed with ONO-donor type Schiff base ligands, to build up a statistical data pool for researchers. For this purpose, various properties of the selected complexes, such as energies, band gaps, chemical reactivity descriptors, polarizabilities, geometric parameters, and (1)H-NMR and (13)C-NMR chemical shift values, were obtained through density functional theory using the B3LYP, CAM-B3LYP, TPSSTPSS, TPSSh, HCTH, wB97XD, and MN12SX functionals. Empirical dispersion corrections were incorporated for some functionals, and solvent effects were also taken into account by applying the polarizable continuum model (PCM). The (1)H-NMR and (13)C-NMR chemical shifts were calculated via linear regression analysis using either the gauge invariant atomic orbital (GIAO) or the continuous set of gauge transformations (CSGT) method. In exploring the structural properties, the quantitative effects of the utilized functionals and of the empirical dispersion corrections on the calculated properties were shown in detail. PMID:26245450

  18. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources

    SciTech Connect

    Poirier, Y.; Sommerville, M.; Johnstone, C.D.; Gräfe, J.; Nygren, I.; Jacso, F.; Khan, R.; Villareal-Barajas, J.E.; Tambasco, M.

    2014-08-15

    Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values.
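    The HVL side of the spectrum-matching step can be illustrated numerically; the bisection below finds the half-value layer of a candidate spectrum, with made-up fluence bins and attenuation coefficients (not values from the abstract):

```python
import numpy as np

def hvl_mm(fluence, mu_per_mm):
    """Half-value layer: the absorber thickness (mm) that halves the
    fluence-weighted transmitted intensity, found by bisection."""
    def transmission(t):
        return np.sum(fluence * np.exp(-mu_per_mm * t)) / np.sum(fluence)

    lo, hi = 0.0, 100.0          # search bracket in mm
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmission(mid) > 0.5:
            lo = mid             # still too transparent: need more absorber
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-bin spectrum with assumed attenuation coefficients.
phi = np.array([0.6, 0.4])       # relative fluence per energy bin
mu = np.array([0.15, 0.05])      # linear attenuation per mm (made-up)
hvl = hvl_mm(phi, mu)
```

For a monoenergetic beam this reduces to the analytic ln(2)/mu; for a polyenergetic spectrum the HVL lies between the HVLs of the individual bins, which is what makes the measured HVL usable for matching candidate spectra.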

  19. BODIPY based colorimetric fluorescent probe for selective thiophenol detection: theoretical and experimental studies.

    PubMed

    Kand, Dnyaneshwar; Mishra, Pratyush Kumar; Saha, Tanmoy; Lahiri, Mayurika; Talukdar, Pinaki

    2012-09-01

    A BODIPY-based selective thiophenol probe capable of discriminating aliphatic thiols is reported. The fluorescence off-on effect upon reaction with thiol is elucidated with theoretical calculations. The sensing of thiophenol is associated with a color change from red to yellow and 63-fold enhancement in green fluorescence. Application of the probe for selective thiophenol detection is demonstrated by live cell imaging. PMID:22751002

  20. Assessment of diffuse trace metal inputs into surface waters - Combining empirical estimates with process-based simulations

    NASA Astrophysics Data System (ADS)

    Schindewolf, Marcus; Steinz, André; Schmidt, Jürgen

    2015-04-01

    As a result of mining activities since the 13th century, surface waters of the German Mulde catchment suffer from deleterious dissolved and sediment-attached lead (Pb) and zinc (Zn) inputs. The leaching rate of trace metals with drainage water is a significant criterion for assessing trace metal concentrations of soils and the associated risks of groundwater pollution. However, the vertical transport rates of trace metals in soils are difficult to quantify. Monitoring is restricted to small lysimeter plots, which limits the transferability of results. Additionally, the solid-liquid transfer conditions in soils are highly variable, primarily due to the fluctuating retention time of percolating soil water. In contrast, lateral sediment-attached trace metal inputs are mostly associated with soil erosion and the resulting sediment inputs into surface waters. Since soil erosion by water is related to rare single events, monitoring and empirical estimates show clear shortcomings. This gap in knowledge can only be closed by process-based model calculations. These calculations have to consider that Pb and Zn are predominantly attached to the fine-grained soil particles (<0.063 mm). The selective nature of soil erosion causes a preferential transport of these fine particles, while less contaminated larger particles remain on site. Consequently, trace metals are enriched in the eroded sediment compared to the original soil. This paper aims to introduce both a new method that allows the assessment of trace metal leaching rates from contaminated top soils under standardised transfer conditions, and a process-based modelling approach for sediment-attached trace metal inputs into surface waters. Pb and Zn leaching rates amount to 20 Mg ha-1 yr-1 and 114 Mg ha-1 yr-1, respectively. Deviations from observed dissolved trace metal yields at the Bad Düben gauging station are caused by plant uptake and subsoil retention. 
Sediment-attached Pb and Zn input rates amount to 114 Mg ha-1 yr

  1. Theoretical Determination of the pKa Values of Betalamic Acid Related to the Free Radical Scavenger Capacity: Comparison Between Empirical and Quantum Chemical Methods.

    PubMed

    Tutone, Marco; Lauria, Antonino; Almerico, Anna Maria

    2016-06-01

    Health benefits of dietary phytochemicals have been suggested in recent years. Among thousands of different compounds, Betalains, which occur in vegetables of the Caryophyllales order (cactus pear fruits and red beet), have been considered because of their reducing power and potential to affect redox-modulated cellular processes. The antioxidant power of Betalains is strictly due to the dissociation rate of the acid moieties present in all the molecules of this family of phytochemicals. Experimentally, only the pKa values of betanin were determined. Recently, it was evidenced that the acid dissociation, at different environmental pHs, affects its electron-donating capacity, and further its free radical scavenging power. The identical correlation was studied on another Betalains family compound, Betalamic Acid. Experimental evidence showed that the free radical scavenging capacity of this compound drastically decreases at pH > 5, but its pKa values were not measured experimentally. With the aim of explaining the behavior of Betalamic Acid as a free radical scavenger, in this paper we tried to predict its pKa values in silico by means of different approaches. Starting from the known experimental pKas of acid compounds, both phytochemical and small organic, two empirical approaches and a quantum-mechanical calculation were compared to give a reliable prediction of the pKas of Betalamic Acid. Results from these computational approaches are consistent with the experimental evidence. As shown herein, in silico, the totally dissociated species, at the experimental pH > 5 in solution, is predominant, exploiting the higher electron-donating capability (HOMO energy). Therefore, the computationally estimated pKa values of Betalamic Acid are very reliable. PMID:26253717
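    The pH dependence described above follows the Henderson-Hasselbalch relation; a minimal sketch with placeholder pKa values (the abstract does not report the predicted numbers, so the values below are purely illustrative):

```python
def fraction_dissociated(pH, pKa):
    """Fraction of an acid group in the dissociated (A-) form at a given
    pH, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Hypothetical pKa values for the acid groups, for illustration only.
pKas = [2.0, 3.5, 4.8]

def fully_dissociated_at(pH):
    """True when every acid group is majority-dissociated at this pH."""
    return all(fraction_dissociated(pH, pka) > 0.5 for pka in pKas)
```

Once the pH exceeds the highest pKa, the totally dissociated species dominates, which is the regime the abstract links to the higher electron-donating capability.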

  2. Empirical population and public health ethics: A review and critical analysis to advance robust empirical-normative inquiry.

    PubMed

    Knight, Rod

    2016-05-01

    The field of population and public health ethics (PPHE) has yet to fully embrace the generation of evidence as an important project. This article reviews the philosophical debates related to the 'empirical turn' in clinical bioethics, and critically analyses how PPHE has and can engage with the philosophical implications of generating empirical data within the task of normative inquiry. A set of five conceptual and theoretical issues pertaining to population health that are unresolved and could potentially benefit from empirical PPHE approaches to normative inquiry are discussed. Each issue differs from traditional empirical bioethical approaches, in that they emphasize (1) concerns related to the population, (2) 'upstream' policy-relevant health interventions - within and outside of the health care system and (3) the prevention of illness and disease. Within each theoretical issue, a conceptual example from population and public health approaches to HIV prevention and health promotion is interrogated. Based on the review and critical analysis, this article concludes that empirical-normative approaches to population and public health ethics would be most usefully pursued as an iterative project (rather than as a linear project), in which the normative informs the empirical questions to be asked and new empirical evidence constantly directs conceptualizations of what constitutes morally robust public health practices. Finally, a conceptualization of an empirical population and public health ethics is advanced in order to open up new interdisciplinary 'spaces', in which empirical and normative approaches to ethical inquiry are transparently (and ethically) integrated. PMID:25956917

  3. Simulation of Long Lived Tracers Using an Improved Empirically Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, E. L.; Jackman, C. H.; Stolarski, R. S.; Considine, D. B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects due to gravity waves and equatorial Kelvin waves. As such, this scheme utilizes significantly more information compared to our previous algorithm which was based only on zonal mean temperatures and heating rates. The new model transport captures much of the qualitative structure and seasonal variability observed in long lived tracers, such as: isolation of the tropics and the southern hemisphere winter polar vortex; the well mixed surf-zone region of the winter sub-tropics and mid-latitudes; the latitudinal and seasonal variations of total ozone; and the seasonal variations of mesospheric H2O. The model also indicates a double peaked structure in methane associated with the semiannual oscillation in the tropical upper stratosphere. This feature is similar in phase but is significantly weaker in amplitude compared to the observations. The model simulations of carbon-14 and strontium-90 are in good agreement with observations, both in simulating the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also find mostly good agreement between modeled and observed age of air determined from SF6 outside of the northern hemisphere polar vortex. However, observations inside the vortex reveal significantly older air compared to the model. This is consistent with the model deficiencies in simulating CH4 in the northern hemisphere winter high latitudes and illustrates the limitations of the current climatological zonal mean model formulation. The propagation of seasonal signals in water vapor and CO2 in the lower stratosphere showed general agreement in phase, and the model qualitatively captured the observed amplitude decrease in CO2 from the tropics to midlatitudes. However, the simulated seasonal

  4. Model Selection for Equating Testlet-Based Tests in the NEAT Design: An Empirical Study

    ERIC Educational Resources Information Center

    He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia

    2012-01-01

    For tests solely composed of testlets, the local item independence assumption tends to be violated. Using empirical data from a large-scale state assessment program, this study investigated the effects of using different models on equating results under the non-equivalent group anchor-test (NEAT) design. Specifically, the…

  5. Universal Design for Instruction in Postsecondary Education: A Systematic Review of Empirically Based Articles

    ERIC Educational Resources Information Center

    Roberts, Kelly D.; Park, Hye Jin; Brown, Steven; Cook, Bryan

    2011-01-01

    Universal Design for Instruction (UDI) in postsecondary education is a relatively new concept/framework that has generated significant support. The purpose of this literature review was to examine existing empirical research, including qualitative, quantitative, and mixed methods, on the use of UDI (and related terms) in postsecondary education.…

  6. Comparisons of experiment with cellulose models based on electronic structure and empirical force field theories

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Studies of cellobiose conformations with HF/6-31G* and B3LYP/6-31+G* quantum theory [1] gave a reference for studies with the much faster empirical methods such as MM3, MM4, CHARMM and AMBER. The quantum studies also enable a substantial reduction in the number of exo-cyclic group orientations that...

  7. Monitoring of Qualifications and Employment in Austria: An Empirical Approach Based on the Labour Force Survey

    ERIC Educational Resources Information Center

    Lassnigg, Lorenz; Vogtenhuber, Stefan

    2011-01-01

    The empirical approach referred to in this article describes the relationship between education and training (ET) supply and employment in Austria; the use of the new ISCED (International Standard Classification of Education) fields of study variable makes this approach applicable abroad. The purpose is to explore a system that produces timely…

  8. A Model of the Regulation of Nitrogenase Electron Allocation in Legume Nodules (II. Comparison of Empirical and Theoretical Studies in Soybean).

    PubMed Central

    Moloney, A. H.; Guy, R. D.; Layzell, D. B.

    1994-01-01

    In N2-fixing legumes, the proportion of total electron flow through nitrogenase (total nitrogenase activity, TNA) that is used for N2 fixation is called the electron allocation coefficient (EAC). Previous studies have proposed that EAC is regulated by the competitive inhibition of H2 on N2 fixation and that the degree of H2 inhibition can be affected by a nodule's permeability to gas diffusion. To test this hypothesis, EAC was measured in soybean (Glycine max L. Merr.) nodules exposed to various partial pressures of H2 and N2, with or without changes in TNA or nodule permeability to gas diffusion, and the results were compared with the predictions of a mathematical model that combined equations for gas diffusion and competitive inhibition of N2 fixation (A. Moloney and D.B. Layzell [1993] Plant Physiol 103: 421-428). The empirical data clearly showed that decreases in EAC were associated with increases in external pH2, decreases in external pN2, and decreases in nodule permeability to O2 diffusion. The model predicted similar trends in EAC, and the small deviations that occurred between measured and predicted values could be readily accounted for by altering one or more of the following model assumptions: Ki(H2) of nitrogenase (range from 2-4% H2), Km(N2) of nitrogenase (range from 4-5% N2), the allocation of less than 100% of whole-nodule respiration to tissues within the diffusion barrier, and the presence of a diffusion pathway that is open pore versus closed pore. The differences in the open-pore and closed-pore versions of the model suggest that it may be possible to use EAC measurements as a tool for the study of legume nodule diffusion barrier structure and function. The ability of the model to predict EAC provided strong support for the hypothesis that H2 inhibition of N2 fixation plays a major role in the in vivo control of EAC and that the presence of a variable barrier to gas diffusion affects the H2 and N2 concentration in the infected cell and
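    The competitive-inhibition mechanism can be sketched with a toy steady-state model; the kinetic form below is a generic competitive Michaelis-Menten expression with parameters chosen inside the ranges quoted in the abstract, not the actual Moloney-Layzell model:

```python
def eac(pN2, pH2, Km=4.5, Ki=3.0):
    """Toy electron allocation coefficient: the fraction of a fixed total
    nitrogenase electron flux that goes to N2 reduction, with H2 acting
    as a competitive inhibitor. Km and Ki are in % partial pressure,
    picked inside the abstract's quoted ranges (Km(N2) ~ 4-5%,
    Ki(H2) ~ 2-4%); the kinetic form itself is illustrative.
    """
    return pN2 / (Km * (1.0 + pH2 / Ki) + pN2)

# EAC falls as external pH2 rises (and as pN2 falls), matching the
# measured trends reported in the abstract.
trend = [eac(80.0, h) for h in (0.0, 2.0, 4.0, 8.0)]
```

This reproduces the qualitative behaviour only: raising pH2 inflates the effective Km and so diverts a larger share of the electron flux away from N2 reduction.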

  9. Shape of the self-concept clarity change during group psychotherapy predicts the outcome: an empirical validation of the theoretical model of the self-concept change

    PubMed Central

    Styła, Rafał

    2015-01-01

    Background: Self-Concept Clarity (SCC) describes the extent to which the schemas of the self are internally integrated, well defined, and temporally stable. This article presents a theoretical model that describes how different shapes of SCC change (especially a stable increase and a “V” shape) observed in the course of psychotherapy are related to the therapy outcome. Linking the concepts of Jean Piaget and dynamic systems theory, the study postulates that a stable SCC increase is needed for participants with a rather healthy personality structure, while SCC change characterized by a “V” shape or fluctuations is optimal for more disturbed patients. Method: A correlational study in a naturalistic setting with repeated measurements (M = 5.8) was conducted on a sample of 85 patients diagnosed with neurosis and personality disorders receiving intensive eclectic group psychotherapy under routine inpatient conditions. Participants filled in the Self-Concept Clarity Scale (SCCS), Symptoms' Questionnaire KS-II, and Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. The SCCS was also administered every 2 weeks during psychotherapy. Results: As hypothesized, in the relatively healthiest group of patients a stable SCC increase was related to a positive treatment outcome, while more disturbed patients benefited from fluctuations and a “V”-shaped SCC change. Conclusions: The findings support the idea that, depending on personality disposition, either a monotonic increase or a transient destabilization of SCC is a sign of a good treatment prognosis. PMID:26579001

  10. A game-theoretic framework for landmark-based image segmentation.

    PubMed

    Ibragimov, Bulat; Likar, Boštjan; Pernus, Franjo; Vrtovec, Tomaz

    2012-09-01

    A novel game-theoretic framework for landmark-based image segmentation is presented. Landmark detection is formulated as a game, in which landmarks are players, landmark candidate points are strategies, and likelihoods that candidate points represent landmarks are payoffs, determined according to the similarity of image intensities and spatial relationships between the candidate points in the target image and their corresponding landmarks in images from the training set. The solution of the formulated game-theoretic problem is the equilibrium of candidate points that represent landmarks in the target image and is obtained by a novel iterative scheme that solves the segmentation problem in polynomial time. The object boundaries are finally extracted by applying dynamic programming to the optimal path searching problem between the obtained adjacent landmarks. The performance of the proposed framework was evaluated for segmentation of lung fields from chest radiographs and heart ventricles from cardiac magnetic resonance cross sections. The comparison to other landmark-based segmentation techniques shows that the results obtained by the proposed game-theoretic framework are highly accurate and precise in terms of mean boundary distance and area overlap. Moreover, the framework overcomes several shortcomings of the existing techniques, such as sensitivity to initialization and convergence to local optima. PMID:22692901
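    The equilibrium search at the heart of such a scheme can be sketched abstractly with best-response iteration (one simple equilibrium-finding strategy, not necessarily the paper's polynomial-time scheme); the unary intensity-likelihood and pairwise spatial-compatibility payoffs below are random placeholders, not image-derived:

```python
import numpy as np

rng = np.random.default_rng(2)
L, C = 4, 5                              # landmarks (players), candidates (strategies)
unary = rng.random((L, C))               # intensity-likelihood term per candidate
pair = rng.random((L, L, C, C))          # spatial-compatibility term per pair
pair = (pair + pair.transpose(1, 0, 3, 2)) / 2   # symmetrize: a potential game

def payoff(i, c, choices):
    """Payoff of landmark i playing candidate c against the others' choices."""
    return unary[i, c] + sum(
        pair[i, j, c, choices[j]] for j in range(L) if j != i)

def best_response_equilibrium(max_sweeps=100):
    """Each landmark in turn switches to its best candidate given the others'
    current choices; stop when no landmark wants to deviate (an equilibrium)."""
    choices = [0] * L
    for _ in range(max_sweeps):
        changed = False
        for i in range(L):
            best = max(range(C), key=lambda c: payoff(i, c, choices))
            if best != choices[i]:
                choices[i] = best
                changed = True
        if not changed:
            return choices
    return choices

eq = best_response_equilibrium()
```

Because the pairwise payoffs are symmetric, every unilateral improvement increases a global potential, so the sweep is guaranteed to terminate at an equilibrium.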

  11. Density functional theoretical and NMR study of Hammett bases in acidic zeolites

    SciTech Connect

    Nicholas, J.B.; Haw, J.F.; Beck, L.W.; Krawietz, T.R.; Ferguson, D.B.

    1995-12-13

    We demonstrate here that theoretical calculations using density functional theory (DFT) accurately model proton transfer reactions between Brønsted sites in zeolites (the archetypal solid acids) and Hammett bases. The validity of the theoretical results is verified by NMR measurements of key nuclei of the same Hammett bases in zeolites HZSM-5 (MFI) and HY (FAU), the first such experiments. The accuracy of the predictions of the DFT calculations for the HZSM-5 zeolite model suggests that they may be extended to other zeolite cluster models, including those which have not yet been realized experimentally and hence are not available for NMR study. We optimized the adsorbate-zeolite complexes with the Si-O-Al angle constrained to larger values; to our surprise, the SVWN/DNP calculations resulted in the proton being transferred from p-fluoronitrobenzene back to the zeolite, even if the Si-O-Al angle was held fixed at 180°. Further tests at higher levels of theory are in progress. This investigation used a choice of indicators that necessarily resulted in wide limits on zeolite acid strength, but the theoretical and experimental methodologies have been established. 21 refs., 3 figs.

  12. Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation

    SciTech Connect

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2014-01-01

    Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.

  13. A comprehensive theoretical model for on-chip microring-based photonic fractional differentiators

    PubMed Central

    Jin, Boyuan; Yuan, Jinhui; Wang, Kuiru; Sang, Xinzhu; Yan, Binbin; Wu, Qiang; Li, Feng; Zhou, Xian; Zhou, Guiyao; Yu, Chongxiu; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.

    2015-01-01

    Microring-based photonic fractional differentiators play an important role in on-chip all-optical signal processing. Unfortunately, previous works do not consider the time-reversal and time-delay characteristics of the microring-based fractional differentiator, nor do they include the effect of input pulse width on the output. In particular, they cannot explain why the microring-based differentiator with differentiation order n > 1 has larger output deviation than that with n < 1, or why the microring-based differentiator cannot reproduce the three-peak output waveform of an ideal differentiator with n > 1. In this paper, a comprehensive theoretical model is proposed. The critically coupled microring resonator is modeled as an ideal first-order differentiator, while the under-coupled and over-coupled resonators are modeled as time-reversed ideal fractional differentiators. Traditionally, over-coupled microring resonators are used to form differentiators with 1 < n < 2. However, we demonstrate that a smaller fitting error can be obtained if the over-coupled microring resonator is fitted by an ideal differentiator with n < 1. The time delay of the differentiator is also considered. Finally, the influences of some key factors on the output waveform and deviation are discussed. The proposed theoretical model is beneficial for the design and application of microring-based fractional differentiators. PMID:26381934
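The ideal fractional differentiator that the microring models approximate has the frequency-domain transfer function H(ω) = (jω)^n. A minimal sketch of that ideal operator (not the microring transfer function itself, and with an assumed periodic input) can be written with an FFT:

```python
import numpy as np

def fractional_derivative(signal, dt, n):
    """Ideal n-th order fractional differentiator: multiply the
    spectrum of the (assumed periodic) signal by (j*omega)**n.
    This sketches the ideal operator the paper's microrings
    approximate, not the resonator response itself."""
    N = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)  # angular frequencies
    H = (1j * omega) ** n                        # ideal transfer function
    return np.fft.ifft(H * np.fft.fft(signal))
```

For n = 1 on a periodic grid this reduces to exact spectral differentiation; fractional n interpolates between the input and its derivative.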

  14. Empirical model of equatorial electrojet based on ground-based magnetometer data during solar minimum in fall

    NASA Astrophysics Data System (ADS)

    Hamid, Nurul Shazana Abdul; Liu, Huixin; Uozumi, Teiji; Yoshikawa, Akimasa

    2015-12-01

    In this study, we constructed an empirical model of the equatorial electrojet (EEJ), including local time and longitudinal dependence, based on simultaneous data from 12 magnetometer stations located in six longitude sectors. The analysis was carried out using the equatorial electrojet index, EUEL, calculated from the geomagnetic northward H component. The magnetic EEJ strength is calculated as the difference between the normalized EUEL index of the magnetic dip equator station and the normalized EUEL index of the off-dip equator station located beyond the EEJ band. Analysis showed that this current is always strongest in the South American sector, regardless of local time (LT), and weakest in the Indian sector between 0900 and 1000 LT, shifting to the African sector between 1100 and 1400 LT. These longitudinal variations of the EEJ roughly follow variations of the inverse of the main field strength along the dip equator, except in the Indian and Southeast Asian sectors. The results showed that the EEJ component derived from the model exhibits a pattern similar to the EEJ measured from ground data during noontime, mainly before 1300 LT.
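The EEJ strength described above is a difference of two normalized station indices. A minimal sketch, assuming (hypothetically) that normalization means dividing each EUEL index by a station baseline, since the abstract does not specify the normalization:

```python
def eej_strength(euel_dip, euel_off, baseline_dip, baseline_off):
    """Hypothetical sketch: EEJ strength as the difference between
    the normalized EUEL index at the dip-equator station and at an
    off-dip station outside the EEJ band. Division by a per-station
    baseline is an illustrative assumption, not the paper's exact
    normalization."""
    return euel_dip / baseline_dip - euel_off / baseline_off
```

The off-dip term removes the global (non-electrojet) part of the H-component variation, leaving the electrojet contribution.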

  15. [DGRW-update: neurology--from empirical strategies towards evidence based interventions].

    PubMed

    Schupp, W

    2011-12-01

    Stroke, multiple sclerosis (MS), traumatic brain injuries (TBI) and neuropathies are the most important diseases in neurological rehabilitation financed by the German Pension Insurance. The primary goal is vocational (re)integration. Driven by multiple findings of neuroscience research, the traditional holistic approach with mainly empirically derived strategies was developed further and improved by new evidence-based interventions. This process had been, and continues to be, necessary to meet the health-economic pressures for ever shorter and more efficient rehab measures. Evidence-based interventions refer to symptom-oriented measures, to team-management concepts, as well as to education and psychosocial interventions. Drug therapy and/or neurophysiological measures can be added to increase neuroregeneration and neuroplasticity. Evidence-based aftercare concepts support the sustainability and steadiness of rehab results. Mirror therapy, robot-assisted training, mental training, task-specific training, and above all constraint-induced movement therapy (CIMT) can restore motor arm and hand functions. Treadmill training and robot-assisted training improve stance and gait. Botulinum toxin injections in combination with physical and redressing methods are superior in managing spasticity. Guideline-oriented management of associated pain syndromes (myofascial, neuropathic, complex-regional=dystrophic) improves primary outcome and quality of life. Drug therapy with so-called co-analgesics and physical therapy play an important role in pain management. Swallowing disorders lead to higher mortality and morbidity in the acute phase; stepwise diagnostics (screening, endoscopy, radiology) and specific swallowing therapy can reduce these risks and frequently can restore normal eating and drinking. In our modern industrial societies communicative and cognitive disturbances are more impairing than the above mentioned disorders. Speech and language therapy (SLT) is dominant in

  16. Upscaling Empirically Based Conceptualisations to Model Tropical Dominant Hydrological Processes for Historical Land Use Change

    NASA Astrophysics Data System (ADS)

    Toohey, R.; Boll, J.; Brooks, E.; Jones, J.

    2009-12-01

    Surface runoff and percolation to ground water are two hydrological processes of concern on the Atlantic slope of Costa Rica because of their impacts on flooding and drinking water contamination. As per legislation, the Costa Rican Government funds land use management from the farm to the regional scale to improve or conserve hydrological ecosystem services. In this study, we examined how land use (e.g., forest, coffee, sugar cane, and pasture) affects hydrological response at the point, plot (1 m2), and field scale (1-6 ha) to empirically conceptualize the dominant hydrological processes in each land use. Using our field data, we upscaled these conceptual processes into a physically-based distributed hydrological model at the field, watershed (130 km2), and regional (1500 km2) scales. At the point and plot scales, the presence of macropores and large roots promoted greater vertical percolation and subsurface connectivity in the forest and coffee field sites. The lack of macropores and large roots, plus the addition of management artifacts (e.g., surface compaction and a plough layer), altered the dominant hydrological processes by increasing lateral flow and surface runoff in the pasture and sugar cane field sites. Macropores and topography were major influences on runoff generation at the field scale. Also at the field scale, antecedent moisture conditions suggest a threshold behavior as a temporal control on surface runoff generation. However, in this tropical climate with very intense rainstorms, annual surface runoff was less than 10% of annual precipitation at the field scale. Significant differences in soil and hydrological characteristics observed at the point and plot scales appear to have less significance when upscaled to the field scale. At the point and plot scales, percolation acted as the dominant hydrological process in this tropical environment. However, at the field scale for sugar cane and pasture sites, saturation-excess runoff increased as

  17. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    NASA Astrophysics Data System (ADS)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-07-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported by this semantic family.

  18. A Theoretical and Empirical Investigation of Professional Development's Impact on Self- and Collective Efficacy by School Accountability Status

    ERIC Educational Resources Information Center

    Moon, Gail S.

    2012-01-01

    This quantitative study used the Schools and Staffing Survey of 2007-2008, a school-based stratified probability-proportionate-to-size sample of all American schools, to explore the relationships of professional development to teachers' self- and collective efficacy by school accountability status as measured by adequate yearly progress…

  19. The Hedonic Wage Technique as a Tool for Estimating the Costs of School Personnel: A Theoretical Exposition with Implications for Empirical Analysis.

    ERIC Educational Resources Information Center

    Chambers, Jay G.

    Present systems for the apportionment of grants from the state or federal level to local public school districts are based primarily on measures of district wealth as modified by weightings for the characteristics of the student population. Until recently little attention has been given to differences among districts in the costs of providing…

  20. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results are as follows: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning, and exploitative learning plays a significant role in both radical and incremental business model innovation.

  1. Deep in Data: Empirical Data Based Software Accuracy Testing Using the Building America Field Data Repository: Preprint

    SciTech Connect

    Neymark, J.; Roberts, D.

    2013-06-01

    An opportunity exists to use home energy consumption and building description data to develop a standardized accuracy test for residential energy analysis tools, that is, to test the ability of uncalibrated simulations to match real utility bills. Empirical data collected from around the United States have been translated into a uniform Home Performance Extensible Markup Language format that may enable software developers to create translators to their input schemes for efficient access to the data. This may make it practical to model many homes expediently and thus to implement software accuracy test cases by applying the translated data. This paper describes progress toward, and issues related to, developing a usable, standardized, empirical-data-based software accuracy test suite.

  2. [Migration between rural peripheral areas and urban central areas in Africa: a theoretical and empirical study of migration using the example of Nairobi].

    PubMed

    Vorlaufer, K

    1984-01-01

    Migration between rural peripheral areas and urban central areas is analyzed using the city of Nairobi, Kenya, as an example. The study is based on official Kenyan data for 1969-1979. The role of Nairobi as a focal point for both centripetal and centrifugal migration is discussed, and the volume, intensity, and direction of migration streams are examined. An attempt is also made to evaluate this migration in terms of modernization and dependency theories. PMID:12340512

  3. Theoretical Study of Surface Plasmon Resonance-based Fiber Optic Sensor Utilizing Cobalt and Nickel Films

    NASA Astrophysics Data System (ADS)

    Shukla, Sarika; Sharma, Navneet K.; Sajal, Vivek

    2016-06-01

    A surface plasmon resonance (SPR) based fiber optic sensor with cobalt (Co) and nickel (Ni) layers (one layer at a time) is theoretically analyzed. The sensitivity of the sensor increases linearly with the refractive index of the sensing medium for all thicknesses of the Co and Ni layers. In addition, the SPR sensor with a Co layer demonstrates higher sensitivity than that with a Ni layer. The use of Co in place of noble metals such as gold and silver reduces the cost of the SPR sensor. The optimized thicknesses of the Co and Ni layers are found to be 80 nm and 60 nm, respectively.

  4. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    PubMed Central

    Lüdecke, Daniel

    2014-01-01

    Introduction Health care providers seek to improve patient-centred care. Due to fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms that ensure sufficient consideration of patient centredness. Theory and methods Seventeen qualitative interviews were conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded in a systems theoretical framework, was used to describe and analyse the data and to provide an insight into the specific perception of organisational behaviour in integrated care. Results The findings suggest that integrated care partnerships rely on networks based on professional autonomy in the context of reliability. The relationships of network partners are heavily based on informality. This correlates with a systems theoretical conception of organisations, which are assumed to be autonomous in their decision-making. Conclusion and discussion Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended. PMID:25411573

  5. The neural mediators of kindness-based meditation: a theoretical model.

    PubMed

    Mascaro, Jennifer S; Darcher, Alana; Negi, Lobsang T; Raison, Charles L

    2015-01-01

    Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here, we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another's affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work. PMID:25729374

  7. Using a Process-Based Numerical Model and Simple Empirical Relationships to Evaluate CO2 Fluxes from Agricultural Soils.

    NASA Astrophysics Data System (ADS)

    Buchner, J.; Simunek, J.; Dane, J. H.; King, A. P.; Lee, J.; Rolston, D. E.; Hopmans, J. W.

    2007-12-01

    Carbon dioxide emissions from an agricultural field in the Sacramento Valley, California, were evaluated using the process-based SOILCO2 module of the HYDRUS-1D software package and a simple empirical model. CO2 fluxes, meteorological variables, soil temperatures, and water contents were measured during years 2004-2006 at multiple locations in an agricultural field, half of which had been subjected to standard tillage and the other half to minimum tillage. Furrow irrigation was applied on a regular basis. While HYDRUS-1D simulates dynamic interactions between soil water contents, temperatures, soil CO2 concentrations, and soil respiration by numerically solving partial differential equations for water flow (Richards equation) and for heat and CO2 transport (convection-dispersion equations), the empirical model is based on simple reduction functions, closely resembling the CO2 production function of SOILCO2. It is assumed in this function that overall CO2 production in the soil profile is the sum of soil and plant respiration, optimal values of which are affected by time, depth, water contents, temperatures, soil salinity, and CO2 concentrations in the soil profile. The effect of these environmental factors is introduced using various reduction functions that multiply the optimal soil CO2 production. While the SOILCO2 module assumes that CO2 is produced in the soil profile and then transported, depending mainly on water contents, toward the soil surface, the empirical model relates CO2 emissions directly to various environmental factors. It was shown that both the numerical model and the simple reduction functions could predict the CO2 fluxes across the soil surface reasonably well. Regression coefficients between measured CO2 emissions and those predicted by the numerical and simple empirical models are compared.
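The multiplicative reduction-function form described above can be sketched as follows; the particular reduction functions (a Q10 temperature response and a linear moisture limitation) are illustrative placeholders, not the actual SOILCO2 functions, which also include salinity, depth, time, and CO2-concentration terms:

```python
def co2_production(p_opt, temp, theta, f_t, f_w):
    """Sketch of the reduction-function model: actual CO2 production
    is the optimal production p_opt scaled by dimensionless reduction
    factors for temperature and soil water content."""
    return p_opt * f_t(temp) * f_w(theta)

# Hypothetical reduction functions, for illustration only:
q10 = lambda T, Tref=20.0: 2.0 ** ((T - Tref) / 10.0)  # Q10 temperature response
lin = lambda th, th_fc=0.30: min(th / th_fc, 1.0)      # linear moisture limitation
```

Each additional environmental control enters as one more multiplicative factor on the optimal production rate.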

  8. Smartphone-Based, Self-Administered Intervention System for Alcohol Use Disorders: Theory and Empirical Evidence Basis

    PubMed Central

    Dulin, Patrick L.; Gonzalez, Vivian M.; King, Diane K.; Giroux, Danielle; Bacon, Samantha

    2013-01-01

    Advances in mobile technology provide an opportunity to deliver in-the-moment interventions to individuals with alcohol use disorders, yet availability of effective “apps” that deliver evidence-based interventions is scarce. We developed an immediately available, portable, smartphone-based intervention system whose purpose is to provide stand-alone, self-administered assessment and intervention. In this paper, we describe how theory and empirical evidence, combined with smartphone functionality contributed to the construction of a user-friendly, engaging alcohol intervention. With translation in mind, we discuss how we selected appropriate intervention components including assessments, feedback and tools, that work together to produce the hypothesized outcomes. PMID:24347811

  9. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second

  10. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where considerable human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (the empirical approach) is based on a statistical modelisation of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
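The "dressing" step of the empirical approach can be sketched as follows. All names are illustrative assumptions: the sketch draws archived multiplicative errors (observed/simulated) from the sub-sample matching each member's streamflow class, whereas the operational method also stratifies by lead time:

```python
import numpy as np

def dress_forecast(raw_members, errors_by_class, classify):
    """Sketch of empirical ensemble dressing: each raw ensemble member
    is multiplied by every archived model error in the sub-sample for
    its streamflow class, producing an enlarged dressed ensemble."""
    dressed = []
    for q in raw_members:
        errs = errors_by_class[classify(q)]  # errors for this quantile class
        dressed.extend(q * errs)             # one dressed member per error
    return np.array(dressed)
```

The dressed ensemble thus carries both the meteorological spread (across raw members) and the hydrological model error (across archived errors).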

  11. Theoretical studies on CO2 capture behavior of quaternary ammonium-based polymeric ionic liquids.

    PubMed

    Wang, Tao; Ge, Kun; Chen, Kexian; Hou, Chenglong; Fang, Mengxiang

    2016-05-14

    Quaternary ammonium-based polymeric ionic liquids (PILs) are novel CO2 sorbents as they have high capacity, high stability and high binding energy. Moreover, the binding energy of ionic pairs to CO2 is tunable by changing the hydration state so that the sorbent can be regenerated through humidity adjustment. In this study, theoretical calculations were conducted to reveal the mechanism of the humidity swing CO2 adsorption, based on model compounds of quaternary ammonium cation and carbonate anions. The electrostatic potential map demonstrates the anion, rather than the cation, is chemically preferential for CO2 adsorption. Further, the proton transfer process from water to carbonate at the sorbent interface is successfully depicted with an intermediate which has a higher energy state. By determining the CO2 adsorption energy and activation energy at different hydration states, it is discovered that water could promote CO2 adsorption by reducing the energy barrier of proton transfer. The adsorption/desorption equilibrium would shift to desorption by adding water, which constitutes the theoretical basis for humidity swing. By analyzing the hydrogen bonding and structure of the water molecules, it is interesting to find that the CO2 adsorption weakens the hydrophilicity of the sorbent and results in release of water. The requirement of latent heat for the phase change of water could significantly reduce the heat of adsorption. The special "self-cooling" effect during gas adsorption can lower the temperature of the sorbent and benefit the adsorption isotherms. PMID:27115032

  12. Theoretical model-based quantitative optimisation of numerical modelling for eddy current NDT

    NASA Astrophysics Data System (ADS)

    Yu, Yating; Li, Xinhua; Simm, Anthony; Tian, Guiyun

    2011-06-01

    Eddy current (EC) nondestructive testing (NDT) is one of the most widely used NDT methods. Numerical modelling of NDT methods has been used as an important investigative approach alongside experimental and theoretical studies. This paper investigates the set-up of numerical modelling using the finite-element method in terms of the optimal selection of element mesh size in different regions within the model, based on theoretical analysis of EC NDT. The modelling set-up is refined and evaluated through numerical simulation, balancing both computation time and simulation accuracy. A case study in the optimisation of the modelling set-up of an EC NDT system with a cylindrical probe coil is carried out to verify the proposed optimisation approach. Here, the mesh size of the simulation model is set based on the geometries of the coil and the magnetic sensor, as well as on the skin depth in the sample, so the optimised modelling set-up remains useful even when the geometry of the EC system, the excitation frequency or the pulse width is changed in multi-frequency EC, sweep-frequency EC or pulsed EC. Furthermore, this optimisation approach can be used to improve the trade-off between accuracy and computation time in other more complex EC NDT simulations.

  13. Experimental and theoretical performance analysis for a CMOS-based high resolution image detector

    NASA Astrophysics Data System (ADS)

    Jain, Amit; Bednarek, Daniel R.; Rudin, Stephen

    2014-03-01

    Increasing complexity of endovascular interventional procedures requires superior x-ray imaging quality. Present state-of-the-art x-ray imaging detectors may not be adequate due to their inherent noise and resolution limitations. With recent developments, CMOS based detectors are presenting an option to fulfill the need for better image quality. For this work, a new CMOS detector has been analyzed experimentally and theoretically in terms of sensitivity, MTF and DQE. The detector (Dexela Model 1207, Perkin-Elmer Co., London, UK) features 14-bit image acquisition, a CsI phosphor, 75 μm pixels and an active area of 12 cm x 7 cm with over 30 fps frame rate. This detector has two modes of operation with two different full-well capacities: high and low sensitivity. The sensitivity and instrumentation noise equivalent exposure (INEE) were calculated for both modes. The detector modulation-transfer function (MTF), noise-power spectra (NPS) and detective quantum efficiency (DQE) were measured using an RQA5 spectrum. For the theoretical performance evaluation, a linear cascade model with an added aliasing stage was used. The detector showed excellent linearity in both modes. The sensitivity and the INEE of the detector were found to be 31.55 DN/μR and 0.55 μR in high sensitivity mode, while they were 9.87 DN/μR and 2.77 μR in low sensitivity mode. The theoretical and experimental values for the MTF and DQE showed close agreement with good DQE even at fluoroscopic exposure levels. In summary, the Dexela detector's imaging performance in terms of sensitivity, linear system metrics, and INEE demonstrates that it can overcome the noise and resolution limitations of present state-of-the-art x-ray detectors.

  14. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons

    PubMed Central

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D.

    2016-01-01

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
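The classical Monte Carlo p-value evaluation (method 1 above) is a standard technique and can be sketched in a few lines; the +1 correction keeps the estimated p-value away from exactly zero:

```python
import numpy as np

def mc_pvalue(t_obs, null_stats):
    """Classical Monte Carlo p-value: the proportion of test
    statistics simulated under the null hypothesis that are at
    least as extreme as the observed statistic, with the usual
    +1 correction in numerator and denominator."""
    null_stats = np.asarray(null_stats)
    return (1 + np.sum(null_stats >= t_obs)) / (1 + len(null_stats))
```

The hybrid method described above then treats tabulated critical values as prior information, using such Monte Carlo draws as data to refine the p-value estimate.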

  15. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies of DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system by implementing a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an openHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
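
    The abstract does not list the NETEM box settings; purely as a hypothetical sketch of how such impairment emulation is typically configured on Linux with `tc`, one might shape delay, loss, and bandwidth like this (interface name and all values are invented, not the paper's test-bed parameters):

```shell
# Add delay jitter and random loss on the emulation interface (assumed eth0)
tc qdisc add dev eth0 root handle 1: netem delay 100ms 10ms loss 1%
# Chain a token-bucket filter to cap the available bandwidth
tc qdisc add dev eth0 parent 1: handle 2: tbf rate 4mbit burst 32kbit latency 400ms
```

    Varying the `delay`, `loss`, and `rate` values across runs reproduces the kind of network-condition sweep the evaluation describes.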

  16. Comparisons of ground motions from the 1999 Chi-Chi earthquake with empirical predictions largely based on data from California

    USGS Publications Warehouse

    Boore, D.M.

    2001-01-01

    This article has the modest goal of comparing the ground motions recorded during the 1999 Chi-Chi, Taiwan, mainshock with predictions from four empirically based equations commonly used for western North America; these empirical predictions are largely based on data from California. Comparisons are made for peak acceleration and 5%-damped response spectra at periods between 0.1 and 4 sec. The general finding is that the Chi-Chi ground motions are smaller than those predicted from the empirically based equations for periods less than about 1 sec by factors averaging about 0.4 but as small as 0.26 (depending on period, on which equation is used, and on whether the sites are assumed to be rock or soil). There is a trend for the observed motions to approach or even exceed the predicted motions for longer periods. Motions at similar distances (30-60 km) to the east and to the west of the fault differ dramatically at periods between about 2 and 20 sec: Long-duration wave trains are present on the motions to the west, and when normalized to similar amplitudes at short periods, the response spectra of the motions at the western stations are as much as five times larger than those of motions from eastern stations. The explanation for the difference is probably related to site and propagation effects; the western stations are on the Coastal Plain, whereas the eastern stations are at the foot of young and steep mountains, either in the relatively narrow Longitudinal Valley or along the eastern coast; the sediments underlying the eastern stations are probably shallower and have higher velocity than those under the western stations.

  17. Empirical rainfall thresholds and copula based IDF curves for shallow landslides and flash floods

    NASA Astrophysics Data System (ADS)

    Bezak, Nejc; Šraj, Mojca; Brilly, Mitja; Mikoš, Matjaž

    2015-04-01

    Large mass movements, like deep-seated landslides or large debris flows, and flash floods can endanger human lives and cause huge environmental and economic damage in hazard areas. The main objective of the study was to investigate the characteristics of selected extreme rainfall events that triggered landslides and caused flash floods in Slovenia in the last 25 years. Seven extreme events, which occurred in Slovenia (Europe) in the last 25 years (1990-2014) and caused 17 casualties and about 500 million Euros of economic loss, were analysed in this study. Post-event analyses showed that the rainfall characteristics triggering flash floods and landslides are different: landslides were triggered by longer-duration rainfall events (up to one or a few weeks), whereas flash floods were triggered by short-duration events (a few hours to one or two days). The sensitivity analysis results indicate that the inter-event time variable, defined as the minimum duration of the period without rain between two consecutive rainfall events, and the sample definition methodology can have a significant influence on the position of rainfall events in the intensity-duration space, on the constructed intensity-duration-frequency (IDF) curves and on the relationship between the empirical rainfall threshold curves and the IDF curves constructed using the copula approach. The empirical rainfall threshold curves (ID curves) were also evaluated for the selected extreme events. The results indicate that a combination of several empirical rainfall thresholds with an appropriately high density of the rainfall measuring network can be used as part of an early warning system for the initiation of landslides and debris flows. However, different rainfall threshold curves should be used for lowland and mountainous areas in Slovenia. Furthermore, the intensity-duration-frequency (IDF) relationship was constructed using the Frank copula functions for 16 pluviographic meteorological stations in Slovenia using the high resolution
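
    The Frank copula used for the IDF construction has a closed form, C(u, v; θ) = -(1/θ) ln[1 + (e^(-θu) - 1)(e^(-θv) - 1)/(e^(-θ) - 1)]. The sketch below evaluates it; the parameter value and checks are illustrative, not fitted to the Slovenian stations:

```python
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta), the family used to couple
    rainfall intensity and duration margins when building IDF curves."""
    if theta == 0:                         # independence limit
        return u * v
    num = math.expm1(-theta * u) * math.expm1(-theta * v)
    return -1.0 / theta * math.log1p(num / math.expm1(-theta))

# Boundary behavior: C(u, 1) = u and C(u, 0) = 0 for any theta;
# positive theta induces positive dependence, so C(u, v) > u * v.
c_dep = frank_copula(0.5, 0.5, 5.0)
```

    Fitting θ to ranked intensity-duration pairs (e.g. by inverting Kendall's tau) and inverting C for fixed return periods is how copula-based IDF curves are typically obtained.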

  18. EEG-fMRI Based Information Theoretic Characterization of the Human Perceptual Decision System

    PubMed Central

    Ostwald, Dirk; Porcaro, Camillo; Mayhew, Stephen D.; Bagshaw, Andrew P.

    2012-01-01

    The modern metaphor of the brain is that of a dynamic information processing device. In the current study we investigate how a core cognitive network of the human brain, the perceptual decision system, can be characterized regarding its spatiotemporal representation of task-relevant information. We capitalize on a recently developed information theoretic framework for the analysis of simultaneously acquired electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data (Ostwald et al. (2010), NeuroImage 49: 498–516). We show how this framework naturally extends from previous validations in the sensory to the cognitive domain and how it enables the economical description of neural spatiotemporal information encoding. Specifically, based on simultaneous EEG-fMRI data features from n = 13 observers performing a visual perceptual decision task, we demonstrate how the information theoretic framework is able to reproduce earlier findings on the neurobiological underpinnings of perceptual decisions from the response signal features' marginal distributions. Furthermore, using the joint EEG-fMRI feature distribution, we provide novel evidence for a highly distributed and dynamic encoding of task-relevant information in the human brain. PMID:22485152

  19. Inclusion of persistence length-based secondary structure in replica field theoretic models of heteropolymer freezing

    NASA Astrophysics Data System (ADS)

    Weber, Jeffrey K.; Pande, Vijay S.

    2013-09-01

    The protein folding problem has long represented a "holy grail" in statistical physics due to its physical complexity and its relevance to many human diseases. While past theoretical work has yielded apt descriptions of protein folding landscapes, recent large-scale simulations have provided insights into protein folding that were impractical to obtain from early theories. In particular, the role that non-native contacts play in protein folding, and their relation to the existence of misfolded, β-sheet rich trap states on folding landscapes, has emerged as a topic of interest in the field. In this paper, we present a modified model of heteropolymer freezing that includes explicit secondary structural characteristics which allow observations of "intramolecular amyloid" states to be probed from a theoretical perspective. We introduce a variable persistence length-based energy penalty to a model Hamiltonian, and we illustrate how this modification alters the phase transitions present in the theory. We find, in particular, that inclusion of this variable persistence length increases both generic freezing and folding temperatures in the model, allowing both folding and glass transitions to occur in a more highly optimized fashion. We go on to discuss how these changes might relate to protein evolution, misfolding, and the emergence of intramolecular amyloid states.

  20. A theoretical and experimental evaluation of imidazolium-based ionic liquids for atmospheric mercury capture.

    PubMed

    Iuga, Cristina; Solís, Corina; Alvarez-Idaboy, J Raúl; Martínez, Miguel Angel; Mondragón, Ma Antonieta; Vivier-Bunge, Annik

    2014-05-01

    In this work, the capacity of three different imidazolium-based ionic liquids (ILs) for atmospheric mercury capture has been evaluated. Theoretical calculations using monomer and dimer models of ILs showed that [BMIM]⁺[SCN]⁻ and [BMIM]⁺[Cl]⁻ ionic liquids capture gaseous Hg⁰, while [BMIM]⁺[PF₆]⁻ shows no ability for this purpose. These findings are supported by experimental data obtained using particle induced X-ray emission (PIXE) trace element analysis. Experimental and theoretical infrared data of the ILs were obtained before and after exposure to Hg. In all cases, no displacement of the bands was observed, indicating that the interaction does not significantly affect the force constants of substrate bonds. This suggests that van der Waals forces are the main forces responsible for mercury capture. Since the anion-absorbate is the driving force of the interaction, the largest charge-volume ratio of [Cl]⁻ could explain the higher affinity for mercury sequestration of the [BMIM]⁺[Cl]⁻ salt. PMID:24781855

  1. Information theoretic discrepancy-based iterative reconstruction (IDIR) algorithm for limited angle tomography

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Lee, Jongha; Lee, Kangui; Sung, Younghun; Lee, SeungDeok

    2012-03-01

    X-ray tomosynthesis, which measures several low-dose projections over a limited angular range, has been investigated as an alternative to X-ray mammography for breast cancer screening. An extension of the scan coverage increases the vertical resolution by mitigating the interplane blurring. The implementation of wide-angle tomosynthesis equipment, however, may not be straightforward, mainly due to image deterioration from statistical noise in the exterior projections. In this paper, we adopt a voltage modulation scheme to enlarge the coverage of the tomosynthesis scan. Higher tube voltages are used for outer angles, which offers sufficient penetrating power for outlying frames in which the pathway of X-ray photons is elongated. To reconstruct 3D information from voltage-modulated projections, we propose a novel algorithm, named the information theoretic discrepancy based iterative reconstruction (IDIR) algorithm, which accounts for the polychromatic acquisition model. The generalized information theoretic discrepancy (GID) is newly employed as the objective function. Using particular features of the GID, the cost function is derived in terms of imaginary variables with energy dependency, which leads to a tractable optimization problem without using the monochromatic approximation. In preliminary experiments using simulated data and experimental equipment, the proposed imaging architecture and IDIR algorithm showed superior performance over conventional approaches.
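
    The GID objective itself is not spelled out in the abstract; as a hedged stand-in for the discrepancy-based iterative family it generalizes, here is the classical KL-divergence-minimizing multiplicative update (MLEM-style) on a toy monochromatic linear system, with the system matrix and counts invented:

```python
import numpy as np

def mlem(A, y, n_iter=300):
    """Multiplicative KL-minimizing update x <- x * A^T(y/Ax) / A^T(1),
    the classical member of the discrepancy-based iterative family.
    This is NOT the IDIR/GID objective, which additionally models the
    polychromatic acquisition; it only illustrates the update pattern."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # column sums (sensitivity)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured vs. modeled projections
        x *= (A.T @ ratio) / sens
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])  # toy projection matrix
x_true = np.array([2.0, 3.0])
y = A @ x_true                                      # noiseless projections
x_hat = mlem(A, y)
```

    On consistent noiseless data the iterates converge to the true object; the GID replaces the KL term with a generalized discrepancy that remains tractable under energy-dependent (voltage-modulated) forward models.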

  2. Theoretical model of adaptive fiber tip positioner based on flexible hinges and levers

    NASA Astrophysics Data System (ADS)

    Zhi, Dong; Ma, Yan-xing; Wang, Xiao-lin; Zhou, Pu; Si, Lei

    2015-10-01

    In this manuscript, we establish a model and theoretically investigate a novel adaptive fiber tip positioner (AFTP) structure of our own design. We analyze each sub-structure of the new type of AFTP and first use ANSYS software to simulate the deformation of the flexible hinge under an external force. The result shows that the deformation of the flexible hinge comes mainly from, and is almost linear with respect to, its middle part. Further, after considering the influence of the levers and piezoelectric actuators, we set up a theoretical model in which the displacement depends only on the lever ratio R. With the optimal value of R, we can obtain the largest displacement of the end cap when the other parameters are fixed. As the maximal voltage applied to the piezoelectric stack actuators (PSA) is finite, the largest displacement of the end cap is restricted. Neglecting the influence of the effective friction force (Ff) of the inner system, the relationship between the largest displacement of the end cap and the ratio R is derived numerically. From the calculated results, we find the largest displacement to be about 67 μm for R = 6.9. This work provides a reference for the structure optimization of AFTPs based on flexible hinges and levers.

  3. Metamaterial-based theoretical description of light scattering by metallic nano-hole array structures

    SciTech Connect

    Singh, Mahi R.; Najiminaini, Mohamadreza; Carson, Jeffrey J. L.; Balakrishnan, Shankar

    2015-05-14

    We have experimentally and theoretically investigated the light-matter interaction in metallic nano-hole array structures. The scattering cross section spectrum was measured for three samples each having a unique nano-hole array radius and periodicity. Each measured spectrum had several peaks due to surface plasmon polaritons. The dispersion relation and the effective dielectric constant of the structure were calculated using transmission line theory and Bloch's theorem. Using the effective dielectric constant and the transfer matrix method, the surface plasmon polariton energies were calculated and found to be quantized. Using these quantized energies, a Hamiltonian for the surface plasmon polaritons was written in the second quantized form. Working with the Hamiltonian, a theory of scattering cross section was developed based on the quantum scattering theory and Green's function method. For both theory and experiment, the location of the surface plasmon polariton spectral peaks was dependent on the array periodicity and radii of the nano-holes. Good agreement was observed between the experimental and theoretical results. It is proposed that the newly developed theory can be used to facilitate optimization of nanosensors for medical and engineering applications.

  4. Investigation of an empirical probability measure based test for multivariate normality

    SciTech Connect

    Booker, J.M.; Johnson, M.E.; Beckman, R.J.

    1984-01-01

    Foutz (1980) derived a goodness-of-fit test for a hypothesis specifying a continuous, p-variate distribution. The test statistic is both distribution-free and independent of p. In adapting the Foutz test for multivariate normality, we consider using χ² and rescaled beta variates in constructing statistically equivalent blocks. The Foutz test is compared to other multivariate normality tests developed by Hawkins (1981) and Malkovich and Afifi (1973). The set of alternative distributions tested includes Pearson type II and type VII, Johnson translations, Plackett, and distributions arising from Khintchine's theorem. Univariate alternatives from the general class developed by Johnson et al. (1980) were also used. An empirical study confirms that the test statistic is independent of p even when parameters are estimated. In general, the Foutz test is less conservative under the null hypothesis but has poorer power under most alternatives than the other tests.

  5. Cyclone optimization based on a new empirical model for pressure drop

    SciTech Connect

    Ramachandran, G.; Leith, D.; Dirgo, J.; Feldman, H.

    1991-01-01

    An empirical model for predicting pressure drop across a cyclone, developed by Dirgo, is presented. The model was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of Shepherd and Lapple, Alexander, First, Stairmand, and Barth. This model is used with the efficiency model of Iozia and Leith to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d₅₀. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization is determined. The optimization results are used to develop a design procedure for optimized cyclones.

  6. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
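
    The mutual-information similarity measure at the core of such algorithms can be sketched with a simple histogram estimator; the image sizes, bin count, and random test images below are illustrative assumptions, not the paper's data (where the inputs would be IMF-reconstructed face images):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram (plug-in) estimate of the mutual information between
    the intensity distributions of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = rng.random((64, 64))
# An image shares maximal information with itself, little with an
# unrelated image; pose estimation picks the reference pose whose
# image maximizes this score against the input.
```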

  7. Inferring causal molecular networks: empirical assessment through a community-based effort.

    PubMed

    Hill, Steven M; Heiser, Laura M; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K; Carlin, Daniel E; Zhang, Yang; Sokolov, Artem; Paull, Evan O; Wong, Chris K; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V; Favorov, Alexander V; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W; Long, Byron L; Noren, David P; Bisberg, Alexander J; Mills, Gordon B; Gray, Joe W; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A; Fertig, Elana J; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M; Spellman, Paul T; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-04-01

    It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense. PMID:26901648

  8. Measuring Modernism: Theoretical and Empirical Explorations

    ERIC Educational Resources Information Center

    Schnaiberg, Allan

    1970-01-01

    Using data from married Turkish women in Ankara city and four villages, it appears that each of the (6) measures of modernism represents a distinct behavioral sphere. A common denominator appears to lie in an "emancipation" complex. (Author)

  9. "vocd": A Theoretical and Empirical Evaluation

    ERIC Educational Resources Information Center

    McCarthy, Philip M.; Jarvis, Scott

    2007-01-01

    A reliable index of lexical diversity (LD) has remained stubbornly elusive for over 60 years. Meanwhile, researchers in fields as varied as "stylistics," "neuropathology," "language acquisition," and even "forensics" continue to use flawed LD indices--often ignorant that their results are questionable and in some cases potentially dangerous.…

  10. [Theoretical and empirical foundations of transpersonal psychology].

    PubMed

    Grof, S

    1994-04-01

    In this lecture, the new insights and strategies that transpersonal psychology offers will be discussed in relation to the global crisis. Western academic psychiatry, psychology, and psychotherapy are ethnocentric; they tend to see their point of view as being superior to the perspectives of all other cultural groups. They are also pragmacentric in that they take into consideration only experiences and observations made in the ordinary state of consciousness (with the exception of dreams). Such an approach makes no distinction between mysticism and psychosis and pathologizes the spiritual and healing practices of ancient and aboriginal cultures. In this lecture, the results of serious study of the entire spectrum of human experience will be discussed, including non-ordinary states of consciousness. Such research logically leads to transpersonal psychology, a system that includes and honors the specific contributions of all cultures throughout the ages and sees spirituality as an essential dimension of the human psyche and existence. PMID:8004686

  11. Teacher Authenticity: A Theoretical and Empirical Investigation

    ERIC Educational Resources Information Center

    Akoury, Paul N.

    2013-01-01

    This study builds on a small, under-acknowledged body of educational works that speak to the problem of an overly technical focus on teaching, which negates a more authentic consideration of what it means to teach, including an exploration of the spiritual and moral dimensions. A need for educational change and the teacher's authentic way of…

  12. An Empirical Framework for Implementing Lifelong Learning Systems.

    ERIC Educational Resources Information Center

    Law, Song Seng; Low, Sock Hwee

    Based on a literature review of factors that affect the provision of learning opportunities for adults and the experiences of Singapore's Institute of Technical Education (ITE), this paper proposes an empirical framework for developing and implementing lifelong learning systems. Following an introduction, the theoretical foundation for the…

  13. Scholastic Effort: An Empirical Test of Student Choice Models.

    ERIC Educational Resources Information Center

    Prince, Raymond; And Others

    1981-01-01

    This article presents a report on student effort in economics courses based on time and the efficiency of its use. Information is presented on explanatory variables in each of the several learning models tested, theoretical basis for recent empirical student learning, definitions of student input, and the student-choice model proposed by Richard…

  14. Gold price analysis based on ensemble empirical mode decomposition and independent component analysis

    NASA Astrophysics Data System (ADS)

    Xian, Lu; He, Kaijian; Lai, Kin Keung

    2016-07-01

    In recent years, the increasing volatility of the gold price has received growing attention from academia and industry alike. Due to the complexity and significant fluctuations observed in the gold market, however, most current approaches have failed to produce robust and consistent modeling and forecasting results. Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA) are novel data analysis methods that can deal with nonlinear and non-stationary time series. This study introduces a new methodology which combines the two methods and applies it to gold price analysis. This involves three steps: firstly, the original gold price series is decomposed into several Intrinsic Mode Functions (IMFs) by EEMD. Secondly, the IMFs are further processed, with the unimportant ones re-grouped, and a new set of data called Virtual Intrinsic Mode Functions (VIMFs) is reconstructed. Finally, ICA is used to decompose the VIMFs into statistically Independent Components (ICs). The decomposition results reveal that the gold price series can be represented by a linear combination of the ICs. Furthermore, the economic meanings of the ICs are analyzed and discussed in detail, according to their change trends and transformation coefficients. The analyses not only explain the inner driving factors and their impacts but also examine in depth how these factors affect the gold price. At the same time, regression analysis has been conducted to verify the findings. Results from the empirical studies in the gold markets show that EEMD-ICA serves as an effective technique for gold price analysis from a new perspective.
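
    To make the first EEMD step concrete, a bare-bones sifting pass can be sketched as follows. Real EEMD adds noise-ensemble averaging and cubic-spline envelopes, so this linear-interpolation toy (all signal parameters invented) only illustrates how an IMF separates a fast mode from a slow trend:

```python
import numpy as np

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower extrema
    envelopes (linearly interpolated here for brevity; EMD proper uses
    cubic splines and repeats until IMF criteria are met)."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return x - (upper + lower) / 2.0

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 25 * t) + 0.5 * t   # fast mode plus slow trend
imf1 = signal.copy()
for _ in range(10):                              # a few sifting passes
    imf1 = sift_once(imf1)
```

    The extracted first IMF tracks the oscillatory component while the trend is left for later IMFs/residue; EEMD repeats this over many noise-perturbed copies and averages, and the study then feeds the (re-grouped) IMFs to ICA.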

  15. Theoretical study of the nontraditional enol-based photoacidity of firefly oxyluciferin.

    PubMed

    Pinto da Silva, Luís; Esteves da Silva, Joaquim C G

    2015-02-01

    A theoretical analysis of the enol-based photoacidity of oxyluciferin in water is presented. The basis for this phenomenon is found to be the hydrogen-bonding network that involves the conjugated photobase of oxyluciferin. The hydrogen-bonding network involving the enolate thiazole moiety is stronger than that of the benzothiazole phenolate moiety. Therefore, enolate oxyluciferin should be stabilized versus the phenolate anion. This difference in strength is attributed to the fact that the thiazole moiety has more potential hydrogen-bond acceptors near the proton donor atom than the benzothiazole moiety. Moreover, the phenol-based excited-state proton transfer leads to a decrease in the hydrogen-bond acceptor potential of the thiazole atoms. The ground-state enol-based acidity of oxyluciferin is also studied. This phenomenon can be explained by stabilization of the enolate anion through strengthening of a bond between water and the nitrogen atom of the thiazole ring, in an enol-based proton-transfer-dependent way. PMID:25404255

  16. Boron based two-dimensional crystals: theoretical design, realization proposal and applications

    NASA Astrophysics Data System (ADS)

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-11-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest for two-dimensional (2D) layered materials. Besides graphene, many other 2D materials have been successfully produced by experiment, such as silicene, monolayer MoS2, few-layer black phosphorus and so on. As a neighbor of carbon in the periodic table, element boron is interesting and many researchers have contributed their efforts to realize boron related 2D structures. These structures may be significant both in fundamental science and future technical applications in nanoelectronics and nanodevices. In this review, we summarize the recent developments of 2D boron based materials. The theoretical design, possible experimental realization strategies and their potential technical applications are presented and discussed. Also, the current challenges and prospects of this area are discussed.

  17. Theoretical modeling of a Localized Surface Plasmon Resonance (LSPR) based fiber optic temperature sensor

    NASA Astrophysics Data System (ADS)

    Algorri, J. F.; García-Cámara, B.; García-García, A.; Urruchi, V.; Sánchez-Pena, J. M.

    2014-05-01

    A localized surface plasmon resonance based fiber optic sensor for temperature sensing has been analyzed theoretically. The effects of the size of the spherical metal nanoparticle on the performance of the sensor have been studied in detail. The high sensitivity of localized surface plasmon resonances to refractive index changes, combined with the high thermo-optic coefficients of liquid crystal materials, has resulted in a fiber optic sensor with high temperature sensitivity. This sensitivity has been demonstrated to be dependent on nanoparticle size. Maximum sensitivities of 4 nm/°C can be obtained for some specific temperature ranges. The proposed sensor will be low cost and will have all the typical advantages of fiber optic sensors.
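
    The sensing principle can be illustrated with the quasi-static (Fröhlich) dipole resonance condition Re ε(λ) = -2n² for a small sphere of a lossless Drude metal, ε(λ) = ε_inf - (λ/λ_p)². Note this toy model deliberately ignores the size dependence studied in the paper (which needs Mie-type corrections), and the plasma wavelength and refractive indices below are invented, not the paper's liquid crystal data:

```python
import math

def lspr_wavelength(n_medium, lam_p=140.0, eps_inf=1.0):
    """Dipolar (Froehlich) resonance of a small Drude-metal sphere:
    eps(lam) = eps_inf - (lam/lam_p)^2 equals -2*n^2 at resonance,
    giving lam_res = lam_p * sqrt(eps_inf + 2*n^2)."""
    return lam_p * math.sqrt(eps_inf + 2.0 * n_medium**2)

# Thermo-optic sensing: heating shifts the liquid crystal index, which
# red-shifts the resonance peak (index values are assumed).
lam_lo = lspr_wavelength(1.50)
lam_hi = lspr_wavelength(1.52)
shift = lam_hi - lam_lo          # peak shift per 0.02 index step, in nm
```

    A few nm of shift per 0.01-0.02 index change is why a large thermo-optic coefficient translates into nm/°C-scale temperature sensitivity.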

  18. Boron based two-dimensional crystals: theoretical design, realization proposal and applications.

    PubMed

    Li, Xian-Bin; Xie, Sheng-Yi; Zheng, Hui; Tian, Wei Quan; Sun, Hong-Bo

    2015-12-01

    The successful realization of free-standing graphene and the various applications of its exotic properties have spurred tremendous research interest for two-dimensional (2D) layered materials. Besides graphene, many other 2D materials have been successfully produced by experiment, such as silicene, monolayer MoS2, few-layer black phosphorus and so on. As a neighbor of carbon in the periodic table, element boron is interesting and many researchers have contributed their efforts to realize boron related 2D structures. These structures may be significant both in fundamental science and future technical applications in nanoelectronics and nanodevices. In this review, we summarize the recent developments of 2D boron based materials. The theoretical design, possible experimental realization strategies and their potential technical applications are presented and discussed. Also, the current challenges and prospects of this area are discussed. PMID:26523799

  19. Theoretical analysis and design of a near-infrared broadband absorber based on EC model.

    PubMed

    Zhang, Qing; Bai, Lihua; Bai, Zhengyuan; Hu, Pidong; Liu, Chengpu

    2015-04-01

    We theoretically introduced a design paradigm and tool by extending circuit functionalities from the radio frequency to the near-infrared domain, and its first use, designing a broadband near-infrared (1.5 μm~3.5 μm) absorber, is successfully demonstrated. After extracting the equivalent circuit (EC) model of the absorber structure, the otherwise complicated frequency response can be evaluated easily using classic circuit formulas. The feasibility is confirmed by the consistency with rigorous FDTD calculations. The absorber is an array of truncated metal-dielectric multilayer composite pyramid unit structures, and the gradually modified square patch design makes the absorber insensitive to the incident angle and polarization of light. PMID:25968728

  20. Theoretical base and numerical tools for modeling transitions between continuous and disperse multiphase motions

    NASA Astrophysics Data System (ADS)

    Zhang, Duan; Ma, Xia; Giguere, Paul

    2009-11-01

    Transitions between continuous and disperse multiphase motions happen commonly in nature and in our daily life. The phenomena include the dissolving of sugar cubes in a cup, the formation of rain and hail, and the shattering of a piece of glass. The capability of numerically simulating these phenomena is important both to industrial applications and to the understanding of nature. Relative to other aspects of this topic, the theory for disperse multiphase flow is better developed, despite many important issues still to be resolved. The theory for continuous multiphase flow is still in its infancy. The study of transitions between continuous and disperse multiphase motion is at an even earlier stage of development. In this talk, we describe a possible theoretical framework based on probability and statistical theory and a useful numerical method for simulating these phenomena. Deficiencies in the theory and in the numerical method are also discussed.

  1. DNA bases assembled on the Au(110)/electrolyte interface: a combined experimental and theoretical study.

    PubMed

    Salvatore, Princia; Nazmutdinov, Renat R; Ulstrup, Jens; Zhang, Jingdong

    2015-02-19

    Among the low-index single-crystal gold surfaces, the Au(110) surface is the most active toward molecular adsorption and the one with the fewest electrochemical adsorption data reported. Cyclic voltammetry (CV), electrochemically controlled scanning tunneling microscopy (EC-STM), and density functional theory (DFT) calculations have been employed in the present study to address the adsorption of the four nucleobases adenine (A), cytosine (C), guanine (G), and thymine (T) on the Au(110)-electrode surface. Au(110) undergoes reconstruction to the (1 × 3) surface in an electrochemical environment, accompanied by a pair of strong voltammetry peaks in the double-layer region in acidic solutions. Adsorption of the DNA bases gives featureless voltammograms with lower double-layer capacitance, suggesting that all the bases are chemisorbed on the Au(110) surface. Further investigation of the surface structures of the adlayers of the four DNA bases by EC-STM disclosed lifting of the Au(110) reconstruction, specific molecular packing in dense monolayers, and pH dependence of the A and G adsorption. DFT computations based on a cluster model for the Au(110) surface were performed to investigate the adsorption energy and geometry of the DNA bases in different adsorbate orientations. The optimized geometry is further used to compute model STM images, which are compared with the recorded STM images. This has provided insight into the physical nature of the adsorption. The specific orientations of A, C, G, and T on Au(110) and the nature of the physical adsorbate/surface interaction are proposed based on the combination of the experimental and theoretical studies, and differences from nucleobase adsorption on Au(111)- and Au(100)-electrode surfaces are discussed. PMID:25611676

  2. Simulations of the Structure and Properties of Large Icosahedral Boron Clusters Based on a Novel Semi-Empirical Hamiltonian

    NASA Astrophysics Data System (ADS)

    Tandy, Paul; Yu, Ming; Jayanthi, C. S.; Wu, Shi-Yu; Condensed Matter Theory Group Team

    2013-03-01

    The successful development of a parameterized semi-empirical Hamiltonian (SCED-LCAO) for boron, based on an LCAO framework using an sp3 basis set, will be discussed. The semi-empirical Hamiltonian contains the environment-dependency and electron-screening effects of a many-body Hamiltonian and allows for charge self-consistency. We have optimized the parameters of the SCED-LCAO Hamiltonian for boron by fitting the properties (e.g., the binding energy, bond length, etc.) of boron sheets, small clusters and boron alpha to first-principles DFT calculations. Although the extended phases of boron alpha and beta have been studied, large icosahedral boron clusters such as those cut from boron alpha are difficult, if not impossible, to simulate with ab initio methods. We will demonstrate the effectiveness of the SCED-LCAO Hamiltonian in studying icosahedral boron clusters containing up to 800 atoms and will report on some novel boron clusters and on computational speed. Support has been provided by the Dillion Fellowship.

  3. Development of the Knowledge-based & Empirical Combined Scoring Algorithm (KECSA) to Score Protein-Ligand Interactions

    PubMed Central

    Zheng, Zheng

    2013-01-01

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the PDB. Forty-nine types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training set and the enthalpy/entropy treatment of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets to compare the reproducibility of KECSA with respect to two empirical scoring functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the RMSE. LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed which would make KECSA more sensitive to subtle changes in ligand structure. PMID:23560465
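
    The reference-state idea underlying such knowledge-based potentials can be illustrated with the classic inverse-Boltzmann construction. The snippet below is a generic sketch (toy histograms and an assumed room-temperature kT), not the actual KECSA parameterization:

```python
import math

KT = 0.593  # kcal/mol at ~298 K (assumed illustrative temperature)

def statistical_potential(g_obs, g_ref, kt=KT):
    """Inverse-Boltzmann energies E(r) = -kT ln(g_obs/g_ref), bin by bin."""
    energies = []
    for obs, ref in zip(g_obs, g_ref):
        if obs > 0 and ref > 0:
            energies.append(-kt * math.log(obs / ref))
        else:
            energies.append(None)  # no statistics available for this bin
    return energies

# Toy distance histograms: the over-represented second bin (a favored
# contact distance) maps to a negative, i.e. attractive, energy.
g_obs = [0.2, 1.8, 1.0, 0.9]
g_ref = [1.0, 1.0, 1.0, 1.0]
print(statistical_potential(g_obs, g_ref))
```

    In KECSA the resulting bin energies are further related to LJ parameters; the sketch stops at the statistical potential itself.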

  4. A Compound fault diagnosis for rolling bearings method based on blind source separation and ensemble empirical mode decomposition.

    PubMed

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confusion noise, which makes it difficult to separate weak fault signals through conventional methods such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition used individually. In order to improve the compound fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMF) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features more easily extracted and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound fault separation, which works not only for the outer-race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
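
    The cross-correlation selection step can be sketched in a few lines. The EEMD decomposition itself is assumed to come from a library (e.g. PyEMD), so the IMFs below are just illustrative lists, and the 0.3 threshold is an assumed value:

```python
# Sketch of the cross-correlation criterion for selecting IMFs after EEMD.

def corrcoef(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def select_imfs(signal, imfs, threshold=0.3):
    """Keep IMFs whose |correlation| with the raw signal exceeds the threshold."""
    return [imf for imf in imfs if abs(corrcoef(signal, imf)) > threshold]

# Toy example: the first IMF tracks the signal; the second is its quadrature
# (90-degree shifted) component, which is uncorrelated with it.
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
imfs = [
    [0.0, 0.9, 0.1, -1.1, 0.0, 1.0, -0.1, -0.9],  # correlated component
    [1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0],   # orthogonal component
]
selected = select_imfs(signal, imfs)
print(len(selected))  # -> 1 (only the correlated IMF survives)
```

    The surviving IMFs would then be stacked into the observation matrix fed to ICA.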

  5. The future scalability of pH-based genome sequencers: A theoretical perspective

    NASA Astrophysics Data System (ADS)

    Go, Jonghyun; Alam, Muhammad A.

    2013-10-01

    Sequencing of the human genome is an essential prerequisite for personalized medicine and early prognosis of various genetic diseases. The state-of-the-art, high-throughput genome sequencing technologies provide improved sequencing; however, their reliance on relatively expensive optical detection schemes has prevented widespread adoption of the technology in routine care. In contrast, the recently announced pH-based electronic genome sequencers achieve fast sequencing at low cost because of their compatibility with current microelectronics technology. While the progress in technology development has been rapid, the physics of the sequencing chips and the potential for future scaling (and therefore cost reduction) remain unexplored. In this article, we develop a theoretical framework and a scaling theory to explain the principle of operation of pH-based sequencing chips, and we use the framework to explore various perceived scaling limits of the technology related to signal-to-noise ratio, well-to-well crosstalk, and sequencing accuracy. We also address several limitations inherent to the key steps of pH-based genome sequencing, which are shared by many other sequencing platforms on the market but have so far lacked a proper explanation.
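
    As a rough illustration of why well scaling matters, a back-of-envelope model (generic reasoning, not the authors' framework; the coverage and buffer values are assumed) shows the pH signal shrinking roughly linearly with the well dimension, since released protons scale with the template-bearing area while the buffered volume scales with the cube of the well size:

```python
# Hedged toy model of pH-sensor well scaling.
N_A = 6.022e23  # Avogadro's number

def delta_ph(well_nm, coverage_per_nm2=0.1, buffer_molar=1e-3):
    """Rough pH change in a cubic well of side `well_nm` nanometers."""
    area_nm2 = well_nm ** 2                  # template-bearing bottom face
    protons = coverage_per_nm2 * area_nm2    # ~one H+ per incorporated base
    volume_l = (well_nm * 1e-9) ** 3 * 1e3   # m^3 -> liters
    d_conc = protons / N_A / volume_l        # mol/L of released H+
    return d_conc / buffer_molar             # crude linearized buffered response

for side in (1000.0, 500.0, 100.0):
    print(f"{side:6.0f} nm well -> dpH ~ {delta_ph(side):.2e}")
```

    Under this toy scaling, halving the well side halves the signal, which is one way to see why signal-to-noise ratio becomes the scaling bottleneck.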

  6. Theoretical investigation of all-metal-based mushroom plasmonic metamaterial absorbers at infrared wavelengths

    NASA Astrophysics Data System (ADS)

    Ogawa, Shinpei; Fujisawa, Daisuke; Kimata, Masafumi

    2015-12-01

    High-performance wavelength-selective infrared (IR) sensors require small pixel structures, a low-thermal mass, and operation in the middle-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) regions for multicolor IR imaging. All-metal-based mushroom plasmonic metamaterial absorbers (MPMAs) were investigated theoretically and were designed to enhance the performance of wavelength-selective uncooled IR sensors. All components of the MPMAs are based on thin layers of metals such as Au without oxide insulators for increased absorption. The absorption properties of the MPMAs were investigated by rigorous coupled-wave analysis. Strong wavelength-selective absorption is realized over a wide range of MWIR and LWIR wavelengths by the plasmonic resonance of the micropatch and the narrow-gap resonance, without disturbance from the intrinsic absorption of oxide insulators. The absorption wavelength is defined mainly by the micropatch size and is longer than its period. The metal post width has less impact on the absorption properties and can maintain single-mode operation. Through-holes can be formed on the plate area to reduce the thermal mass. A small pixel size with reduced thermal mass and wideband single-mode operation can be realized using all-metal-based MPMAs.

  7. Theoretical study of carbon-based tips for scanning tunnelling microscopy.

    PubMed

    González, C; Abad, E; Dappe, Y J; Cuevas, J C

    2016-03-11

    Motivated by recent experiments, we present here a detailed theoretical analysis of the use of carbon-based conductive tips in scanning tunnelling microscopy. In particular, we employ ab initio methods based on density functional theory to explore a graphitic, an amorphous carbon and two diamond-like tips for imaging with a scanning tunnelling microscope (STM), and we compare them with standard metallic tips made of gold and tungsten. We investigate the performance of these tips in terms of the corrugation of the STM images acquired when scanning a single graphene sheet. Moreover, we analyse the impact of the tip-sample distance and show that it plays a fundamental role in the resolution and symmetry of the STM images. We also explore in depth how the adsorption of single atoms and molecules in the tip apexes modifies the STM images and demonstrate that, in general, it leads to an improved image resolution. The ensemble of our results provides strong evidence that carbon-based tips can significantly improve the resolution of STM images, as compared to more standard metallic tips, which may open a new line of research in scanning tunnelling microscopy. PMID:26861537

  8. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries

    PubMed Central

    Dash, Ranjan; Pannala, Sreekanth

    2016-01-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si-anode-based LIBs (Si-LIBs) have been reported experimentally in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite-based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds of Si composition in a Si–carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB by constraining the external dimensions of the anode during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The level of improvement in the volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs. PMID:27311811
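
    The porosity argument can be illustrated with a back-of-envelope calculation. The material constants below are textbook values (e.g. roughly 280% Si volume expansion at full lithiation), assumed for illustration and not the parameters used in the paper:

```python
# In a fixed-envelope electrode, the initial porosity must absorb the
# lithiation swelling of the silicon fraction.
CAP_SI, CAP_C = 3579.0, 372.0   # mAh/g (Li3.75Si, LiC6)
RHO_SI, RHO_C = 2.33, 2.26      # g/cm^3
SWELL_SI, SWELL_C = 2.8, 0.1    # fractional volume increase on lithiation

def anode_metrics(w_si):
    """Gravimetric capacity and required porosity for a Si mass fraction w_si."""
    cap = w_si * CAP_SI + (1 - w_si) * CAP_C
    # Solid volume per gram of composite, before and after lithiation.
    v0 = w_si / RHO_SI + (1 - w_si) / RHO_C
    v1 = (w_si / RHO_SI) * (1 + SWELL_SI) + ((1 - w_si) / RHO_C) * (1 + SWELL_C)
    porosity = 1.0 - v0 / v1    # pore fraction needed to keep volume fixed
    return cap, porosity

for w_si in (0.0, 0.1, 0.3):
    cap, por = anode_metrics(w_si)
    print(f"Si fraction {w_si:.1f}: {cap:6.0f} mAh/g, porosity needed {por:.2f}")
```

    The trade-off is visible immediately: the extra gravimetric capacity from Si is partly paid for by the extra pore volume the electrode must carry.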

  9. Theoretical investigation of acoustic wave devices based on different piezoelectric films deposited on silicon carbide

    NASA Astrophysics Data System (ADS)

    Fan, Li; Zhang, Shu-yi; Ge, Huan; Zhang, Hui

    2013-07-01

    Performances of acoustic wave (AW) devices based on silicon carbide (SiC) substrates are theoretically studied, in which two types of piezoelectric films, ZnO and AlN, deposited on 4H-SiC and 3C-SiC substrates are adopted. The phase velocities (PV), electromechanical coupling coefficients (ECC), and temperature coefficients of frequency (TCF) for three AW modes often used in AW devices (the Rayleigh wave, and the A0 and S0 modes of the Lamb wave) are calculated for four configurations of interdigital transducers (IDTs). It is found that the ZnO piezoelectric film is suitable for AW devices operating in the low-frequency range, because a high ECC can be realized using a thin ZnO film. The AlN piezoelectric film is suitable for devices operating in the high-frequency range by virtue of the high PV of AlN, which allows a larger finger width of the IDT. Generally, in low-frequency Lamb wave devices using ZnO piezoelectric films with small normalized film thicknesses hf/λ (film thickness to wavelength), thin SiC substrates can increase the ECC but simultaneously induce high TCFs. In high-frequency devices with large hf/λ, the S0 Lamb wave mode based on an AlN piezoelectric film deposited on a thick SiC substrate exhibits high performance when the PV, ECC, and TCF are considered simultaneously.

  10. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries.

    PubMed

    Dash, Ranjan; Pannala, Sreekanth

    2016-01-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si-anode-based LIBs (Si-LIBs) have been reported experimentally in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite-based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds of Si composition in a Si-carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB by constraining the external dimensions of the anode during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The level of improvement in the volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs. PMID:27311811

  11. Theoretical Limits of Energy Density in Silicon-Carbon Composite Anode Based Lithium Ion Batteries

    NASA Astrophysics Data System (ADS)

    Dash, Ranjan; Pannala, Sreekanth

    2016-06-01

    Silicon (Si) is under consideration as a potential next-generation anode material for the lithium ion battery (LIB). Increases of up to 40% in the energy density of Si-anode-based LIBs (Si-LIBs) have been reported experimentally in the literature. However, this increase in energy density is achieved only when the Si-LIB is allowed to swell (volumetrically expand) more than a graphite-based LIB (graphite-LIB), beyond practical limits. The volume expansion of LIB electrodes should be negligible for applications such as automotive or mobile devices. We determine the theoretical bounds of Si composition in a Si–carbon composite (SCC) based anode that maximize the volumetric energy density of a LIB by constraining the external dimensions of the anode during charging. The porosity of the SCC anode is adjusted to accommodate the volume expansion during lithiation. The calculated threshold value of Si was then used to determine the possible volumetric energy densities of LIBs with SCC anodes (SCC-LIBs) and the potential improvement over graphite-LIBs. The level of improvement in the volumetric and gravimetric energy density of volume-constrained SCC-LIBs is predicted to be less than 10% if the battery is to retain power characteristics similar to those of graphite-LIBs.

  12. Theoretical study of carbon-based tips for scanning tunnelling microscopy

    NASA Astrophysics Data System (ADS)

    González, C.; Abad, E.; Dappe, Y. J.; Cuevas, J. C.

    2016-03-01

    Motivated by recent experiments, we present here a detailed theoretical analysis of the use of carbon-based conductive tips in scanning tunnelling microscopy. In particular, we employ ab initio methods based on density functional theory to explore a graphitic, an amorphous carbon and two diamond-like tips for imaging with a scanning tunnelling microscope (STM), and we compare them with standard metallic tips made of gold and tungsten. We investigate the performance of these tips in terms of the corrugation of the STM images acquired when scanning a single graphene sheet. Moreover, we analyse the impact of the tip-sample distance and show that it plays a fundamental role in the resolution and symmetry of the STM images. We also explore in depth how the adsorption of single atoms and molecules in the tip apexes modifies the STM images and demonstrate that, in general, it leads to an improved image resolution. The ensemble of our results provides strong evidence that carbon-based tips can significantly improve the resolution of STM images, as compared to more standard metallic tips, which may open a new line of research in scanning tunnelling microscopy.

  13. Synthesis and characterization of three novel Schiff base compounds: Experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Taslı, P. T.; Bayrakdar, A.; Karakus, O. O.; Kart, H. H.; Koc, Y.

    2015-09-01

    In this study, three novel Schiff base compounds, N-(4-nitrobenzyl)-4-methyl bromoaniline (1a), N-(2,4-dimethoxybenzyl)-4-methyl bromoaniline (2a), and N-((1H-indol-3-yl)methylene)-4-methyl bromoaniline (3a), are synthesized and characterized by using the spectroscopic methods of UV, IR and 1H-NMR. The molecular geometry and spectroscopic properties of the synthesized compounds are also analyzed by using ab initio calculation methods based on density functional theory (DFT) in the ground state. Extensive theoretical and experimental FT-IR and UV-vis spectrometry studies of the synthesized compounds are performed. The optimized molecular structure and harmonic vibrational frequencies are studied by using the B3LYP/6-311++G(d,p) method. Moreover, electronic structures are investigated by using time-dependent density functional theory (TD-DFT), while the energy changes of the parent compounds are examined in a solvent medium by using the polarizable continuum model (PCM). Additionally, frontier molecular orbital analysis is performed for the Schiff base compounds. The electronic properties of each compound, such as chemical hardness, chemical softness, ionization potential, electron affinity, electronegativity and chemical potential, are investigated by utilizing the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies.

  14. Spectral analysis of Hall-effect thruster plasma oscillations based on the empirical mode decomposition

    SciTech Connect

    Kurzyna, J.; Mazouffre, S.; Lazurenko, A.; Albarede, L.; Bonhomme, G.; Makowski, K.; Dudeck, M.; Peradzynski, Z.

    2005-12-15

    Hall-effect thruster plasma oscillations recorded by means of probes located at the channel exit are analyzed using the empirical mode decomposition (EMD) method. This self-adaptive technique decomposes a nonstationary signal into a set of intrinsic modes and acts as a very efficient filter that separates the contributions of different underlying physical mechanisms. Applying the Hilbert transform to the whole set of modes allows peculiar events to be identified and assigned a range of instantaneous frequency and power. In addition to 25 kHz breathing-type oscillations, which are unambiguously identified, the EMD approach confirms the existence of oscillations with instantaneous frequencies in the range of 100-500 kHz, typical of ion transit-time oscillations. Modeling of the high-frequency modes (ν ≈ 10 MHz) resulting from EMD of the measured waveforms supports the idea that high-frequency plasma oscillations originate from electron-density perturbations propagating azimuthally with the electron drift velocity.
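
    The Hilbert step, extracting an instantaneous frequency from a single mode, can be sketched as follows. This is a generic FFT-based analytic-signal construction (not the authors' code), exercised on a synthetic 25 kHz tone in the breathing-mode range:

```python
import numpy as np

def analytic_signal(x):
    """Minimal FFT-based Hilbert transform: zero out negative frequencies."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Frequency in Hz from the unwrapped phase of the analytic signal."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

# A pure 25 kHz tone sampled at 1 MHz over an integer number of periods
# should map to a flat 25 kHz instantaneous-frequency trace.
fs = 1e6
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 25e3 * t)
f_inst = instantaneous_frequency(x, fs)
print(round(float(np.median(f_inst)) / 1e3, 1), "kHz")  # -> 25.0 kHz
```

    Applied mode by mode, the same phase derivative yields the instantaneous-frequency ranges quoted in the abstract.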

  15. Quantitative hydrogen analysis in minerals based on a semi-empirical approach

    NASA Astrophysics Data System (ADS)

    Kristiansson, P.; Borysiuk, M.; Ros, L.; Skogby, H.; Abdel, N.; Elfman, M.; Nilsson, E. J. C.; Pallon, J.

    2013-07-01

    Hydrogen normally occurs as hydroxyl ions related to defects at specific crystallographic sites in mineral structures and is normally characterized by infrared spectroscopy (FTIR). For quantification purposes the FTIR technique has proven to be less precise, since calibrations against independent methods are needed. Hydrogen analysis by the NMP technique can solve many of these problems owing to its low detection limit, high lateral resolution, insignificant matrix effects and ability to discriminate surface-adsorbed water. The technique has been shown to work both on thin samples and on thicker geological samples. To avoid disturbance from surface contamination, the hydrogen is analyzed inside semi-thick geological samples. The technique is an elastic recoil technique in which both the incident projectile (proton) and the recoiled hydrogen are detected in coincidence in a segmented detector. Both the traditional annular system, with the detector divided into two halves, and the new double-sided silicon strip detector (DSSSD) have been used. In this work we present an upgraded version of the technique, studying two sets of mineral standards combined with pre-sample charge normalization. To improve the processing time of the data, we suggest a very simple semi-empirical approach for the data evaluation. The advantages and drawbacks of the approach are discussed and a possible extension of the model is suggested.

  16. Inferring causal molecular networks: empirical assessment through a community-based effort

    PubMed Central

    Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach

    2016-01-01

    Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648

  17. An Empirical Model of Saturn's Current Sheet Based on Global MHD Modeling of Saturn's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Hansen, K. C.; Nickerson, J. S.; Gombosi, T. I.

    2014-12-01

    Cassini observations imply that during southern summer Saturn's magnetospheric current sheet is displaced northward above the rotational equator and should be similarly displaced southward during northern summer [C.S. Arridge et al., Warping of Saturn's magnetospheric and magnetotail current sheets, Journal of Geophysical Research, Vol. 113, August 2008]. Arridge et al. show that Cassini data from the noon, midnight and dawn local time sectors clearly indicate this bending and they present an azimuthally independent model to describe this bowl shaped geometry. We have used our global MHD model, BATS-R-US/SWMF, to study Saturn's magnetospheric current sheet under different solar wind dynamic pressures and solar zenith angle conditions. We find that under typical conditions the current sheet does bend upward and take on a basic shape similar to the Arridge model in the noon, midnight, and dawn sectors. However, the MHD model results show significant variations from the Arridge model including the degree of bending, variations away from a simple bowl shape, non-uniformity across local time sectors, drastic deviations in the dusk sector, and a dependence on the solar wind dynamic pressure. We will present a detailed description of our 3D MHD model results and the characteristics of the current sheet in the model. We will point out variations from the Arridge model. In addition, we will present a new empirical model of Saturn's current sheet that attempts to characterize the dependences on the local time sector and the solar wind dynamic pressure.
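
    For reference, the azimuthally independent bowl shape is often quoted in the form z_cs = [rho - R_H tanh(rho / R_H)] tan(theta_sun), with a hinging distance R_H and solar elevation angle theta_sun. The sketch below assumes this form with illustrative values (hinging distance of 29 Saturn radii, 17-degree tilt, positive z taken northward during southern summer), not fitted parameters:

```python
import math

def current_sheet_z(rho_rs, hinge_rs=29.0, theta_sun_deg=17.0):
    """Displacement of the current sheet (in Saturn radii) at distance rho.

    Near the planet the tanh term cancels the linear term (the sheet stays
    near the equator); far out it asymptotes to (rho - R_H) * tan(theta).
    """
    return (rho_rs - hinge_rs * math.tanh(rho_rs / hinge_rs)) * math.tan(
        math.radians(theta_sun_deg)
    )

for rho in (5.0, 20.0, 50.0):
    print(f"rho = {rho:4.1f} R_S -> z_cs = {current_sheet_z(rho):+6.2f} R_S")
```

    The MHD results described above depart from this single-parameter bowl by adding local-time and solar-wind-pressure dependence.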

  18. Consequences of asymmetric competition between resident and invasive defoliators: a novel empirically based modelling approach.

    PubMed

    Ammunét, Tea; Klemola, Tero; Parvinen, Kalle

    2014-03-01

    Invasive species can have profound effects on a resident community via indirect interactions among community members. While long periodic cycles in population dynamics can make experimental observation of the indirect effects difficult, modelling the possible effects on an evolutionary time scale may provide much-needed information on the potential threats the invasive species poses to the ecosystem. Using empirical data from a recent invasion in northernmost Fennoscandia, we applied adaptive dynamics theory and modelled the long-term consequences of the winter moth invasion for the resident community. Specifically, we investigated the outcome of the observed short-term asymmetric preferences of generalist predators and specialist parasitoids on the long-term population dynamics of the invasive winter moth and the resident autumnal moth sharing these natural enemies. Our results indicate that coexistence after the invasion is possible. However, the outcome of the indirect interaction for the population dynamics of the moth species was variable, and the dynamics might not be persistent on an evolutionary time scale. In addition, the indirect interactions between the two moth species via shared natural enemies were able to cause asynchrony in the population cycles, corresponding to field observations from previous sympatric outbreak areas. The invasion may therefore cause drastic changes in the resident community, for example by prolonging outbreak periods of birch-feeding moths, increasing the average population densities of the moths or, alternatively, leading to extinction of the resident moth species or to equilibrium densities of the two formerly cyclic herbivores. PMID:24380810

  19. Cardiopulmonary Resuscitation Pattern Evaluation Based on Ensemble Empirical Mode Decomposition Filter via Nonlinear Approaches.

    PubMed

    Sadrawi, Muammar; Sun, Wei-Zen; Ma, Matthew Huei-Ming; Dai, Chun-Yi; Abbod, Maysam F; Shieh, Jiann-Shing

    2016-01-01

    Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMF) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and the detrended fluctuation algorithm (DFA) are computed, and statistical analysis is performed using ANOVA. The primary outcome measure is the patient survival rate after two hours. The CPR pattern of 951 asystole patients was analyzed for the quality of CPR delivered. There was no significant difference in the CPR-related IMF peak-to-peak interval analysis between patients younger and older than 60 years of age, and similarly for the amplitude-difference evaluation by SE and DFA. However, a difference was noted for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, associated with high complexity of the CPR-IMF amplitude differences. PMID:27529068
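
    Of the nonlinear measures above, sample entropy is the simplest to state: count template matches of length m and length m+1 within tolerance r, and take the negative log of their ratio. A minimal reference implementation on an illustrative series (quadratic in the series length, so suitable only for short signals):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn of sequence x with template length m and tolerance r."""
    n = len(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined for too-short or too-irregular series
    return -math.log(a / b)

# A strictly alternating series is highly regular: almost every length-m match
# extends to a length-(m+1) match, so SampEn is close to zero.
regular = [0.0, 1.0] * 20
print(round(sample_entropy(regular, m=2, r=0.1), 3))  # -> 0.054
```

    Irregular signals yield larger values, which is how SE separates chaotic-looking CPR segments from regular ones.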

  20. Development of Items for a Pedagogical Content Knowledge Test Based on Empirical Analysis of Pupils' Errors

    NASA Astrophysics Data System (ADS)

    Jüttner, Melanie; Neuhaus, Birgit J.

    2012-05-01

    In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and of ways of dealing with them. The study investigated drawings by 9th- and 10th-grade German pupils (n = 461) in an achievement test about the knee-jerk reflex in biology, which were analysed using inductive qualitative content analysis. These empirical data were used for the development of the items in the PCK test. The validity of the items was examined with think-aloud interviews of German secondary school teachers (n = 5), and reliability was then tested on the results of German secondary school biology teachers (n = 65) who took the PCK test. The results indicate that the items are satisfactorily reliable (Cronbach's alpha values ranged from 0.60 to 0.65). We suggest that a larger sample size and American biology teachers be used in further studies. The findings about teachers' professional knowledge from the PCK test could provide new information about the influence of teachers' knowledge on their pupils' understanding of biology and their possible errors in learning biology.
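
    For reference, the reliability coefficient reported here, Cronbach's alpha, is straightforward to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score). The scores below are made up purely for illustration:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """`items` is a list of per-item score lists (same respondents, same order)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three perfectly parallel items give an alpha of 1 (up to rounding).
scores = [1, 3, 2, 5, 4]
print(round(cronbach_alpha([scores, scores, scores]), 6))
```

    Values of 0.60-0.65, as reported above, indicate items that covary moderately but far from perfectly.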

  1. Cardiopulmonary Resuscitation Pattern Evaluation Based on Ensemble Empirical Mode Decomposition Filter via Nonlinear Approaches

    PubMed Central

    Ma, Matthew Huei-Ming

    2016-01-01

    Good quality cardiopulmonary resuscitation (CPR) is the mainstay of treatment for managing patients with out-of-hospital cardiac arrest (OHCA). Assessment of the quality of the CPR delivered is now possible through the electrocardiography (ECG) signal that can be collected by an automated external defibrillator (AED). This study evaluates a nonlinear approximation of the CPR given to asystole patients. The raw ECG signal is filtered using ensemble empirical mode decomposition (EEMD), and the CPR-related intrinsic mode functions (IMF) are chosen for evaluation. In addition, sample entropy (SE), complexity index (CI), and the detrended fluctuation algorithm (DFA) are computed, and statistical analysis is performed using ANOVA. The primary outcome measure is the patient survival rate after two hours. The CPR pattern of 951 asystole patients was analyzed for the quality of CPR delivered. There was no significant difference in the CPR-related IMF peak-to-peak interval analysis between patients younger and older than 60 years of age, and similarly for the amplitude-difference evaluation by SE and DFA. However, a difference was noted for the CI (p < 0.05). The results show that the patient group younger than 60 years has a higher survival rate, associated with high complexity of the CPR-IMF amplitude differences. PMID:27529068

  2. Improving the empirical model for plasma nitrided AISI 316L corrosion resistance based on Mössbauer spectroscopy

    NASA Astrophysics Data System (ADS)

    Campos, M.; de Souza, S. D.; de Souza, S.; Olzon-Dionysio, M.

    2011-11-01

    Traditional plasma nitriding treatments using temperatures ranging from approximately 650 to 730 K can improve the wear resistance, corrosion resistance and surface hardness of stainless steels. The nitrided layer consists of several iron nitrides: the cubic γ' phase (Fe4N), the hexagonal ɛ phase (Fe2-3N) and a nitrogen-supersaturated solid phase γN. An empirical model has been proposed to explain the corrosion resistance of AISI 316L and ASTM F138 nitrided samples based on Mössbauer spectroscopy results: the larger the ratio between the ɛ and γ' phase fractions of the sample, the better its corrosion resistance. In this work, this model is examined using new results from AISI 316L samples nitrided under the same gas composition and temperature as before, but at a different pressure, for 3, 4 and 5 h. The sample nitrided for 4 h, whose ɛ/γ' value is maximum (= 0.73), shows a slightly better response than the other two samples, nitrided for 5 and 3 h (ɛ/γ' = 0.72 and 0.59, respectively). Moreover, these samples show very similar behavior; therefore, this set of samples was not suitable for testing the empirical model. However, the comparison between the present potentiodynamic polarization curves and those obtained previously at 4 and 4.5 torr indicates that the corrosion resistance of the sample presenting only the γN phase was the worst of them all. The empirical model thus appears unable, as it stands, to explain the corrosion response, and it should be improved to include the γN phase.

  3. Experimentation and Theoretic Calculation of a BODIPY Sensor Based on Photoinduced Electron Transfer for Ions Detection

    NASA Astrophysics Data System (ADS)

    Lu, Hua; Zhang, Shushu; Liu, Hanzhuang; Wang, Yanwei; Shen, Zhen; Liu, Chungen; You, Xiaozeng

    2009-12-01

    A boron-dipyrromethene (BODIPY)-based fluorescence probe with a N,N'-(pyridine-2,6-diylbis(methylene))-dianiline substituent (1) has been prepared by condensation of 2,6-pyridinedicarboxaldehyde with 8-(4-amino)-4,4-difluoro-1,3,5,7-tetramethyl-4-bora-3a,4a-diaza-s-indacene and reduction by NaBH4. The sensing properties of compound 1 toward various metal ions were investigated via fluorometric titration in methanol, showing a highly selective fluorescent turn-on response in the presence of Hg2+ over other metal ions such as Li+, Na+, K+, Ca2+, Mg2+, Pb2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+, Cd2+, Ag+, and Mn2+. A computational study was carried out to investigate why compound 1 provides different fluorescent signals for Hg2+ and the other ions. Theoretical calculations of the energy levels show that the quenching of the bright green fluorescence of the boradiazaindacene fluorophore is due to reductive photoinduced electron transfer (PET) from the aniline subunit to the excited state of the BODIPY fluorophore. In the metal complexes, the frontier molecular orbital energy levels change greatly. Binding a Zn2+ or Cd2+ ion significantly lowers both the HOMO and LUMO energy levels of the receptor, which inhibits the reductive PET process but allows an oxidative PET from the excited-state fluorophore to the receptor, which also quenches the fluorescence. For the 1-Hg2+ complex, however, both the reductive and oxidative PET pathways are prohibited; therefore, strong fluorescence emission from the fluorophore is observed experimentally. The agreement between the experimental results and the theoretical calculations suggests that our computational method can serve as guidance for the design of new chemosensors for other metal ions.

  4. Providing a contextual base and a theoretical structure to guide the teaching of science from early years to senior years

    NASA Astrophysics Data System (ADS)

    Stinner, Arthur

    1996-07-01

    This paper addresses the need for, and the problem of, organizing a science curriculum around contextual settings and science stories that serve to involve and motivate students to develop a scientific understanding of the world (with emphasis on physical science). A program of activities placed around contextual settings, science stories and contemporary issues of interest is recommended in an attempt to move toward a slow and secure abolition of the gulf between scientific knowledge and ‘common sense’ beliefs. A conceptual development is described to guide the connection between theory and evidence on a level appropriate for children, from the early years to the senior years. For the senior years it is also important to connect the activity of teaching to a sound theoretical structure. The theoretical structure must illuminate the status of theory in science, establish what counts as evidence, clarify the relationship between experiment and explanation, and make connections to the history of science. The paper concludes with a proposed program of activities in terms of a sequence of theoretical and empirical activities that involve contextual settings, science stories, large-context problems, thematic teaching, and the teaching of popular science literature.

  5. Empirical evidence for identical band gaps in substituted C{sub 60} and C{sub 70} based fullerenes

    SciTech Connect

    Mattias Andersson, L.; Tanaka, Hideyuki

    2014-01-27

    Optical absorptance data, and a strong correlation between solar cell open circuit voltages and the ionization potentials of a wide range of differently substituted fullerene acceptors, are presented as empirical evidence for identical, or at least very similar, band gaps in all substituted C{sub 60} and C{sub 70} based fullerenes. Both the number and kind of substituents in this study are sufficiently varied to imply generality. While the band gaps of the fullerenes remain the same for all the different substitutions, their ionization potentials vary greatly in a span of more than 0.4 eV. The merits and drawbacks of using these results together with photoelectron based techniques to determine relative fullerene energy levels for, e.g., organic solar cell applications compared to more direct electrochemical methods are also discussed.

  6. Theoretical analysis of transcranial Hall-effect stimulation based on passive cable model

    NASA Astrophysics Data System (ADS)

    Yuan, Yi; Li, Xiao-Li

    2015-12-01

    Transcranial Hall-effect stimulation (THS) is a new stimulation method in which an ultrasonic wave in a static magnetic field generates an electric field in an area of interest, such as the brain, to modulate neuronal activities. However, the biophysical basis of stimulating neurons in this way remains unknown. To address this problem, we perform a theoretical analysis based on a passive cable model to investigate the THS mechanism of neurons. Nerve tissues are conductive; an ultrasonic wave can move ions embedded in the tissue in a static magnetic field to generate an electric field (due to the Lorentz force). In this study, a simulation model for an ultrasonically induced electric field in a static magnetic field is derived. Then, based on the passive cable model, the analytical solution for the voltage distribution in a nerve tissue is determined. The simulation results show that THS can generate a voltage to stimulate neurons. Because the THS method possesses a higher spatial resolution and a deeper penetration depth, it shows promise as a tool for treating or rehabilitating neuropsychiatric disorders. Project supported by the National Natural Science Foundation of China (Grant Nos. 61273063 and 61503321), the China Postdoctoral Science Foundation (Grant No. 2013M540215), the Natural Science Foundation of Hebei Province, China (Grant No. F2014203161), and the Youth Research Program of Yanshan University, China (Grant No. 02000134).
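The steady-state behavior of a passive cable, the model underlying the analysis above, can be sketched numerically. For a semi-infinite cable the membrane voltage decays exponentially with distance over the space constant λ = sqrt(r_m / r_i); the parameter values below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative cable parameters (hypothetical values, not from the paper).
r_m = 2.0e4  # membrane resistance x unit length, ohm * cm
r_i = 1.0e2  # intracellular (axial) resistance per unit length, ohm / cm
lam = np.sqrt(r_m / r_i)  # space constant lambda, cm

def steady_state_voltage(v0, x):
    """Steady-state voltage along a semi-infinite passive cable:
    V(x) = V0 * exp(-x / lambda)."""
    return v0 * np.exp(-np.asarray(x) / lam)
```

One space constant away from the injection site the voltage has decayed to 1/e of its boundary value, which is why λ sets the spatial resolution of any cable-model stimulation estimate.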

  7. Extended charge accumulation in ruthenium-4H-imidazole-based black absorbers: a theoretical design concept.

    PubMed

    Kupfer, Stephan

    2016-05-11

    A theoretically guided design concept aiming to achieve highly efficient unidirectional charge transfer and multi-charge separation upon successive photoexcitation for light-harvesting dyes in the scope of supramolecular photocatalysts is presented. Four 4H-imidazole-ruthenium(ii) complexes incorporating a biimidazole-based electron-donating ligand sphere have been designed based on the well-known 4H-imidazole-ruthenium(ii) polypyridyl dyes. The quantum chemical evaluation, performed at the density functional and time-dependent density functional level of theory, revealed extraordinary unidirectional charge transfer bands from the near-infrared to the ultraviolet region of the absorption spectrum upon multi-photoexcitation. Spectro-electrochemical simulations modeling photoexcited intermediates determined the outstanding multi-electron storage capacity of this novel class of black dyes. These remarkable photochemical and photophysical properties are preserved upon site-specific protonation, rendering 4H-imidazole-ruthenium(ii) biimidazole dyes ideal for light-harvesting applications in the field of solar energy conversion. PMID:27121270

  8. Theoretical investigation of the halogen bonded complexes between carbonyl bases and molecular chlorine.

    PubMed

    Zierkiewicz, Wiktor; Bieńko, Dariusz C; Michalska, Danuta; Zeegers-Huyskens, Thérèse

    2015-04-30

    The halogen bonded complexes between six carbonyl bases and molecular chlorine are investigated theoretically. The interaction energies calculated at the CCSD(T)/aug-cc-pVTZ level range between -1.61 and -3.50 kcal mol(-1). These energies are related to the ionization potential, proton affinity, and also to the most negative values (V(s,min)) on the electrostatic potential surface of the carbonyl bases. A symmetry adapted perturbation theory decomposition of the energies has been performed. The interaction results in an elongation of the Cl-Cl bond and a contraction of the CF and CH bonds accompanied by a blue shift of the ν(CH) vibrations. The properties of the Cl2 molecules are discussed as a function of the σ*(Cl-Cl) occupation, the hybridization, and the occupation of the Rydberg orbitals of the two chlorine atoms. Our calculations predict a large enhancement of the infrared and Raman intensities of the ν(Cl-Cl) vibration on going from isolated to complexed Cl2. PMID:25727322

  9. An all-atom structure-based potential for proteins: bridging minimal models with all-atom empirical forcefields.

    PubMed

    Whitford, Paul C; Noel, Jeffrey K; Gosavi, Shachi; Schug, Alexander; Sanbonmatsu, Kevin Y; Onuchic, José N

    2009-05-01

    Protein dynamics take place on many time and length scales. Coarse-grained structure-based (Go) models utilize the funneled energy landscape theory of protein folding to provide an understanding of both long time and long length scale dynamics. All-atom empirical forcefields with explicit solvent can elucidate our understanding of short time dynamics with high energetic and structural resolution. Thus, structure-based models with atomic details included can be used to bridge our understanding between these two approaches. We report on the robustness of folding mechanisms in one such all-atom model. Results for the B domain of Protein A, the SH3 domain of C-Src Kinase, and Chymotrypsin Inhibitor 2 are reported. The interplay between side chain packing and backbone folding is explored. We also compare this model to a C(alpha) structure-based model and an all-atom empirical forcefield. Key findings include: (1) backbone collapse is accompanied by partial side chain packing in a cooperative transition and residual side chain packing occurs gradually with decreasing temperature, (2) folding mechanisms are robust to variations of the energetic parameters, (3) protein folding free-energy barriers can be manipulated through parametric modifications, (4) the global folding mechanisms in a C(alpha) model and the all-atom model agree, although differences can be attributed to energetic heterogeneity in the all-atom model, and (5) proline residues have significant effects on folding mechanisms, independent of isomerization effects. Because this structure-based model has atomic resolution, this work lays the foundation for future studies to probe the contributions of specific energetic factors on protein folding and function. PMID:18837035

  10. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    PubMed

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse-grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and application of a signal minimisation algorithm on selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and the data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in a 50 mm thick coarse-grained austenitic stainless steel specimen. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in the 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. This methodology was further employed for successful imaging of defects in a B-scan. PMID:25488024
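The decibel figures quoted above follow from the standard definition of signal-to-noise ratio; the helper below is a generic illustration (names and data are hypothetical, not from the paper).

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from sampled signal and noise."""
    p_signal = np.mean(np.square(np.asarray(signal, dtype=float)))
    p_noise = np.mean(np.square(np.asarray(noise, dtype=float)))
    return 10.0 * np.log10(p_signal / p_noise)
```

On this scale, a 15 dB improvement corresponds to roughly a 32-fold increase in the signal-to-noise power ratio (10^1.5 ≈ 31.6).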

  11. Changing Healthcare Providers’ Behavior during Pediatric Inductions with an Empirically-based Intervention

    PubMed Central

    Martin, Sarah R.; Chorney, Jill MacLaren; Tan, Edwin T.; Fortier, Michelle A.; Blount, Ronald L.; Wald, Samuel H.; Shapiro, Nina L.; Strom, Suzanne L.; Patel, Swati; Kain, Zeev N.

    2011-01-01

    Background Each year over 4 million children experience significant levels of preoperative anxiety, which has been linked to poor recovery outcomes. Healthcare providers (HCPs) and parents represent key resources for children to help them manage their preoperative anxiety. The present study reports on the development and preliminary feasibility testing of a new intervention designed to change HCP and parent perioperative behaviors that have previously been reported to be associated with children’s coping and stress behaviors before surgery. Methods An empirically derived intervention, Provider-Tailored Intervention for Perioperative Stress, was developed to train HCPs to increase behaviors that promote children’s coping and decrease behaviors that may exacerbate children’s distress. Rates of HCP behaviors were coded and compared between pre-intervention and post-intervention. Additionally, rates of parents’ behaviors were compared between parents who interacted with HCPs before training and parents who interacted with HCPs post-intervention. Results Effect sizes indicated that HCPs who underwent training demonstrated increases in rates of desired behaviors (range: 0.22 to 1.49) and decreases in rates of undesired behaviors (range: 0.15 to 2.15). Additionally, parents, who were indirectly trained, also demonstrated changes in their rates of desired (range: 0.30 to 0.60) and undesired behaviors (range: 0.16 to 0.61). Conclusions The intervention successfully modified HCP and parent behaviors. It represents a potentially new clinical way to decrease anxiety in children. A multi-site randomized controlled trial, recently funded by the National Institute of Child Health and Human Development, will examine the efficacy of this intervention in reducing children’s preoperative anxiety and improving children’s postoperative recovery. PMID:21606826

  12. Empirical modeling the topside ion density around 600 km based on ROCSAT-1 satellite observations

    NASA Astrophysics Data System (ADS)

    Liu, Libo; Le, Huijun; Chen, Yiding; Wan, Weixing; Huang, He

    The ROCSAT-1 satellite operated in a circular orbit at an altitude of 600 km with an inclination of 35 degrees from 1999 to 2004. The Ionospheric Plasma and Electrodynamics Instrument (IPEI) on board the satellite includes an Ion Trap (IT), which was mainly used to measure the total ion concentration. An empirical model of ion density was constructed using the 1 s resolution IT data, for the solar proxy P10.7 ranging from 100 to 240 sfu under relatively quiet geomagnetic conditions (Ap ≤ 22). The model describes the ion density variations as functions of local time, day of year, solar activity level, longitude, and height within the altitude range of 560-660 km. An outstanding merit of the model is that it takes the altitude variation of the ion density into account. The model reproduces the ROCSAT-1 ion density accurately, with a root mean square error (RMSE) of 0.141, performing better than the International Reference Ionosphere 2007 (IRI2007), whose RMSE is 5.986. Furthermore, we use it to predict ion densities observed at similar altitudes by other satellites, such as the Japanese HINOTORI satellite, to further validate the model. The comparisons show that the relative error lies within ±5% for 94.5% of the data and within ±15% for more than 99% of the data. The model provides a way to describe the temporal and spatial variation of topside ion density. It can capture the variations of ion density under given conditions (including height) without altitude normalization before modeling, which makes it more conducive to understanding the climatological features of topside ion density. Acknowledgements: Ionosonde data are provided from BNOSE of IGGCAS. This research was supported by the projects of Chinese Academy of Sciences (KZZD-EW-01-3), National Key Basic Research Program of China (2012CB825604), and National Natural Science Foundation of China (41231065, 41174137).
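The RMSE and relative-error figures used to validate the model follow standard definitions; the functions below are generic illustrations, not the authors' code, and the sample numbers are hypothetical.

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error between model predictions and observations."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((p - o) ** 2))

def relative_error_pct(predicted, observed):
    """Percentage relative error of each prediction."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    return 100.0 * (p - o) / o
```

The ±5% and ±15% bands quoted in the abstract are bins of this per-sample relative error, while the single RMSE number summarizes the overall fit.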

  13. Combining Empirical Relationships with Data Based Mechanistic Modeling to Inform Solute Tracer Investigations across Stream Orders

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.; Covino, T. P.; Mortensen, J.

    2015-12-01

    Solute transport studies in streams and rivers often begin with the introduction of conservative and reactive tracers into the water column. Information on the transport of these substances is then captured within tracer breakthrough curves (BTCs) and used to estimate, for instance, travel times and dissolved nutrient and carbon dynamics. Traditionally, these investigations have been limited to systems with small discharges (< 200 L/s) and small reach lengths (< 500 m), partly due to the need for a priori information on the reach's hydraulic characteristics (e.g., channel geometry, resistance and dispersion coefficients) to predict arrival times, times to peak solute concentration and mean travel times. Current techniques for acquiring these channel characteristics through preliminary tracer injections become cost prohibitive at higher stream orders, and the use of semi-continuous water quality sensors for collecting real-time information may be compromised in larger systems by erroneous readings caused by high turbidity (e.g., nitrate signals with SUNA instruments or fluorescence measures) and/or by high total dissolved solids (e.g., making the use of salt tracers such as NaCl prohibitively expensive). Additionally, a successful time-of-travel study is valuable for only a single discharge and river stage. We have developed a method to predict tracer BTCs to inform sampling frequencies at small and large stream orders, using empirical relationships developed from multiple tracer injections spanning several orders of magnitude in discharge and reach length. This method was successfully tested in 1st to 8th order systems along the Middle Rio Grande River Basin in New Mexico, USA.

  14. An Empirical Study of Neural Network-Based Audience Response Technology in a Human Anatomy Course for Pharmacy Students.

    PubMed

    Fernández-Alemán, José Luis; López-González, Laura; González-Sequeros, Ofelia; Jayne, Chrisina; López-Jiménez, Juan José; Carrillo-de-Gea, Juan Manuel; Toval, Ambrosio

    2016-04-01

    This paper presents an empirical study of a formative neural network-based assessment approach that uses mobile technology to provide pharmacy students with intelligent diagnostic feedback. An unsupervised learning algorithm was integrated with an audience response system called SIDRA in order to generate states that capture commonalities in responses to questions and to add diagnostic feedback for guided learning. A total of 89 pharmacy students enrolled on a Human Anatomy course were taught using two different teaching methods. Forty-four students employed intelligent SIDRA (i-SIDRA), whereas 45 students received the same training but without using i-SIDRA. A statistically significant difference was found between the experimental group (i-SIDRA) and the control group (traditional learning methodology), with t(87) = 6.598, p < 0.001. In four MCQ tests, the difference between the number of correct answers in the first attempt and in the last attempt was also studied. A global effect size of 0.644 was achieved in the meta-analysis carried out. The students expressed satisfaction with the content provided by i-SIDRA and the methodology used during the process of learning anatomy (M = 4.59). The new empirical contribution presented in this paper allows instructors to perform post hoc analyses of each particular student's progress to ensure appropriate training. PMID:26815339
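The between-group comparison reported above (t(87) = 6.598) uses a standard two-sample t statistic; a minimal pooled-variance version is sketched below with illustrative data, not the study's.

```python
import numpy as np

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled variance across the two groups.
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
```

With group sizes of 44 and 45, the degrees of freedom are 44 + 45 - 2 = 87, which matches the reported t(87).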

  15. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study of a backstepping-based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking by the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of perfect and full state information of the target, to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such a consensus-based control problem can be viewed through the concepts of algebraic graph theory. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group, so as to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown.
This research also investigates the cases where the communication
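The consensus idea invoked above can be illustrated with the standard first-order discrete-time consensus update built from the graph Laplacian. This is a generic sketch of graph-based consensus, not the paper's backstepping controller, and the ring topology and step size below are hypothetical.

```python
import numpy as np

def consensus_step(states, adjacency, eps=0.1):
    """One step of discrete-time average consensus:
    x_i <- x_i + eps * sum_j a_ij * (x_j - x_i)."""
    A = np.asarray(adjacency, dtype=float)
    x = np.asarray(states, dtype=float)
    laplacian = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A
    return x - eps * laplacian @ x

# Ring communication topology for 4 agents (hypothetical example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([0.0, 2.0, 4.0, 6.0])  # initial agent states
for _ in range(200):
    x = consensus_step(x, A)
# States converge toward the average of the initial states (3.0).
```

For an undirected, connected graph and a sufficiently small step size, the update preserves the state average and drives all agents to it, which is the property consensus-based cooperative controllers build on.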

  16. Holding-based network of nations based on listed energy companies: An empirical study on two-mode affiliation network of two sets of actors

    NASA Astrophysics Data System (ADS)

    Li, Huajiao; Fang, Wei; An, Haizhong; Gao, Xiangyun; Yan, Lili

    2016-05-01

    Economic networks in the real world are not homogeneous; therefore, it is important to study economic networks with heterogeneous nodes and edges to simulate a real network more precisely. In this paper, we present an empirical study of the one-mode derivative holding-based network constructed by the two-mode affiliation network of two sets of actors using the data of worldwide listed energy companies and their shareholders. First, we identify the primitive relationship in the two-mode affiliation network of the two sets of actors. Then, we present the method used to construct the derivative network based on the shareholding relationship between two sets of actors and the affiliation relationship between actors and events. After constructing the derivative network, we analyze different topological features on the node level, edge level and entire network level and explain the meanings of the different values of the topological features combining the empirical data. This study is helpful for expanding the usage of complex networks to heterogeneous economic networks. For empirical research on the worldwide listed energy stock market, this study is useful for discovering the inner relationships between the nations and regions from a new perspective.
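The construction of a one-mode derivative network from a two-mode affiliation network, as described above, can be sketched with a matrix product; the tiny company-shareholder matrix below is hypothetical, not the paper's data.

```python
import numpy as np

# Two-mode affiliation matrix B: rows = listed companies, cols = shareholders.
# B[i, j] = 1 if shareholder j holds shares of company i (hypothetical data).
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])

# One-mode projection onto companies: W[i, k] counts the shareholders that
# companies i and k have in common.
W = B @ B.T
np.fill_diagonal(W, 0)  # drop self-links
```

Replacing the 0/1 indicator with holding sizes yields a weighted projection in the same way, which is one route to a holding-based derivative network.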

  17. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    SciTech Connect

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-09-15

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction.Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms.Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  18. The NIHR collaboration for leadership in applied health research and care (CLAHRC) for greater manchester: combining empirical, theoretical and experiential evidence to design and evaluate a large-scale implementation strategy

    PubMed Central

    2011-01-01

    Background In response to policy recommendations, nine National Institute for Health Research (NIHR) Collaborations for Leadership in Applied Health Research and Care (CLAHRCs) were established in England in 2008, aiming to create closer working between the health service and higher education and narrow the gap between research and its implementation in practice. The Greater Manchester (GM) CLAHRC is a partnership between the University of Manchester and twenty National Health Service (NHS) trusts, with a five-year mission to improve healthcare and reduce health inequalities for people with cardiovascular conditions. This paper outlines the GM CLAHRC approach to designing and evaluating a large-scale, evidence- and theory-informed, context-sensitive implementation programme. Discussion The paper makes a case for embedding evaluation within the design of the implementation strategy. Empirical, theoretical, and experiential evidence relating to implementation science and methods has been synthesised to formulate eight core principles of the GM CLAHRC implementation strategy, recognising the multi-faceted nature of evidence, the complexity of the implementation process, and the corresponding need to apply approaches that are situationally relevant, responsive, flexible, and collaborative. In turn, these core principles inform the selection of four interrelated building blocks upon which the GM CLAHRC approach to implementation is founded. These determine the organizational processes, structures, and roles utilised by specific GM CLAHRC implementation projects, as well as the approach to researching implementation, and comprise: the Promoting Action on Research Implementation in Health Services (PARIHS) framework; a modified version of the Model for Improvement; multiprofessional teams with designated roles to lead, facilitate, and support the implementation process; and embedded evaluation and learning. Summary Designing and evaluating a large-scale implementation

  19. Multiscale Detrended Cross-Correlation Analysis of Traffic Time Series Based on Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Yin, Yi; Shang, Pengjian

    2015-04-01

    In this paper, we propose multiscale detrended cross-correlation analysis (MSDCCA) to detect long-range power-law cross-correlation between signals in the presence of nonstationarity. To improve performance and robustness, we further introduce empirical mode decomposition (EMD) to eliminate noise effects; the combination of MSDCCA with EMD is called the MS-EDXA method. We then systematically investigate the multiscale cross-correlation structure of real traffic signals, applying the MSDCCA and MS-EDXA methods to study cross-correlations in three situations: velocity and volume on one lane, velocities at the present and the next moment, and velocities on adjacent lanes, and comparing their spectra. Where the difference between the MSDCCA and MS-EDXA spectra becomes negligible, there is a crossover marking the turning point of the difference. The crossover results from the competition between the noise effects in the original signals and the intrinsic fluctuation of the traffic signals, and it divides the plot of the spectra into two regions. In all three cases, the MS-EDXA method raises the average of the local scaling exponents, lowers their standard deviation, and yields relatively stable, persistent scaling of the cross-correlated behaviour, making the analysis more precise and more robust once the noise is removed. Applying the MS-EDXA method also avoids the inaccurate characterisation of the multiscale cross-correlation structure at short scales (the spectrum minimum, the range of the spectrum fluctuation, and the general trend) caused by noise in the original signals. 
We conclude that traffic velocity and volume are long-range cross-correlated, consistent with their actual evolution, while velocities at the present and the next moment and velocities on adjacent lanes show strong cross-correlations both in temporal and
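
    The detrended cross-correlation core of MSDCCA (integrate the series, linearly detrend each box, and average the residual covariances across scales) can be sketched as follows. This is a minimal illustration in Python, not the authors' code; the synthetic shared-noise signals and the scale list are assumptions:

```python
import numpy as np

def dcca_fluctuation(x, y, scales):
    """Detrended cross-correlation fluctuation F(s) for two equal-length series."""
    X = np.cumsum(x - np.mean(x))   # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    F = []
    for s in scales:
        n_boxes = len(X) // s
        t = np.arange(s)
        covs = []
        for b in range(n_boxes):
            xs = X[b * s:(b + 1) * s]
            ys = Y[b * s:(b + 1) * s]
            # remove the local linear trend in each box
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            covs.append(np.mean(rx * ry))
        F.append(np.sqrt(np.abs(np.mean(covs))))
    return np.array(F)

# two series driven by a common noise term (illustrative stand-in for traffic data)
rng = np.random.default_rng(0)
common = rng.standard_normal(4096)
x = common + 0.5 * rng.standard_normal(4096)
y = common + 0.5 * rng.standard_normal(4096)

scales = np.array([16, 32, 64, 128, 256])
F = dcca_fluctuation(x, y, scales)
# the log-log slope of F(s) vs s estimates the cross-correlation scaling exponent
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

The local scaling exponents whose average and spread the abstract discusses are obtained by repeating this slope estimate over sliding windows of scales.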

  20. Meta-Analysis of Group Learning Activities: Empirically Based Teaching Recommendations

    ERIC Educational Resources Information Center

    Tomcho, Thomas J.; Foels, Rob

    2012-01-01

    Teaching researchers commonly employ group-based collaborative learning approaches in Teaching of Psychology teaching activities. However, the authors know relatively little about the effectiveness of group-based activities in relation to known psychological processes associated with group dynamics. Therefore, the authors conducted a meta-analytic…

  1. Empirically Supported Family-Based Treatments for Conduct Disorder and Delinquency in Adolescents

    ERIC Educational Resources Information Center

    Henggeler, Scott W.; Sheidow, Ashli J.

    2012-01-01

    Several family-based treatments of conduct disorder and delinquency in adolescents have emerged as evidence-based and, in recent years, have been transported to more than 800 community practice settings. These models include multisystemic therapy, functional family therapy, multidimensional treatment foster care, and, to a lesser extent, brief…

  2. Formula-Based Public School Funding System in Victoria: An Empirical Analysis of Equity

    ERIC Educational Resources Information Center

    Bandaranayake, Bandara

    2013-01-01

    This article explores the formula-based school funding system in the state of Victoria, Australia, where state funds are directly allocated to schools based on a range of equity measures. The impact of Victoria's funding system for education in terms of alleviating inequality and disadvantage is contentious, to say the least. It is difficult…

  3. [Empiric treatment of pyelonephritis].

    PubMed

    Iarovoĭ, S K; Shimanovskiĭ, N L; Kareva, E N

    2011-01-01

    The article analyses the most typical clinical situations in the empirical treatment of pyelonephritis, including those complicated by severe comorbid diseases: decompensated diabetes mellitus, chronic renal failure, and HIV infection. The choice of antibacterial medicines for empirical treatment of pyelonephritis is based on the results of the latest studies of the antibiotic resistance of pyelonephritis pathogens, as well as on specific features of the pharmacokinetics and pharmacodynamics of antibacterial drugs. PMID:21815461

  4. Theoretical Evaluation of Electroactive Polymer Based Micropump Diaphragm for Air Flow Control

    NASA Technical Reports Server (NTRS)

    Xu, Tian-Bing; Su, Ji; Zhang, Qiming

    2004-01-01

    An electroactive polymer (EAP) actuation micropump diaphragm (PAMPD), based on a high-energy-electron-irradiated poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] copolymer, has been developed for air flow control. The displacement strokes and profiles have been characterized as functions of the amplitude and frequency of the applied electric field. The volume stroke rates (volume rates) as functions of electric field and driving frequency have also been evaluated theoretically. The PAMPD exhibits a high volume rate, which is easily tuned by varying either the amplitude or the frequency of the applied electric field. In addition, the performance of the diaphragms was modeled, and the agreement between the modeling results and experimental data confirms that the response of the diaphragms follows the design parameters. The results demonstrate that the diaphragm can serve in future aerospace applications to replace traditional complex mechanical systems, increase control capability, and reduce the weight of future aerodynamic control systems. KEYWORDS: Electroactive polymer (EAP), micropump, diaphragm, actuation, displacement, volume rate, pumping speed, clamping ratio.

  5. A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems

    PubMed Central

    Merrick, Kathryn E.; Shafi, Kamran

    2013-01-01

    An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797

  6. Theoretical study of the charge transport through C60-based single-molecule junctions

    NASA Astrophysics Data System (ADS)

    Bilan, S.; Zotti, L. A.; Pauly, F.; Cuevas, J. C.

    2012-05-01

    We present a theoretical study of the conductance and thermopower of single-molecule junctions based on C60 and C60-terminated molecules. We first analyze the transport properties of gold-C60-gold junctions and show that these junctions can be highly conductive (with conductances above 0.1G0, where G0=2e2/h is the quantum of conductance). Moreover, we find that the thermopower in these junctions is negative due to the fact that the lowest unoccupied molecular orbital dominates the charge transport, and its magnitude can reach several tens of microvolts per kelvin, depending on the contact geometry. On the other hand, we study the suitability of C60 as an anchoring group in single-molecule junctions. For this purpose, we analyze the transport through several dumbbell derivatives using C60 as anchors, and we compare the results with those obtained with thiol and amine groups. Our results show that the conductance of C60-terminated molecules is rather sensitive to the binding geometry. Moreover, the conductance of the molecules is typically reduced by the presence of the C60 anchors, which in turn makes the junctions more sensitive to the functionalization of the molecular core with appropriate side groups.
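
    For reference, the conductance quantum G0 = 2e2/h quoted above is straightforward to evaluate, and a junction transmission T maps to a conductance via the Landauer formula G = G0·T. This is the generic textbook relation, not code from this study; the transmission value 0.1 simply mirrors the "conductances above 0.1G0" figure in the abstract:

```python
# Landauer formula: G = G0 * T, with the conductance quantum G0 = 2 e^2 / h
e = 1.602176634e-19   # elementary charge, C (exact SI value)
h = 6.62607015e-34    # Planck constant, J s (exact SI value)

G0 = 2 * e ** 2 / h            # conductance quantum, ~7.75e-5 siemens
G_junction = 0.1 * G0          # a "highly conductive" gold-C60-gold junction (T = 0.1)
```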

  7. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether a change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, respiratory rate measurements from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test compared the performance of the proposed system with the gold standard under ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383
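
    The core idea (recover the breathing frequency from the chest-to-camera distance signal) can be illustrated with a spectral-peak sketch. This is a hypothetical simplification, not the published algorithm; the sampling rate, the 0.1-1 Hz band, and the synthetic chest signal are assumptions:

```python
import numpy as np

def respiratory_rate(distances, fs):
    """Estimate breaths per minute from a chest-to-camera distance signal
    by taking the dominant FFT frequency of the detrended signal."""
    d = distances - np.mean(distances)
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fs)
    # keep only the physiologically plausible band (0.1-1 Hz, i.e. 6-60 breaths/min)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    f_dom = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_dom

# synthetic chest motion: 0.25 Hz breathing (15 breaths/min), 10 Hz depth frames
fs = 10.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
distances = 1.2 + 0.01 * np.sin(2 * np.pi * 0.25 * t) \
                + 0.001 * rng.standard_normal(len(t))
bpm = respiratory_rate(distances, fs)
```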

  8. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    PubMed

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-01-01

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with a detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at reducing artifacts from head movements as well as at detecting blinking signals for mouse control. A Kalman filter is used as a state estimator for mouse position control and jitter removal. The detection rate obtained on average was 94.9%. The experimental setup and some obtained results are presented. PMID:23948873
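
    The jitter-removal role of the Kalman filter can be sketched with a one-dimensional constant-velocity state model. This is a generic textbook filter, not the authors' implementation; the noise parameters and the synthetic cursor sweep are assumptions:

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=4.0):
    """1-D constant-velocity Kalman filter for cursor jitter removal.
    z: noisy position measurements; q, r: process/measurement noise variances."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2)
    out = []
    for zi in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[zi]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(2)
true_pos = np.linspace(0, 100, 200)                    # steady cursor sweep
noisy = true_pos + 2.0 * rng.standard_normal(200)      # gyroscope jitter
filtered = kalman_smooth(noisy)
```

After the filter converges, the filtered trajectory tracks the sweep with far less jitter than the raw measurements.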

  10. Empirically Supported Treatments in Psychotherapy: Towards an Evidence-Based or Evidence-Biased Psychology in Clinical Settings?

    PubMed Central

    Castelnuovo, Gianluca

    2010-01-01

    The field of research and practice in psychotherapy has been deeply influenced by two different approaches: the empirically supported treatments (ESTs) movement, linked with the evidence-based medicine (EBM) perspective, and the “Common Factors” approach, typically connected with the “Dodo Bird Verdict”. Regarding the first perspective, a list of ESTs has been established in the mental health field since 1998, and criteria for “well-established” and “probably efficacious” treatments have been defined. The development of these paradigms was motivated by the emergence of a “managerial” approach, and of related systems of remuneration, for mental health providers and insurance companies. In this article ESTs are presented, along with some possible criticisms. Finally, complementary approaches that could add different evidence to psychotherapy research, in comparison with the traditional EBM approach, are presented. PMID:21833197

  11. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on the cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. Firstly, the intrinsic mode functions (IMFs) of the vibration and noise signals are obtained by the EEMD method, and the IMFs that share the same frequency bands are selected. Secondly, the spectral correlation coefficients between the selected IMFs are calculated, yielding the main frequency bands in which engine vibration has a significant impact on cab noise. Thirdly, the dominant frequencies are picked out and analyzed by spectral analysis. The study shows that the main frequency bands and dominant frequencies in which engine vibration seriously affects cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction in construction machinery.
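
    The spectral correlation step above amounts to correlating the magnitude spectra of IMF pairs that occupy the same band. A minimal sketch, with pure tones standing in for IMFs (the frequencies and signals are illustrative assumptions, not data from the study):

```python
import numpy as np

def spectral_correlation(a, b):
    """Pearson correlation between the magnitude spectra of two signals,
    used here to compare IMFs occupying the same frequency band."""
    A = np.abs(np.fft.rfft(a - np.mean(a)))
    B = np.abs(np.fft.rfft(b - np.mean(b)))
    return np.corrcoef(A, B)[0, 1]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
vib = np.sin(2 * np.pi * 120 * t)                 # engine-order vibration component
noise_related = np.sin(2 * np.pi * 120 * t + 0.5) # cab noise sharing the same band
noise_unrelated = np.sin(2 * np.pi * 37 * t)      # cab noise in an unrelated band

c_related = spectral_correlation(vib, noise_related)
c_unrelated = spectral_correlation(vib, noise_unrelated)
```

Bands whose IMF pairs score high, like the 120 Hz pair here, are the ones where engine vibration drives the cab noise.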

  12. Re-reading nursing and re-writing practice: towards an empirically based reformulation of the nursing mandate.

    PubMed

    Allen, Davina

    2004-12-01

    This article examines field studies of nursing work published in the English language between 1993 and 2003 as the first step towards an empirically based reformulation of the nursing mandate. A decade of ethnographic research reveals that, contrary to contemporary theories which promote an image of nursing work centred on individualised unmediated caring relationships, in real-life practice the core nursing contribution is that of the healthcare mediator. Eight bundles of activity that comprise this intermediary role are described utilising evidence from the literature. The mismatch between nursing's culture and ideals and the structure and constraints of the work setting is a chronic source of practitioner dissatisfaction. It is argued that the profession has little to gain by pursuing an agenda of holistic patient care centred on emotional intimacy and that an alternative occupational mandate focused on the healthcare mediator function might make for more humane health services and a more viable professional future. PMID:15601415

  13. Network-Based Enriched Gene Subnetwork Identification: A Game-Theoretic Approach

    PubMed Central

    Razi, Abolfazl; Afghah, Fatemeh; Singh, Salendra; Varadan, Vinay

    2016-01-01

    Identifying subsets of genes that jointly mediate cancer etiology, progression, or therapy response remains a challenging problem due to the complexity and heterogeneity in cancer biology, a problem further exacerbated by the relatively small number of cancer samples profiled as compared with the sheer number of potential molecular factors involved. Pure data-driven methods that merely rely on multiomics data have been successful in discovering potentially functional genes but suffer from high false-positive rates and tend to report subsets of genes whose biological interrelationships are unclear. Recently, integrative data-driven models have been developed to integrate multiomics data with signaling pathway networks in order to identify pathways associated with clinical or biological phenotypes. However, these approaches suffer from an important drawback of being restricted to previously discovered pathway structures and miss novel genomic interactions as well as potential crosstalk among the pathways. In this article, we propose a novel coalition-based game-theoretic approach to overcome the challenge of identifying biologically relevant gene subnetworks associated with disease phenotypes. The algorithm starts from a set of seed genes and traverses a protein–protein interaction network to identify modulated subnetworks. The optimal set of modulated subnetworks is identified using Shapley value that accounts for both individual and collective utility of the subnetwork of genes. The algorithm is applied to two illustrative applications, including the identification of subnetworks associated with (i) disease progression risk in response to platinum-based therapy in ovarian cancer and (ii) immune infiltration in triple-negative breast cancer. The results demonstrate an improved predictive power of the proposed method when compared with state-of-the-art feature selection methods, with the added advantage of identifying novel potentially functional gene subnetworks
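
    The Shapley value used to score each gene's individual and collective utility can be computed exactly for small sets by averaging marginal contributions over all orderings. A toy sketch (the two-gene utility function is invented purely for illustration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings (tractable for the small seed-gene sets used here)."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(perms) for p, total in phi.items()}

# toy utility: gene 'a' alone scores 1, 'b' alone 0, but together they score 3
def v(subset):
    if subset == frozenset('ab'):
        return 3.0
    if subset == frozenset('a'):
        return 1.0
    return 0.0

phi = shapley_values(['a', 'b'], v)   # 'b' earns credit only through synergy
```

By the efficiency property, the values sum to the worth of the full coalition, so synergistic genes receive credit even when they score nothing alone.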

  14. Budget impact of rare diseases: proposal for a theoretical framework based on evidence from Bulgaria.

    PubMed

    Iskrov, G; Jessop, E; Miteva-Katrandzhieva, T; Stefanov, R

    2015-05-01

    This study aimed to estimate the impact of rare disease (RD) drugs on the total drug budget of Bulgaria's National Health Insurance Fund (NHIF) for 2011-2014. While standard budget impact analysis is usually used prospectively, to assess the impact of new health technologies on the health system's sustainability, we adopted a retrospective approach instead. Budget impact was quantified from the NHIF perspective. Descriptive statistics were used to analyse cost details, while dynamics were studied using chain-linked growth rates (each period preceding the accounting period serves as the base). NHIF costs for RD therapies were expected to increase to 74.5 million BGN in 2014 (7.8% of the NHIF's total pharmaceutical expenditure). The greatest increases in cost per patient and in the number of patients treated were observed in conditions for which therapies had been newly approved for funding. While the basic cost drivers are well known - the number of patients treated and the mean cost per patient - in real-world settings these two factors are likely to depend on the availability and accessibility of effective innovative therapies. As RD were historically underdiagnosed, undertreated and underfunded in Bulgaria, improved access to RD drugs will inevitably lead to an increasing budget burden for payers. Based on the evidence from this study, we propose a theoretical framework for a budget impact study of RD. First, a retrospective analysis can provide essential health policy insights in terms of impact on accessibility and population health, which are significant benchmarks in shaping funding decisions in healthcare. We suggest combining the classical prospective BIA with retrospective analysis in order to optimise health policy decision-making. Second, we recommend that budget impact studies focus on RD rather than on orphan drugs (OD). In the policy context, RD are the public health priority; OD are just one of the tools to address the complex issues of RD. 
Moreover, OD is a dynamic
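
    The chain-linked growth rates used to study cost dynamics simply relate each period to the one immediately preceding it. A sketch (only the 74.5 million BGN figure for 2014 comes from the abstract; the earlier annual values are invented for illustration):

```python
def chain_linked_growth(series):
    """Growth rate of each period relative to the immediately preceding period."""
    return [(cur - prev) / prev for prev, cur in zip(series, series[1:])]

# illustrative annual RD drug costs in million BGN; only 74.5 (2014) is from the study
costs = [40.0, 52.0, 63.0, 74.5]
rates = chain_linked_growth(costs)   # one rate per year-on-year transition
```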

  15. Accurate Young's modulus measurement based on Rayleigh wave velocity and empirical Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Li, Mingxia; Feng, Zhihua

    2016-07-01

    This paper presents a method for Young's modulus measurement based on Rayleigh wave speed. The error in Poisson's ratio has only a weak influence on Young's modulus measured via Rayleigh wave speed, and Poisson's ratio varies minimally within a given material; thus, we can accurately estimate Young's modulus from the surface wave speed and a rough Poisson's ratio. We numerically analysed three methods using Rayleigh, longitudinal, and transversal wave speeds, respectively; the error in Poisson's ratio has the least influence on the result in the method based on Rayleigh wave speed. An experiment was performed and proved the feasibility of this method. The speed-measuring device can be small, and no sample pretreatment is needed; hence, a portable instrument based on this method is possible. The method makes a good compromise between usability and precision.
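
    The weak sensitivity to Poisson's ratio can be checked with the standard Viktorov approximation relating Rayleigh and shear wave speeds, c_R ≈ c_T(0.87 + 1.12ν)/(1 + ν), combined with E = 2ρ(1 + ν)c_T². A sketch with steel-like values (the specific numbers are illustrative, not from the paper):

```python
def youngs_modulus_from_rayleigh(c_r, rho, nu):
    """Estimate Young's modulus (Pa) from Rayleigh wave speed c_r (m/s),
    density rho (kg/m^3), and an approximate Poisson's ratio nu."""
    # Viktorov approximation: c_R ~ c_T * (0.87 + 1.12 nu) / (1 + nu)
    c_t = c_r * (1 + nu) / (0.87 + 1.12 * nu)   # shear wave speed
    return 2 * rho * (1 + nu) * c_t ** 2         # E = 2 rho (1 + nu) c_T^2

# steel-like values: c_R ~ 2950 m/s, rho = 7850 kg/m^3, nu ~ 0.29
E = youngs_modulus_from_rayleigh(2950.0, 7850.0, 0.29)

# a deliberately rough Poisson's ratio shifts the estimate only slightly,
# illustrating the weak sensitivity the paper exploits
E_rough = youngs_modulus_from_rayleigh(2950.0, 7850.0, 0.24)
rel_change = abs(E_rough - E) / E
```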

  16. Performance-based management and quality of work: an empirical assessment.

    PubMed

    Falzon, Pierre; Nascimento, Adelaide; Gaudart, Corinne; Piney, Cécile; Dujarier, Marie-Anne; Germe, Jean-François

    2012-01-01

    In France, in the private sector as in the public sector, performance-based management tends to become a norm. Performance-based management is supposed to improve service quality, productivity and efficiency, transparency of allotted means and achieved results, and to better focus the activity of employees and of the whole organization. This text reports a study conducted for the French Ministry of Budget by a team of researchers in ergonomics, sociology and management science, in order to assess the impact of performance-based management on employees, on teams and on work organization. About 100 interviews were conducted with employees of all categories and 6 working groups were set up in order to discuss and validate or amend our first analyses. Results concern several aspects: workload and work intensification, indicators and performance management and the transformation of jobs induced by performance management. PMID:22317310

  17. Building performance-based accountability with limited empirical evidence: performance measurement for public health preparedness.

    PubMed

    Shelton, Shoshana R; Nelson, Christopher D; McLees, Anita W; Mumford, Karen; Thomas, Craig

    2013-08-01

    Efforts to respond to performance-based accountability mandates for public health emergency preparedness have been hindered by a weak evidence base linking preparedness activities with response outcomes. We describe an approach to measure development that was successfully implemented in the Centers for Disease Control and Prevention Public Health Emergency Preparedness Cooperative Agreement. The approach leverages insights from process mapping and experts to guide measure selection, and provides mechanisms for reducing performance-irrelevant variation in measurement data. Also, issues are identified that need to be addressed to advance the science of measurement in public health emergency preparedness. PMID:24229520

  18. Introducing Evidence-Based Principles to Guide Collaborative Approaches to Evaluation: Results of an Empirical Process

    ERIC Educational Resources Information Center

    Shulha, Lyn M.; Whitmore, Elizabeth; Cousins, J. Bradley; Gilbert, Nathalie; al Hudib, Hind

    2016-01-01

    This article introduces a set of evidence-based principles to guide evaluation practice in contexts where evaluation knowledge is collaboratively produced by evaluators and stakeholders. The data from this study evolved in four phases: two pilot phases exploring the desirability of developing a set of principles; an online questionnaire survey…

  19. Teaching Standards-Based Group Work Competencies to Social Work Students: An Empirical Examination

    ERIC Educational Resources Information Center

    Macgowan, Mark J.; Vakharia, Sheila P.

    2012-01-01

    Objectives: Accreditation standards and challenges in group work education require competency-based approaches in teaching social work with groups. The Association for the Advancement of Social Work with Groups developed Standards for Social Work Practice with Groups, which serve as foundation competencies for professional practice. However, there…

  20. Homogeneity in Community-Based Rape Prevention Programs: Empirical Evidence of Institutional Isomorphism

    ERIC Educational Resources Information Center

    Townsend, Stephanie M.; Campbell, Rebecca

    2007-01-01

    This study examined the practices of 24 community-based rape prevention programs. Although these programs were geographically dispersed throughout one state, they were remarkably similar in their approach to rape prevention programming. DiMaggio and Powell's (1991) theory of institutional isomorphism was used to explain the underlying causes of…

  1. Journeys into Inquiry-Based Elementary Science: Literacy Practices, Questioning, and Empirical Study

    ERIC Educational Resources Information Center

    Howes, Elaine V.; Lim, Miyoun; Campos, Jaclyn

    2009-01-01

    Teaching literacy in inquiry-based science-teaching settings has recently become a focus of research in science education. Because professional scientists' uses of reading, writing, and speaking are foundational to their work, as well as to nonscientists' comprehension of it, it follows that literacy practices should also be central to science…

  2. Young Readers' Narratives Based on a Picture Book: Model Readers and Empirical Readers

    ERIC Educational Resources Information Center

    Hoel, Trude

    2015-01-01

    The article presents parts of a research project whose aim is to investigate six- to seven-year-old children's language use in storytelling. The children's oral texts are based on the wordless picture book "Frog, Where Are You?", which has been, and still remains, a frequent tool for collecting narratives from children. The Frog story…

  3. Perceptions of the Effectiveness of System Dynamics-Based Interactive Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Qudrat-Ullah, Hassan

    2010-01-01

    The use of simulations in general, and of system dynamics simulation-based interactive learning environments (SDILEs) in particular, is well recognized as an effective way of improving users' decision making and learning in complex, dynamic tasks. However, the effectiveness of SDILEs in classrooms has rarely been evaluated. This article describes…

  4. An Adaptive E-Learning System Based on Students' Learning Styles: An Empirical Study

    ERIC Educational Resources Information Center

    Drissi, Samia; Amirat, Abdelkrim

    2016-01-01

    Personalized e-learning implementation is recognized as one of the most interesting research areas in distance web-based education. Since the learning style of each learner is different, e-learning must be fitted to the different needs of learners. This paper presents an approach to integrating learning styles into adaptive e-learning hypermedia.…

  5. Inequality of Higher Education in China: An Empirical Test Based on the Perspective of Relative Deprivation

    ERIC Educational Resources Information Center

    Hou, Liming

    2014-01-01

    The primary goal of this paper is to examine what makes Chinese college students dissatisfied with entrance opportunities for higher education. Based on the author's survey data, we test two parameters that could be potential causes of this dissatisfaction: 1) distributive inequality, which emphasizes the individual's dissatisfaction caused by…

  6. Preparation, Practice, and Performance: An Empirical Examination of the Impact of Standards-Based Instruction on Secondary Students' Math and Science Achievement

    ERIC Educational Resources Information Center

    Thompson, Carla J.

    2009-01-01

    For almost two decades proponents of educational reform have advocated the use of standards-based education in maths and science classrooms for improving teacher practices, increasing student learning, and raising the quality of maths and science instruction. This study empirically examined the impact of specific standards-based teacher…

  7. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    PubMed Central

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho-kwang

    2014-01-01

    Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductors AFe2As2 (A = Ca, Sr, Eu, and Ba). The cation-radius dependence of the phase transition pressure (T → cT) shows the opposite trend, in which compounds whose cations have larger ambient radii have a higher transition pressure. PMID:25417655

  8. Specification-based software sizing: An empirical investigation of function metrics

    NASA Technical Reports Server (NTRS)

    Jeffery, Ross; Stathis, John

    1993-01-01

    For some time the software industry has espoused the need for improved specification-based software size metrics. This paper reports on a study of nineteen recently developed systems in a variety of application domains. The systems were developed by a single software services corporation using a variety of languages. The study investigated several metric characteristics. It shows that: earlier research into inter-item correlation within the overall function count is partially supported; a priori function counts, in themselves, do not explain the majority of the effort variation in software development in the organization studied; documentation quality is critical to accurate function identification; and rater error is substantial in manual function counting. The implications of these findings for organizations using function-based metrics are explored.
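
    An a priori function count of the kind studied here weights each function type by a complexity factor. A sketch using the standard IFPUG average-complexity weights (the example counts are invented; the paper does not publish its counting data):

```python
# IFPUG average-complexity weights per function type (assumption: every
# function is of average complexity, which real counts would refine)
WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def unadjusted_function_points(counts):
    """Sum of function-type counts times their complexity weights."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# hypothetical counts extracted from a specification
spec_counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}
ufp = unadjusted_function_points(spec_counts)
```

Rater error of the kind the study reports enters precisely in producing `spec_counts`, which is why documentation quality matters so much.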

  9. Cell death following BNCT: a theoretical approach based on Monte Carlo simulations.

    PubMed

    Ballarini, F; Bakeine, J; Bortolussi, S; Bruschi, P; Cansolino, L; Clerici, A M; Ferrari, C; Protti, N; Stella, S; Zonta, A; Zonta, C; Altieri, S

    2011-12-01

    In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. These studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called "lethal aberrations" (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the (10)B(n,α)(7)Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the (14)N(n,p)(14)C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, validating the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this validated the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death. PMID:21481595
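
    Under the model's assumption that lethal aberrations kill the cell, the surviving fraction is the probability of carrying zero lethal aberrations, which a Monte Carlo run can estimate and compare against the Poisson expectation exp(-mean). A minimal sketch, not the Pavia code; the mean aberration yield is an illustrative value:

```python
import numpy as np

def survival_fraction(mean_lethal, n_cells=100_000, seed=3):
    """Monte Carlo estimate of the surviving fraction: a cell survives iff
    it carries zero lethal aberrations, sampled here as Poisson-distributed."""
    rng = np.random.default_rng(seed)
    lethals = rng.poisson(mean_lethal, n_cells)
    return float(np.mean(lethals == 0))

# illustrative yield of 1.5 lethal aberrations per cell;
# the estimate should approach the analytic value exp(-1.5) ~ 0.223
est = survival_fraction(1.5)
```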

  10. Theoretical spectroscopic study of seven zinc(II) complex with macrocyclic Schiff-base ligand.

    PubMed

    Sayin, Koray; Kariper, Sultan Erkan; Sayin, Tuba Alagöz; Karakaş, Duran

    2014-12-10

Seven zinc complexes, namely [ZnL(1)](2+), [ZnL(2)](2+), [ZnL(3)](2+), [ZnL(4)](2+), [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+), are studied theoretically. Structural parameters, vibration frequencies, electronic absorption spectra and (1)H and (13)C NMR spectra are obtained for the Zn(II) complexes of macrocyclic penta- and heptaaza Schiff-base ligands. Vibration spectra of the Zn(II) complexes are studied by using Density Functional Theory (DFT) calculations at the B3LYP/LANL2DZ level. The UV-VIS and NMR spectra of the zinc complexes are obtained by using the Time Dependent-Density Functional Theory (TD-DFT) method and the GIAO method, respectively. Good agreement is found between the experimental data for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions and the calculated results. The geometries of the complexes are found to be distorted pentagonal planar for the [ZnL(1)](2+), [ZnL(2)](2+) and [ZnL(3)](2+) complex ions, distorted tetrahedral for the [ZnL(4)](2+) complex ion and distorted pentagonal bipyramidal for the [ZnL(5)](2+), [ZnL(6)](2+) and [ZnL(7)](2+) complex ions. A ranking of biological activity is determined by using quantum chemical parameters and is found to be: [ZnL(7)](2+)>[ZnL(6)](2+)>[ZnL(5)](2+)>[ZnL(3)](2+)>[ZnL(2)](2+)>[ZnL(1)](2+). PMID:24967540

  11. Patients’ Acceptance towards a Web-Based Personal Health Record System: An Empirical Study in Taiwan

    PubMed Central

    Liu, Chung-Feng; Tsai, Yung-Chieh; Jang, Fong-Lin

    2013-01-01

The health care sector has become increasingly interested in developing personal health record (PHR) systems as an Internet-based telehealthcare implementation to improve the quality and decrease the cost of care. However, the factors that influence patients’ intention to use PHR systems remain unclear. Based on physicians’ therapeutic expertise, we implemented a web-based PHR system for infertile patients and proposed an extended Technology Acceptance Model (TAM) that integrates the physician-patient relationship (PPR) construct into TAM’s original perceived ease of use (PEOU) and perceived usefulness (PU) constructs to explore which factors influence the behavioral intentions (BI) of infertile patients to use the PHR. Of ninety participants recruited at a medical center, 50 returned valid responses to a self-rating questionnaire, yielding a response rate of 55.56%. The partial least squares (PLS) technique was used to assess the causal relationships that were hypothesized in the extended model. The results indicate that infertile patients expressed a moderately high intention to use the PHR system. The PPR and PU of patients had significant effects on their BI to use the PHR, whereas the PEOU indirectly affected the patients’ BI through the PU. This investigation confirms that PPR can have a critical role in shaping patients’ perceptions of the use of healthcare information technologies. Hence, we suggest that hospitals should promote the potential usefulness of PHR and improve the quality of the physician-patient relationship to increase patients’ intention of using PHR. PMID:24142185

  12. An empirical evaluation of exemplar based image inpainting algorithms for natural scene image completion

    NASA Astrophysics Data System (ADS)

    Sangeetha, K.; Sengottuvelan, P.

    2013-03-01

Image inpainting is the process of filling in missing regions of an image so as to preserve its overall continuity; it is a manipulation and modification of an image in a form that is not easily detected. Digital image inpainting is a relatively new area of research, but numerous different approaches to the inpainting problem have been proposed since the concept was first introduced. This paper analyzes and compares two recent exemplar-based inpainting algorithms, by Zhaolin Lu et al. and Hao Guo et al. A number of examples on real images are demonstrated to evaluate the results of the algorithms using the Peak Signal-to-Noise Ratio (PSNR).
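The PSNR figure used for the comparison is straightforward to state; as an illustrative sketch (independent of either inpainting algorithm), for 8-bit images:

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    err = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy check: a constant offset of 16 grey levels on an 8-bit image
a = np.full((8, 8), 100, dtype=np.uint8)
b = np.full((8, 8), 116, dtype=np.uint8)
print(psnr(a, b))
```

Higher PSNR indicates a restoration closer to the reference image; values around 30 dB or more are commonly treated as good inpainting quality.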

  13. Theoretical Issues

    SciTech Connect

    Marc Vanderhaeghen

    2007-04-01

    The theoretical issues in the interpretation of the precision measurements of the nucleon-to-Delta transition by means of electromagnetic probes are highlighted. The results of these measurements are confronted with the state-of-the-art calculations based on chiral effective-field theories (EFT), lattice QCD, large-Nc relations, perturbative QCD, and QCD-inspired models. The link of the nucleon-to-Delta form factors to generalized parton distributions (GPDs) is also discussed.

  14. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    DOE PAGESBeta

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; et al

    2014-11-24

Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient radii cations have a higher transition pressure.

  15. An Empirical Comparison of Tree-Based Methods for Propensity Score Estimation

    PubMed Central

    Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie

    2013-01-01

Objective To illustrate the use of ensemble tree-based methods (random forest classification [RFC] and bagging) for propensity score estimation and to compare these methods with logistic regression, in the context of evaluating the effect of physical and occupational therapy on preschool motor ability among very low birth weight (VLBW) children. Data Source We used secondary data from the Early Childhood Longitudinal Study Birth Cohort (ECLS-B) between 2001 and 2006. Study Design We estimated the predicted probability of treatment using tree-based methods and logistic regression (LR). We then modeled the exposure-outcome relation using weighted LR models while considering covariate balance and precision for each propensity score estimation method. Principal Findings Among approximately 500 VLBW children, therapy receipt was associated with moderately improved preschool motor ability. Overall, ensemble methods produced the best covariate balance (Mean Squared Difference: 0.03–0.07) and the most precise effect estimates compared to LR (Mean Squared Difference: 0.11). The overall magnitude of the effect estimates was similar between RFC and LR estimation methods. Conclusion Propensity score estimation using RFC and bagging produced better covariate balance with increased precision compared to LR. Ensemble methods are a useful alternative to logistic regression to control confounding in observational studies. PMID:23701015
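The weighted outcome models described above rest on inverse-probability-of-treatment weighting from estimated propensity scores. A minimal sketch of that weighting step on synthetic data (the single confounder, the effect size of 2, and the use of the true propensity in place of an RFC, bagging, or LR estimate are all illustrative assumptions, not the ECLS-B analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one confounder x, binary treatment t, outcome y
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-x))              # treatment more likely for high x
t = rng.random(n) < p_true
y = 2.0 * t + 1.5 * x + rng.normal(size=n) # true treatment effect = 2

# Assume propensity scores e(x) have been estimated (here: the true model)
e = p_true

# Inverse-probability-of-treatment weights
w = np.where(t, 1 / e, 1 / (1 - e))

# Weighted difference in group means (Hajek estimator) recovers the effect
ate = np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])
print(ate)
```

An unweighted difference in means would be biased upward here, since the confounder raises both treatment probability and outcome; the weighting removes that imbalance regardless of which classifier produced the scores.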

  16. Conventional empirical law reverses in the phase transitions of 122-type iron-based superconductors

    SciTech Connect

    Yu, Zhenhai; Wang, Lin; Wang, Luhong; Liu, Haozhe; Zhao, Jinggeng; Li, Chunyu; Sinogeikin, Stanislav; Wu, Wei; Luo, Jianlin; Wang, Nanlin; Yang, Ke; Zhao, Yusheng; Mao, Ho -kwang

    2014-11-24

Phase transition of solid-state materials is a fundamental research topic in condensed matter physics, materials science and geophysics. It has been well accepted and widely proven that isostructural compounds containing different cations undergo the same pressure-induced phase transitions, but at progressively lower pressures as the cation radius increases. However, we discovered that this conventional law reverses in the structural transitions of 122-type iron-based superconductors. In this report, a combined low-temperature and high-pressure X-ray diffraction (XRD) measurement has identified the phase transition curves among the tetragonal (T), orthorhombic (O) and collapsed-tetragonal (cT) phases in the structural phase diagram of the iron-based superconductor AFe2As2 (A = Ca, Sr, Eu, and Ba). As a result, the cation-radius dependence of the phase transition pressure (T → cT) shows an opposite trend, in which the compounds with larger ambient radii cations have a higher transition pressure.

  17. Competence-based demands made of senior physicians: an empirical study to evaluate leadership competencies.

    PubMed

    Lehr, Bosco; Ostermann, Herwig; Schubert, Harald

    2011-01-01

As a result of increased economising in German hospitals, changes are evolving in the organisation and deployment of senior medical staff, and new demands are being made of senior hospital management. Leadership competencies in the training and development of physicians are of prime importance to the successful perception of managerial responsibilities. The present study investigates the actual and targeted demands of leadership made of senior medical staff in terms of how these demands are perceived. To this end, the demands of leadership were surveyed using a competence-based questionnaire and investigated with a view to potentials in professional development, taking the senior management of psychiatric hospitals in Germany as an example. Overall, the results show high ratings in personal performance, with the greatest significance attributed to value-oriented competence in the actual assessment of demands on leadership. Besides gender-specific differences in the actual assessments of single fields of competence, the greatest differences between the targeted and the actual demands are found in the competencies of self-management and communication. Competence-based core areas in leadership can be identified for the professional development of physicians and an adaptive mode of procedure deduced. PMID:22176981

  18. Empirically based guidelines for moderate drinking: 1-year results from three studies with problem drinkers.

    PubMed Central

    Sanchez-Craig, M; Wilkinson, D A; Davila, R

    1995-01-01

    OBJECTIVES. The study was conducted to refine guidelines on moderate drinking for problem drinkers, persons whose alcohol use is hazardous or harmful. Information on levels of alcohol intake unlikely to cause problems is useful for health professionals, educators, and policymakers. METHODS. Based on their reports of alcohol-related problems, participants in three studies assessing interventions to reduce heavy drinking (114 men, 91 women) were categorized as "problem-free" or "problem" drinkers at follow-up. Drinking measures were examined to identify patterns separating these outcome categories. RESULTS. Analyses using 95% confidence intervals for means on drinking measures showed that guidelines should be sex-specific. Based on analyses of positive and negative predictive value, sensitivity, and specificity, it is recommended that men consume no more than 4 standard drinks in any day and 16 drinks in any week, and that women consume no more than 3 drinks in any day and 12 drinks in any week. CONCLUSIONS. These guidelines are consistent with those from several official bodies and should be useful for advising problem drinkers when moderation is a valid treatment goal. Their applicability to the general population is unevaluated. PMID:7762717

  19. Theoretical and experimental investigations in characterizing and developing multiplexed diamond-based neutron spectrometers

    NASA Astrophysics Data System (ADS)

    Lukosi, Eric

In this work a novel technique of multiplexing diamond is presented, in which electronic-grade diamond plates are connected electrically in series and in parallel to increase the overall detection efficiency of diamond-based neutron detection systems. Theoretical results utilizing MCNPX indicate that further development of this simulation software is required to accurately predict the response of diamond to various interrogating neutron energies. However, the results were accurate enough to indicate that an equivalent diamond plate 1 cm thick only lowers the energy resolution of the 12C(n,α0)9Be peak from a 14.1 MeV interrogating neutron reference field by a factor of two compared to a single diamond plate 0.5 mm thick, while increasing the detection efficiency from 1.34 percent for a single diamond plate to 25.4 percent for the 1 cm thick plate. Further, the number of secondary neutron interactions is minimal, approximately 5.3 percent, with a detection medium of this size. It is also shown that photons can interfere with lower-energy neutron signals when multiplexing is used, especially at lower impinging photon energies, although the full-energy peak still does not dominantly present itself in the pulse height spectrum for multiplexed arrays approaching 1 cm with respect to the interrogating neutron reference field vector. Experimental results indicate that series multiplexing is not viable as a means of increasing the active detection volume of a diamond-based neutron spectrometer, because the diamond detection media in series interact with each other and with the input capacitor of a charge-sensitive preamplifier, where severe signal degradation is seen due to the equal impedances of the single-crystal diamond plates. However, parallel multiplexing is shown to have great promise, although there are limitations to this technique due to the large capacitance at the preamplifier input for a large parallel multiplexed array. Still, the latter

  20. Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study †

    PubMed Central

    Tîrnăucă, Cristina; Montaña, José L.; Ontañón, Santiago; González, Avelino J.; Pardo, Luis M.

    2016-01-01

    Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent’s actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches. PMID:27347956

  1. Psychological First Aid: A Consensus-Derived, Empirically Supported, Competency-Based Training Model

    PubMed Central

    Everly, George S.; Brown, Lisa M.; Wendelboe, Aaron M.; Abd Hamid, Nor Hashidah; Tallchief, Vicki L.; Links, Jonathan M.

    2014-01-01

    Surges in demand for professional mental health services occasioned by disasters represent a major public health challenge. To build response capacity, numerous psychological first aid (PFA) training models for professional and lay audiences have been developed that, although often concurring on broad intervention aims, have not systematically addressed pedagogical elements necessary for optimal learning or teaching. We describe a competency-based model of PFA training developed under the auspices of the Centers for Disease Control and Prevention and the Association of Schools of Public Health. We explain the approach used for developing and refining the competency set and summarize the observable knowledge, skills, and attitudes underlying the 6 core competency domains. We discuss the strategies for model dissemination, validation, and adoption in professional and lay communities. PMID:23865656

  2. An Empirical Pixel-Based CTE Correction for ACS/WFC

    NASA Astrophysics Data System (ADS)

    Anderson, Jay

    2010-07-01

This presentation summarizes a paper that has been recently published in PASP, Anderson & Bedin (2010). The paper describes our pixel-based approach to correcting ACS data for imperfect CTE (charge-transfer efficiency). We developed the approach by characterizing the size and profiles of trails behind warm pixels in dark exposures. We found an algorithm that simulates the way imperfect CTE impacts the readout process. To correct images for imperfect CTE, we use a forward-modeling procedure to determine the likely original distribution of charge, given the distribution that was read out. We applied this CTE-reconstruction algorithm to science images and found that the fluxes, positions and shapes of stars were restored to high fidelity. The ACS team is currently working to make this correction available to the public; they are also running tests to determine whether and how best to implement it in the pipeline.

  3. Joint multifractal analysis based on the partition function approach: analytical analysis, numerical simulation and empirical application

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Jie; Jiang, Zhi-Qiang; Gu, Gao-Feng; Xiong, Xiong; Zhou, Wei-Xing

    2015-10-01

    Many complex systems generate multifractal time series which are long-range cross-correlated. Numerous methods have been proposed to characterize the multifractal nature of these long-range cross correlations. However, several important issues about these methods are not well understood and most methods consider only one moment order. We study the joint multifractal analysis based on partition function with two moment orders, which was initially invented to investigate fluid fields, and derive analytically several important properties. We apply the method numerically to binomial measures with multifractal cross correlations and bivariate fractional Brownian motions without multifractal cross correlations. For binomial multifractal measures, the explicit expressions of mass function, singularity strength and multifractal spectrum of the cross correlations are derived, which agree excellently with the numerical results. We also apply the method to stock market indexes and unveil intriguing multifractality in the cross correlations of index volatilities.
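The two-moment partition function at the core of the method can be sketched numerically on deterministic binomial measures (a simplified stand-in for the paper's binomial measures with multifractal cross correlations; the box alignment, dyadic scales, and chosen weights p are assumptions of this sketch, not the authors' code):

```python
import numpy as np

def binomial_measure(p, levels):
    """Deterministic binomial (p-model) multifractal measure on 2**levels boxes."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.concatenate([mu * p, mu * (1 - p)])
    return mu

def joint_tau(mu1, mu2, q1, q2):
    """Joint mass exponent tau(q1, q2): slope of log Z versus log box size,
    where Z(q1, q2; s) = sum over boxes of mu1(box)**q1 * mu2(box)**q2."""
    n = len(mu1)
    log_sizes, log_z = [], []
    s = 1
    while s <= n:
        m1 = mu1.reshape(-1, s).sum(axis=1)   # coarse-grain to boxes of s cells
        m2 = mu2.reshape(-1, s).sum(axis=1)
        log_z.append(np.log(np.sum(m1 ** q1 * m2 ** q2)))
        log_sizes.append(np.log(s / n))       # box size relative to unit interval
        s *= 2
    return np.polyfit(log_sizes, log_z, 1)[0]

mu_a = binomial_measure(0.3, 12)
mu_b = binomial_measure(0.4, 12)
mu_u = binomial_measure(0.5, 12)
# For two uniform measures (p = 0.5), tau(q1, q2) = q1 + q2 - 1 exactly
print(joint_tau(mu_u, mu_u, 2.0, 1.0))
print(joint_tau(mu_a, mu_b, 2.0, 1.0))
```

For the binomial pair, the estimated slope matches the closed form -log2(p1**q1 * p2**q2 + (1-p1)**q1 * (1-p2)**q2), mirroring the paper's finding that the numerical partition-function results agree with the analytic expressions.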

  4. A prediction procedure for propeller aircraft flyover noise based on empirical data

    NASA Astrophysics Data System (ADS)

    Smith, M. H.

    1981-04-01

    Forty-eight different flyover noise certification tests are analyzed using multiple linear regression methods. A prediction model is presented based on this analysis, and the results compared with the test data and two other prediction methods. The aircraft analyzed include 30 single engine aircraft, 16 twin engine piston aircraft, and two twin engine turboprops. The importance of helical tip Mach number is verified and the relationship of several other aircraft, engine, and propeller parameters is developed. The model shows good agreement with the test data and is at least as accurate as the other prediction methods. It has the advantage of being somewhat easier to use since it is in the form of a single equation.

  5. Empirical estimation of consistency parameter in intertemporal choice based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2007-07-01

Impulsivity and inconsistency in intertemporal choice have been attracting attention in econophysics and neuroeconomics. Although loss of self-control by substance abusers is strongly related to their inconsistency in intertemporal choice, researchers in neuroeconomics and psychopharmacology have usually studied impulsivity in intertemporal choice using a discount rate (e.g. hyperbolic k), with little effort being expended on parameterizing subjects' inconsistency in intertemporal choice. Recent studies using Tsallis' statistics-based econophysics have found a discount function (i.e. the q-exponential discount function) which may continuously parameterize a subject's consistency in intertemporal choice. In order to examine the usefulness of the consistency parameter (0⩽q⩽1) of the q-exponential discounting function in behavioral studies, we experimentally estimated the consistency parameter q in Tsallis' statistics-based discounting by assessing the points of subjective equality (indifference points) at seven delays (1 week to 25 years) in humans (N=24). We observed that most (N=19) subjects' intertemporal choice was completely inconsistent (q = 0, i.e. hyperbolic discounting), the mean consistency (0⩽q⩽1) was smaller than 0.5, and only one subject had completely consistent intertemporal choice (q = 1, i.e. exponential discounting). There was no significant correlation between impulsivity and inconsistency parameters. Our results indicate that individual differences in consistency in intertemporal choice can be parameterized by introducing a q-exponential discount function, and that most people discount delayed rewards hyperbolically rather than exponentially (i.e. mean q is smaller than 0.5). Further, impulsivity and inconsistency in intertemporal choice can be considered separate behavioral tendencies. The usefulness of the consistency parameter q in psychopharmacological studies of addictive behavior was demonstrated in the present study.
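The q-exponential discount function referred to above takes the form V(D) = A / [1 + (1 - q) k D]^(1/(1-q)), recovering hyperbolic discounting at q = 0 and exponential discounting in the limit q -> 1. A minimal sketch (the reward amount, rate k, and delay are illustrative):

```python
import math

def q_exponential_discount(A, k, D, q):
    """Subjective value of reward A delayed by D under Tsallis' q-exponential
    discounting; q = 0 gives hyperbolic, q -> 1 gives exponential discounting."""
    if abs(1.0 - q) < 1e-9:
        return A * math.exp(-k * D)  # exponential limit
    return A / (1.0 + (1.0 - q) * k * D) ** (1.0 / (1.0 - q))

# q = 0 reduces to the hyperbolic form A / (1 + kD)
print(q_exponential_discount(100, 0.1, 10, 0.0))  # 100 / (1 + 1) = 50.0
```

Fitting q per subject to the observed indifference points is what yields the consistency parameter discussed in the abstract: the closer q is to 1, the more time-consistent the subject's discounting.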

  6. Empirical quantum mechanics

    NASA Astrophysics Data System (ADS)

    Nishimura, Hirokazu

    1996-06-01

Machida and Namiki developed a many-Hilbert-spaces formalism for dealing with the interaction between a quantum object and a measuring apparatus. Their mathematically rugged formalism was polished first by Araki from an operator-algebraic standpoint and then by Ozawa for Boolean quantum mechanics, which approaches a quantum system with a compatible family of continuous superselection rules from a notable and perspicacious viewpoint. On the other hand, Foulis and Randall set up a formal theory for the empirical foundation of all sciences, at the hub of which lies the notion of a manual of operations. They deem an operation the set of its possible outcomes and define a manual of operations as a family of partially overlapping operations. Their notion of a manual of operations was recast, from a category-theoretic standpoint, into that of a manual of Boolean locales by Nishimura, who looked upon an operation as the complete Boolean algebra of observable events. By considering a family of Hilbert spaces not over a single Boolean locale but over a manual of Boolean locales as a whole, Ozawa's Boolean quantum mechanics is elevated into empirical quantum mechanics, which is, roughly speaking, the study of quantum systems with incompatible families of continuous superselection rules. To this end, we are obliged to develop empirical Hilbert space theory. In particular, empirical versions of the square root lemma for bounded positive operators, the spectral theorem for (possibly unbounded) self-adjoint operators, and Stone's theorem for one-parameter unitary groups are established.

  7. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.

    PubMed

    Gigliotti, Eileen; Manister, Nancy N

    2012-10-01

    Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar. PMID:23087334

  8. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  9. Emotional competencies in geriatric nursing: empirical evidence from a computer based large scale assessment calibration study.

    PubMed

    Kaspar, Roman; Hartig, Johannes

    2016-03-01

The care of older people has been described as involving substantial emotion-related affordances. Scholars in vocational training and nursing disagree about whether emotion-related skills can be conceptualized and assessed as a professional competence. Studies on emotion work and empathy regularly neglect the multidimensionality of these phenomena and their relation to the care process, and are rarely conclusive with respect to nursing behavior in practice. To test the status of emotion-related skills as a facet of client-directed geriatric nursing competence, 402 final-year nursing students from 24 German schools responded to a 62-item computer-based test, 14 items of which were developed to represent emotion-related affordances. Multidimensional IRT modeling was employed to assess a potential subdomain structure. The emotion-related test items did not form a separate subdomain and were found to be discriminating across the whole competence continuum. Tasks concerning emotion work and empathy are therefore reliable indicators of various levels of client-directed nursing competence. Claims for a distinct emotion-related competence in geriatric nursing, however, appear excessive from a process-oriented perspective. PMID:26108300

  10. An empirical comparison of different LDA methods in fMRI-based brain states decoding.

    PubMed

    Xia, Maogeng; Song, Sutao; Yao, Li; Long, Zhiying

    2015-01-01

    Decoding brain states from response patterns with multivariate pattern recognition techniques is a popular method for detecting multivoxel patterns of brain activation. These patterns are informative with respect to a subject's perceptual or cognitive states. Linear discriminant analysis (LDA) cannot be directly applied to fMRI data analysis because of the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Although several improved LDA methods have been used in fMRI-based decoding, little is known regarding the relative performance of different LDA classifiers on fMRI data. In this study, we compared five LDA classifiers using both simulated data with varied noise levels and real fMRI data. The compared LDA classifiers include LDA combined with PCA (LDA-PCA), LDA with three types of regularizations (identity matrix, diagonal matrix and scaled identity matrix) and LDA with optimal-shrinkage covariance estimator using Ledoit and Wolf lemma (LDA-LW). The results indicated that LDA-LW was the most robust to noises. Moreover, LDA-LW and LDA with scaled identity matrix showed better stability and classification accuracy than the other methods. LDA-LW demonstrated the best overall performance. PMID:26405876
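Of the regularizations compared, the scaled-identity variant is the simplest to sketch. The following toy two-class implementation (synthetic data; the Ledoit-Wolf optimal-shrinkage estimator and all fMRI specifics are omitted) illustrates why a shrunken covariance is needed in the "few samples and large features" regime, where the pooled covariance itself is singular:

```python
import numpy as np

def shrinkage_lda_fit(X, y, lam=0.5):
    """Two-class LDA with a scaled-identity shrinkage of the pooled covariance:
    Sigma_reg = (1 - lam) * Sigma + lam * (trace(Sigma) / p) * I."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - m0, X1 - m1])
    p = X.shape[1]
    sigma = Xc.T @ Xc / (len(X) - 2)  # pooled covariance (singular when p > n)
    sigma_reg = (1 - lam) * sigma + lam * (np.trace(sigma) / p) * np.eye(p)
    w = np.linalg.solve(sigma_reg, m1 - m0)  # discriminant direction
    b = -0.5 * w @ (m0 + m1)
    return w, b

def shrinkage_lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Tiny synthetic "few samples, many features" problem
rng = np.random.default_rng(1)
n, p = 20, 50
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1] += 1.0  # shift class 1 in every feature
w, b = shrinkage_lda_fit(X, y, lam=0.9)
print((shrinkage_lda_predict(X, w, b) == y).mean())
```

With lam = 0, `np.linalg.solve` would fail or be numerically unstable here because the 50x50 pooled covariance is estimated from only 18 degrees of freedom; the shrinkage term makes the system well-conditioned, which is the common motivation behind all the regularized LDA variants the study compares.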

  11. Empirically based recommendations to support parents facing the dilemma of paediatric cadaver organ donation.

    PubMed

    Bellali, T; Papazoglou, I; Papadatou, D

    2007-08-01

    The aim of the study was to describe the challenges donor and non-donor parents encounter before, during, and after the organ donation decision, and to identify parents' needs and expectations from health care professionals. A further aim was to propose evidence-based recommendations for effectively introducing the option of donation, and supporting families through the grieving process. This study was undertaken as part of a larger research project investigating the experiences of Greek parents who consented or declined organ and tissue donation, using a qualitative methodology for data collection and analysis. The experiences of 22 Greek bereaved parents of 14 underage brain dead children were studied through semi-structured interviews. Parents' decision-making process was described as challenging and fraught with difficulties both before and after the donation period. Identified challenges were clustered into: (a) personal challenges, (b) conditions of organ request, and (c) interpersonal challenges. Parents' main concern following donation was the lack of information about transplantation outcomes. Findings led to a list of recommendations for nurses and other health professionals for approaching and supporting parents in making choices about paediatric organ donation that are appropriate to them, and for facilitating their adjustment to the sudden death of their underage child. PMID:17475498

  12. Turbulent inflow conditions for large-eddy simulation based on low-order empirical model

    NASA Astrophysics Data System (ADS)

    Perret, Laurent; Delville, Joël; Manceau, Rémi; Bonnet, Jean-Paul

    2008-07-01

    Generation of turbulent inflow boundary conditions is performed by interfacing an experimental database acquired by particle image velocimetry to a computational code. The proposed method ensures that the velocity fields introduced as inlet conditions in the computational code present correct one- and two-point spatial statistics and a realistic temporal dynamics. This approach is based on the use of the proper orthogonal decomposition (POD) to interpolate and extrapolate the experimental data onto the numerical mesh and to model both the temporal dynamics and the spatial organization of the flow in the inlet section. Realistic representation of the flow is achieved by extracting and modeling independently its coherent and incoherent parts. A low-order dynamical model is derived from the experimental database in order to provide the temporal evolution of the most energetic structures. The incoherent motion is modeled by employing time series of Gaussian random numbers to mimic the temporal evolution of higher order POD modes. Validation of the proposed method is provided by performing a large-eddy simulation of a turbulent plane mixing layer, which is compared to experimental results.
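In the discrete setting, the snapshot-POD step described above reduces to an SVD of the mean-subtracted snapshot matrix, with the leading modes carrying the coherent motion and the trailing modes the incoherent part. A self-contained sketch on synthetic data (the "flow field" below is an illustrative two-structure signal plus noise, not PIV data):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one velocity snapshot in time
rng = np.random.default_rng(0)
n_points, n_snap = 200, 60
t = np.linspace(0, 4 * np.pi, n_snap)
x = np.linspace(0, 1, n_points)
# Two coherent structures plus small-scale incoherent noise
U = (np.outer(np.sin(2 * np.pi * x), np.cos(t))
     + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(2 * t))
     + 0.01 * rng.normal(size=(n_points, n_snap)))

# POD via SVD of the fluctuation field: columns of phi are spatial modes,
# rows of vt are their temporal coefficients
U_mean = U.mean(axis=1, keepdims=True)
phi, s, vt = np.linalg.svd(U - U_mean, full_matrices=False)

energy = s ** 2 / np.sum(s ** 2)  # fractional energy per mode
print(energy[:2])
```

Truncating to the first few modes and evolving their temporal coefficients with a low-order model, while filling the remaining modes with Gaussian surrogates, is the split between coherent and incoherent parts that the abstract describes.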

  13. Empirical modeling of the fine particle fraction for carrier-based pulmonary delivery formulations

    PubMed Central

    Pacławski, Adam; Szlęk, Jakub; Lau, Raymond; Jachowicz, Renata; Mendyk, Aleksander

    2015-01-01

    In vitro study of the deposition of drug particles is commonly used during development of formulations for pulmonary delivery. The assay is demanding and complex, and depends on the properties of the drug and carrier particles (including size, surface characteristics, and shape), on the interactions between the drug and carrier particles, and on assay conditions (including flow rate, type of inhaler, and impactor). The aerodynamic properties of an aerosol are measured in vitro using impactors and in most cases are presented as the fine particle fraction, which is the mass percentage of drug particles with an aerodynamic diameter below 5 μm. In the present study, a model in the form of a mathematical equation was developed for prediction of the fine particle fraction. Feature selection was performed using the R-environment package "fscaret". The input vector was reduced from a total of 135 independent variables to 28. During the modeling stage, techniques such as artificial neural networks, genetic programming, rule-based systems, and fuzzy logic systems were used. The 10-fold cross-validation technique was used to assess the generalization ability of the models created. The model obtained had good predictive ability, which was confirmed by a root-mean-square error and normalized root-mean-square error of 4.9 and 11%, respectively. Moreover, validation of the model using external experimental data was performed, and resulted in a root-mean-square error and normalized root-mean-square error of 3.8 and 8.6%, respectively. PMID:25653522
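    The two error metrics quoted above can be computed as follows. A small sketch with illustrative fine-particle-fraction values (not the study's data), assuming NRMSE is normalized by the observed range, which is one common convention:

```python
import numpy as np

def rmse(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def nrmse_percent(observed, predicted):
    # Normalized by the observed range and expressed in percent.
    observed = np.asarray(observed, float)
    return 100.0 * rmse(observed, predicted) / (observed.max() - observed.min())

# Illustrative fine-particle-fraction values (mass %), invented for the example.
obs = [12.0, 25.5, 33.1, 41.7, 18.9]
pred = [14.1, 23.8, 35.0, 40.2, 20.5]
print(round(rmse(obs, pred), 2), round(nrmse_percent(obs, pred), 1))  # → 1.77 6.0
```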

  14. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. The noised ECG signals were decomposed with EEMD to obtain a series of intrinsic mode functions (IMFs); IMFs were then selected and reconstructed to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on the wavelet transform with an improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original signal. PMID:25219236
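    The EEMD step requires a dedicated routine (e.g. a package such as PyEMD), but the thresholding stage can be sketched directly. Below, `improved_threshold` implements one common compromise between hard and soft thresholding with an exponential shrinkage term; the exact improved function used in the paper may differ:

```python
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    return np.where(np.abs(w) > t, w, 0.0)

def improved_threshold(w, t, alpha=2.0):
    # A common compromise between hard and soft thresholding: the shrinkage
    # decays exponentially, approaching hard thresholding for large |w|,
    # which avoids the constant bias of soft thresholding.
    w = np.asarray(w, float)
    shrink = np.sign(w) * (np.abs(w) - t * np.exp(alpha * (t - np.abs(w))))
    return np.where(np.abs(w) > t, shrink, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.2, 4.0])   # illustrative wavelet coefficients
print(improved_threshold(coeffs, 1.0))
```

    Small coefficients (below the threshold) are zeroed as in both classic rules; large ones are left almost untouched, unlike soft thresholding.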

  15. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations.

    PubMed

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383
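    The EOF-based fusion idea can be illustrated on synthetic data: stack the two products, take the leading EOF (via SVD of the anomaly matrix), and average the reconstructed halves. This is a simplified sketch of the general approach, not the paper's modified algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic LE "products" (50 sites x 120 time steps) sharing one signal.
truth = np.outer(rng.standard_normal(50), np.sin(np.linspace(0.0, 12.0, 120)))
prod_a = truth + 0.3 * rng.standard_normal((50, 120))
prod_b = truth + 0.5 * rng.standard_normal((50, 120))

# EOF (i.e. PCA) analysis of the stacked anomaly matrix: the leading mode
# captures the variability common to both products.
stacked = np.vstack([prod_a, prod_b])
mean = stacked.mean(axis=1, keepdims=True)
eofs, sv, pcs = np.linalg.svd(stacked - mean, full_matrices=False)

recon = (eofs[:, :1] * sv[:1]) @ pcs[:1, :] + mean
merged = 0.5 * (recon[:50] + recon[50:])     # fused LE estimate

err = lambda f: np.sqrt(np.mean((f - truth) ** 2))
print(err(prod_b) > err(merged))             # fusion should reduce the error
```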

  16. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383

  17. Chemical and physical influences on aerosol activation in liquid clouds: an empirical study based on observations from the Jungfraujoch, Switzerland

    NASA Astrophysics Data System (ADS)

    Hoyle, C. R.; Webster, C. S.; Rieder, H. E.; Hammer, E.; Gysel, M.; Bukowiecki, N.; Weingartner, E.; Steinbacher, M.; Baltensperger, U.

    2015-06-01

    A simple empirical model to predict the number of aerosols which activate to form cloud droplets in a warm, free tropospheric cloud has been established, based on data from four summertime Cloud and Aerosol Characterisation Experiments (CLACE) campaigns at the Jungfraujoch (JFJ). It is shown that 76% of the observed variance in droplet numbers can be represented by a model accounting only for the number of potential CCN (defined as number of particles larger than 90 nm in diameter), while the mean errors in the model representation may be reduced by the addition of further explanatory variables, such as the mixing ratios of O3, CO and the height of the measurements above cloud base. The model has similar ability to represent the observed droplet numbers in each of the individual years, as well as for the two predominant local wind directions at the JFJ (north west and south east). Given the central European location of the JFJ, with air masses in summer being representative of the free troposphere with regular boundary layer in-mixing via convection, we expect that this model is applicable to warm, free tropospheric clouds over the European continent.
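    An empirical model of this kind is essentially a multiple linear regression, and "variance explained" is its R². A sketch on synthetic data; the coefficients, ranges, and units below are invented for illustration and are not the CLACE values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: droplet number driven mostly by particles > 90 nm (n90).
n = 300
n90 = rng.uniform(50, 800, n)          # potential CCN number, cm^-3
o3 = rng.uniform(30, 70, n)            # O3 mixing ratio, ppb
dz = rng.uniform(0, 400, n)            # height above cloud base, m
droplets = 0.5 * n90 + 1.5 * o3 + 0.05 * dz + 20 * rng.standard_normal(n)

def r_squared(X, y):
    # Ordinary least squares with an intercept; R^2 = 1 - residual/total variance.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_simple = r_squared(n90[:, None], droplets)                  # CCN proxy only
r2_full = r_squared(np.column_stack([n90, o3, dz]), droplets)  # extra covariates
print(round(r2_simple, 2), round(r2_full, 2))
```

    As in the abstract, the single-predictor model already explains most of the variance, and adding covariates reduces the remaining error.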

  18. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    NASA Astrophysics Data System (ADS)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black boxes, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. We therefore suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.

  19. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of traditional detectors: large size, contact measurement and low identification rate. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remainder IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
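    The IMF-selection and feature-extraction steps can be sketched as follows. The selection rule (keep IMFs whose correlation with the raw signal is at least the average correlation) is one common reading of such criteria, and the stand-in "IMFs" replace the output of a real EEMD routine:

```python
import numpy as np

def select_imfs(imfs, signal):
    # Correlation of each IMF with the raw signal; IMFs below the average
    # correlation are treated as noise-dominated and discarded (one common
    # selection rule; the paper's exact criterion may differ).
    corr = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
    return [imf for imf, c in zip(imfs, corr) if c >= corr.mean()]

def imf_features(imfs):
    # Energy and standard deviation of each retained IMF -> classifier input.
    return np.array([[np.sum(imf ** 2), np.std(imf)] for imf in imfs])

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400)
# Stand-in "IMFs": two tones plus weak noise (a real decomposition would
# come from an EEMD implementation).
imfs = [np.sin(2 * np.pi * 50 * t),
        0.8 * np.sin(2 * np.pi * 8 * t),
        0.05 * rng.standard_normal(400)]
signal = sum(imfs)

kept = select_imfs(imfs, signal)
print(len(kept), imf_features(kept).shape)
```

    The feature matrix (one energy/std pair per retained IMF) would then be fed to the PNN classifier.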

  20. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network.

    PubMed

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of traditional detectors: large size, contact measurement and low identification rate. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remainder IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985

  1. A UNIFIED EMPIRICAL MODEL FOR INFRARED GALAXY COUNTS BASED ON THE OBSERVED PHYSICAL EVOLUTION OF DISTANT GALAXIES

    SciTech Connect

    Bethermin, Matthieu; Daddi, Emanuele; Sargent, Mark T.; Elbaz, David; Mullaney, James; Pannella, Maurilio; Hezaveh, Yashar; Le Borgne, Damien; Buat, Veronique; Charmandaris, Vassilis; Lagache, Guilaine; Scott, Douglas

    2012-10-01

    We reproduce the mid-infrared to radio galaxy counts with a new empirical model based on our current understanding of the evolution of main-sequence (MS) and starburst (SB) galaxies. We rely on a simple spectral energy distribution (SED) library based on Herschel observations: a single SED for the MS and another one for SB, getting warmer with redshift. Our model is able to reproduce recent measurements of galaxy counts performed with Herschel, including counts per redshift slice. This agreement demonstrates the power of our 2-Star-Formation Modes (2SFM) decomposition in describing the statistical properties of infrared sources and their evolution with cosmic time. We discuss the relative contribution of MS and SB galaxies to the number counts at various wavelengths and flux densities. We also show that MS galaxies are responsible for a bump in the 1.4 GHz radio counts around 50 μJy. Material of the model (predictions, SED library, mock catalogs, etc.) is available online.

  2. Spectroscopic, colorimetric and theoretical investigation of Salicylidene hydrazine based reduced Schiff base and its application towards biologically important anions

    NASA Astrophysics Data System (ADS)

    Jana, Sankar; Dalapati, Sasanka; Alam, Md. Akhtarul; Guchhait, Nikhil

    A reduced Schiff base anionic receptor 1 [N,N'-bis-(2-hydroxy-5-nitro-benzyl)hydrazine] has been synthesized, characterized and reported as a selective chromogenic receptor for fluoride, acetate and phosphate anions over the other tested anions such as chloride, bromide, iodide and hydrogensulphite. Colorimetric naked-eye detection and UV-vis absorption spectroscopy were used to distinguish the recognition behaviours towards the various anions. The receptor-anion complexation occurs mainly via hydrogen-bonding interactions, which facilitate the appearance of a charge-transfer band in the UV-vis spectra and cause a large bathochromic shift as well as a naked-eye colour change. The complexation stoichiometry, binding constant and free energy change of complex formation were determined from a Benesi-Hildebrand plot. The binding constant and free energy change values indicate spontaneous complexation. The experimental results have been correlated with theoretical calculations for both the receptor and the complex by the Density Functional Theory (DFT) method, using the B3LYP hybrid functional and the 6-311++G(d,p) basis set.
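    For 1:1 complexation, the Benesi-Hildebrand treatment makes 1/ΔA linear in 1/[G], with the binding constant recovered as intercept/slope and ΔG = −RT ln K. A sketch with invented absorbance data (not the paper's measurements):

```python
import numpy as np

# Benesi-Hildebrand relation for 1:1 host-guest binding:
#   1/dA = 1/dA_max + 1/(K * dA_max * [G])
# so a plot of 1/dA against 1/[G] is linear with K = intercept / slope.
K_true, dA_max = 5.0e3, 0.80                  # M^-1 and a.u., invented values
guest = np.array([2e-4, 4e-4, 8e-4, 1.6e-3, 3.2e-3])   # anion conc., M
dA = dA_max * K_true * guest / (1.0 + K_true * guest)  # ideal 1:1 isotherm

slope, intercept = np.polyfit(1.0 / guest, 1.0 / dA, 1)
K = float(intercept / slope)                  # binding constant, M^-1
dG = -8.314 * 298.15 * np.log(K) / 1000.0     # kJ/mol at 25 °C
print(f"{K:.0f} {dG:.1f}")                    # → 5000 -21.1
```

    A negative ΔG, as here, is what the abstract means by spontaneous complexation.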

  3. Empirical study on neural network based predictive techniques for automatic number plate recognition

    NASA Astrophysics Data System (ADS)

    Shashidhara, M. S.; Indrakumar, S. S.

    2011-10-01

    The objective of this study is to provide an easy, accurate and effective technology for traffic control in Bangalore city, based on image processing techniques and laser beam technology. The core concept is automatic number plate recognition: when a vehicle breaks the traffic rules at a signal, its number plate is first recognized, the owner's details are fetched automatically from the database of the RTO office, and a notice with the penalty-related information is then sent to the vehicle owner's e-mail address and by SMS. Cameras with zooming options and laser beams are used to obtain accurate pictures, to which image processing techniques are applied: edge detection to delineate the vehicle, locating the number plate, and identifying the plate contents for further use, covering plain plate numbers, number plates with additional information, and number plates in different fonts. The database of the vehicle registration office is accessed to identify the name, address and other information for the vehicle number, and is updated to record the violation and penalty issues. A feed-forward artificial neural network is used for OCR. This procedure is particularly important for glyphs that are visually similar, such as '8' and '9', and results in training sets of between 25,000 and 40,000 samples. Overtraining of the neural network is prevented by Bayesian regularization. The neural network output value is set to 0.05 when the input is not the desired glyph, and 0.95 for correct input.

  4. An empirical approach to predicting long term behavior of metal particle based recording media

    NASA Technical Reports Server (NTRS)

    Hadad, Allan S.

    1991-01-01

    Alpha iron particles used for magnetic recording are prepared through a series of dehydration and reduction steps of alpha-Fe2O3-H2O, resulting in acicular, polycrystalline, body-centered cubic (bcc) alpha-Fe particles that are single magnetic domains. Since fine iron particles are pyrophoric by nature, stabilization processes had to be developed for iron particles to be considered a viable recording medium for long-term archival (i.e., 25+ years) information storage. The primary means of establishing stability is passivation, or controlled oxidation, of the iron particle's surface. Since iron particles used for magnetic recording are small, additional oxidation has a direct impact on performance, especially where archival storage of recorded information for long periods of time is important. Further stabilization chemistry and processes had to be developed to guarantee that iron particles could be considered a viable long-term recording medium. In an effort to retard the diffusion of iron ions through the oxide layer, other elements such as silicon, aluminum, and chromium have been added to the base iron to promote denser scale formation, to alleviate some of the non-stoichiometric behavior of the oxide, or both. The presence of water vapor has been shown to disrupt the passive layer, subsequently increasing the oxidation rate of the iron. A study was undertaken to examine the degradation in magnetic properties as a function of both temperature and humidity for silicon-containing iron particles between 50-120 deg C and 3-89 percent relative humidity. The methodology by which the experimental data were collected and analyzed, leading to predictive capability, is discussed.
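    A common way to turn such temperature-humidity aging data into predictive capability is to fit an Eyring/Peck-style law, ln k = ln A − Ea/(RT) + b·RH, and extrapolate to storage conditions. The abstract does not state its model, so the functional form and every constant below are assumptions for illustration:

```python
import numpy as np

R = 8.314                                      # J/(mol K)
lnA_true, Ea_true, b_true = 12.0, 55e3, 0.02   # invented "true" constants

# Synthetic accelerated-aging data over the study's stress ranges.
rng = np.random.default_rng(4)
T = rng.uniform(323.0, 393.0, 40)              # 50-120 deg C, in kelvin
RH = rng.uniform(3.0, 89.0, 40)                # percent relative humidity
ln_k = lnA_true - Ea_true / (R * T) + b_true * RH + 0.05 * rng.standard_normal(40)

# The model is linear in (lnA, Ea, b), so ordinary least squares suffices.
X = np.column_stack([np.ones_like(T), -1.0 / (R * T), RH])
(lnA, Ea, b), *_ = np.linalg.lstsq(X, ln_k, rcond=None)

# Extrapolate the degradation rate to archival conditions: 25 deg C, 50% RH.
k_archival = np.exp(lnA - Ea / (R * 298.15) + b * 50.0)
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, k(25 C, 50% RH) ~ {k_archival:.2e}")
```

    The fitted activation energy and humidity coefficient recover the generating constants closely, which is the basis for trusting the extrapolation.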

  5. Does community-based conservation shape favorable attitudes among locals? An empirical study from Nepal.

    PubMed

    Mehta, J N; Heinen, J T

    2001-08-01

    Like many developing countries, Nepal has adopted a community-based conservation (CBC) approach in recent years to manage its protected areas mainly in response to poor park-people relations. Among other things, under this approach the government has created new "people-oriented" conservation areas, formed and devolved legal authority to grassroots-level institutions to manage local resources, fostered infrastructure development, promoted tourism, and provided income-generating trainings to local people. Of interest to policy-makers and resource managers in Nepal and worldwide is whether this approach to conservation leads to improved attitudes on the part of local people. It is also important to know if personal costs and benefits associated with various intervention programs, and socioeconomic and demographic characteristics influence these attitudes. We explore these questions by looking at the experiences in Annapurna and Makalu-Barun Conservation Areas, Nepal, which have largely adopted a CBC approach in policy formulation, planning, and management. The research was conducted during 1996 and 1997; the data collection methods included random household questionnaire surveys, informal interviews, and review of official records and published literature. The results indicated that the majority of local people held favorable attitudes toward these conservation areas. Logistic regression results revealed that participation in training, benefit from tourism, wildlife depredation issue, ethnicity, gender, and education level were the significant predictors of local attitudes in one or the other conservation area. We conclude that the CBC approach has potential to shape favorable local attitudes and that these attitudes will be mediated by some personal attributes. PMID:11443381
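    The logistic-regression analysis reported above can be sketched as follows; the data are synthetic stand-ins for the survey responses, the predictor names are illustrative, and a real analysis would use a statistics package with significance tests:

```python
import numpy as np

def fit_logistic(X, y, iters=2000, lr=1.0):
    # Plain gradient-ascent logistic regression (a sketch; production work
    # would use an established statistics library).
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        beta += lr * X1.T @ (y - p) / len(y)
    return beta

rng = np.random.default_rng(5)
n = 2000
training = rng.integers(0, 2, n)       # took part in a training programme
tourism = rng.integers(0, 2, n)        # benefits from tourism
logit = -0.5 + 1.2 * training + 0.8 * tourism
favorable = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(np.column_stack([training, tourism]), favorable)
odds_ratios = np.exp(beta[1:])         # OR > 1: predictor raises the odds
print(np.round(odds_ratios, 1))
```

    Exponentiated coefficients are the odds ratios usually reported for such surveys: both invented predictors multiply the odds of a favorable attitude.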

  6. A theoretical individual-based model of Brown Ring Disease in Manila clams, Venerupis philippinarum

    NASA Astrophysics Data System (ADS)

    Paillard, Christine; Jean, Fred; Ford, Susan E.; Powell, Eric N.; Klinck, John M.; Hofmann, Eileen E.; Flye-Sainte-Marie, Jonathan

    2014-08-01

    An individual-based mathematical model was developed to investigate the biological and environmental interactions that influence the prevalence and intensity of Brown Ring Disease (BRD), a disease caused by the bacterial pathogen Vibrio tapetis in the Manila clam (Venerupis (= Tapes, = Ruditapes) philippinarum). V. tapetis acts as an external microparasite, adhering to the surface of the mantle edge and its secretion, the periostracal lamina, and causing the symptomatic brown deposit. Brown Ring Disease is atypical in that it leaves a shell scar that provides a unique tool for diagnosis in either live or dead clams. The model was formulated using laboratory and field measurements of BRD development in Manila clams, physiological responses of the clam to the pathogen, and the physiology of V. tapetis, as well as theoretical understanding of bacterial disease progression in marine shellfish. The simulation results obtained for an individual Manila clam were expanded to cohorts and populations using a probability distribution that prescribed a range of variability for three parameters: assimilation rate, clam hemocyte activity rate (the number of bacteria ingested per hemocyte per day), and clam calcification rate (a measure of the ability to recover by covering over the symptomatic brown ring deposit), which sensitivity studies indicated to be processes important in determining BRD prevalence and intensity. This approach allows concurrent simulation of individuals with a variety of different physiological capabilities (phenotypes), and hence, by implication, differing genotypic composition. Different combinations of the three variables provide robust estimates of the fate of individuals with particular characteristics in a population that consists of mixtures of all possible combinations. The BRD model was implemented using environmental observations from sites in Brittany, France, where Manila clams routinely exhibit BRD signs.

  7. Theoretical study of the noble metals on semiconductor surfaces and Ti-base shape memory alloys

    SciTech Connect

    Ding, Yungui

    1994-07-27

    The electronic and structural properties of the (√3 × √3)R30° Ag/Si(111) and (√3 × √3)R30° Au/Si(111) surfaces are investigated using first-principles total-energy calculations. We have tested almost all experimentally proposed structural models for both surfaces and found the energetically most favorable model for each of them. The lowest-energy model structure of the (√3 × √3)R30° Ag/Si(111) surface consists of a top layer of Ag atoms arranged as "honeycomb-chained-trimers" lying above a distorted "missing top layer" Si(111) substrate. The coverage of Ag is 1 monolayer (ML). We find that the honeycomb structure observed in STM images arises from the electronic charge densities of an empty surface band near the Fermi level. The electronic density of states of this model gives a "pseudo-gap" around the Fermi level, which is consistent with experimental results. The lowest-energy model for the (√3 × √3)R30° Au/Si(111) surface is a conjugate honeycomb-chained-trimer (CHCT-1) configuration, which consists of a top layer of trimers formed by 1 ML Au atoms lying above a "missing top layer" Si(111) substrate with a honeycomb-chained-trimer structure for its first layer. The structures of Au and Ag are in fact quite similar and belong to the same class of structural models. However, small variations in the structural details give rise to quite different observed STM images, as revealed in the theoretical calculations. The electronic charge density from bands around the Fermi level for the (√3 × √3)R30° Au/Si(111) surface also gives a good description of the images observed in STM experiments. First-principles calculations are performed to study the electronic and structural properties of a series of Ti-base binary alloys TiFe, TiNi, TiPd, TiMo, and TiAu in the B2 structure.

  8. GIS-based analysis and modelling with empirical and remotely-sensed data on coastline advance and retreat

    NASA Astrophysics Data System (ADS)

    Ahmad, Sajid Rashid

    With the understanding that far more research remains to be done on the development and use of innovative and functional geospatial techniques and procedures to investigate coastline changes this thesis focussed on the integration of remote sensing, geographical information systems (GIS) and modelling techniques to provide meaningful insights on the spatial and temporal dynamics of coastline changes. One of the unique strengths of this research was the parameterization of the GIS with long-term empirical and remote sensing data. Annual empirical data from 1941--2007 were analyzed by the GIS, and then modelled with statistical techniques. Data were also extracted from Landsat TM and ETM+ images. The band ratio method was used to extract the coastlines. Topographic maps were also used to extract digital map data. All data incorporated into ArcGIS 9.2 were analyzed with various modules, including Spatial Analyst, 3D Analyst, and Triangulated Irregular Networks. The Digital Shoreline Analysis System was used to analyze and predict rates of coastline change. GIS results showed the spatial locations along the coast that will either advance or retreat over time. The linear regression results highlighted temporal changes which are likely to occur along the coastline. Box-Jenkins modelling procedures were utilized to determine statistical models which best described the time series (1941--2007) of coastline change data. After several iterations and goodness-of-fit tests, second-order spatial cyclic autoregressive models, first-order autoregressive models and autoregressive moving average models were identified as being appropriate for describing the deterministic and random processes operating in Guyana's coastal system. The models highlighted not only cyclical patterns in advance and retreat of the coastline, but also the existence of short and long-term memory processes. 
Long-term memory processes could be associated with mudshoal propagation and stabilization.
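    The trend-plus-autoregressive modelling can be sketched in two steps: a linear fit for the long-term advance/retreat rate, then an AR(1) fit to the detrended residuals. This is a simplified stand-in for the full Box-Jenkins identification, run on an invented coastline series:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic annual coastline-position series (1941-2007), metres relative to
# a baseline: linear retreat plus AR(1) persistence (illustrative only).
years = np.arange(1941, 2008)
n = len(years)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.standard_normal()
position = -0.8 * (years - years[0]) + 5.0 * noise

# Step 1: linear trend, i.e. the long-term advance/retreat rate in m/yr.
rate, offset = np.polyfit(years, position, 1)

# Step 2: AR(1) fit to the detrended residuals via the lag-1 autocorrelation
# (the Yule-Walker estimate for a first-order model).
resid = position - (rate * years + offset)
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]

forecast = rate * 2008 + offset + phi * resid[-1]   # one-step-ahead, 2008
print(f"trend {rate:.2f} m/yr, AR(1) phi {phi:.2f}, 2008 forecast {forecast:.1f} m")
```

    A negative trend corresponds to retreat, and phi near its generating value (0.6 here) is the short-term memory the abstract refers to; higher-order and cyclic terms would be chosen by the usual identification and goodness-of-fit steps.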

  9. The Practice-Theory-Practice Model: The Establishment of the Theoretical Bases of a Case Study.

    ERIC Educational Resources Information Center

    Michael, Robert O.; Barbe, Richard H.

    The Practice-Theory-Practice Model (PTPM), a method designed to infuse theoretical perspectives into case study materials and to serve as a guide for examining change processes in institutions of higher education, is described. The PTPM considers the historical and experiential environment that acts upon an institution, its practices and its…

  10. Theoretical and experimental analysis of optical gyroscopes based on fiber ring resonators

    NASA Astrophysics Data System (ADS)

    Liu, Yao-ying; Xue, Chen-yang; Cui, Xiao-wen; Cui, Dan-feng; Wei, Li-ping; Wang, Yong-hua; Li, Yan-na

    2014-12-01

    Research on gyroscopes has a long history, but a thorough analysis of them is still lacking. In this paper, a detailed theoretical analysis of the fiber ring gyroscope and its gyroscope effect is presented; the performance characteristics of the optical resonator gyroscope, ranging from the transmission function Tfrr, finesse, Q-factor, gyro sensitivity, signal-to-noise ratio, and random walk to dynamic range, are all deduced in detail. In addition, a large number of experiments have been done to verify the deduced theoretical results. By simulating the dependence of the Q-factor on the number of fiber-ring turns and analyzing the frequency difference of the two counter-propagating waves (CW and CCW) in the rotating system, we conclude that with increasing turn number the resonance depth increases while the Q value decreases, and we obtain a high sensitivity of 0.21°/h, a random walk of 0.0035°/√h, and a Q factor of 8×10^6. Moreover, in the digital frequency-locked dual-rotation gyro experiments, an obvious step effect was observed, and the experimental curve of the frequency difference agrees well with the theoretical one. The research provides a good theoretical and experimental basis for the study of gyroscopes.
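    Several of the resonator quantities above follow from a few closed-form relations: FSR = c/(nL), FWHM = FSR/finesse, Q = ν/FWHM, and the Sagnac splitting Δf = 4AΩ/(λL). A sketch with hypothetical parameters, not the paper's configuration:

```python
import math

# Resonant fiber-optic gyro quantities; all parameters are invented examples.
n_eff = 1.45            # effective index of the fibre
L = 10.0                # ring length per turn, m
turns = 10
wavelength = 1.55e-6    # m
c = 2.998e8             # m/s

total_len = turns * L
fsr = c / (n_eff * total_len)            # free spectral range, Hz
finesse = 100.0                          # assumed resonator finesse
fwhm = fsr / finesse                     # resonance linewidth, Hz
q_factor = (c / wavelength) / fwhm       # optical Q = nu / FWHM

# Sagnac splitting between CW and CCW resonances: df = 4*A*Omega/(lambda*L).
radius = L / (2.0 * math.pi)             # radius of one circular turn
area = turns * math.pi * radius ** 2     # total enclosed area, m^2
omega = math.radians(10.0) / 3600.0      # 10 deg/h expressed in rad/s
df = 4.0 * area * omega / (wavelength * total_len)
print(f"FSR {fsr / 1e6:.2f} MHz, Q {q_factor:.1e}, Sagnac shift {df:.2f} Hz")
```

    The trade-off the abstract describes is visible here: more turns lengthen the ring, which narrows the FSR and deepens the resonance, while the achievable Q depends on both length and finesse.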

  11. How many base-pairs per turn does DNA have in solution and in chromatin? Some theoretical calculations.

    PubMed Central

    Levitt, M

    1978-01-01

    Calculations on a 20-base-pair segment of DNA double helix using empirical energy functions show that DNA can be bent smoothly and uniformly into a superhelix with a radius small enough (45 Å) to fit the dimensions of chromatin. The variation of energy with the twist of the base pairs about the helix axis shows that straight DNA free in solution is most stable with about 10 1/2 base pairs per turn, rather than 10 as observed in the solid state, whereas superhelical DNA in chromatin is most stable with about 10 base pairs per turn. This result, which has a simple physical interpretation, explains the pattern of nuclease cuts and the linkage number changes observed for DNA arranged in chromatin. PMID:273227

  12. Wind-blown Sand Electrification Inspired Triboelectric Energy Harvesting Based on Homogeneous Inorganic Materials Contact: A Theoretical Study and Prediction

    PubMed Central

    Hu, Wenwen; Wu, Weiwei; Zhou, Hao-miao

    2016-01-01

    Triboelectric nanogenerators (TENGs) based on contact electrification between heterogeneous materials have been widely studied. Inspired by wind-blown sand electrification, we design a novel kind of TENG based on size-dependent electrification using homogeneous inorganic materials. Based on the theory of asymmetric contact between homogeneous material surfaces, a calculation of surface charge density has been carried out. Furthermore, the theoretical output of the homogeneous-material-based TENG has been simulated. This work may therefore pave the way for fabricating TENGs without the limitation of the static sequence. PMID:26817411

  13. Wind-blown Sand Electrification Inspired Triboelectric Energy Harvesting Based on Homogeneous Inorganic Materials Contact: A Theoretical Study and Prediction

    NASA Astrophysics Data System (ADS)

    Hu, Wenwen; Wu, Weiwei; Zhou, Hao-Miao

    2016-01-01

    Triboelectric nanogenerators (TENGs) based on contact electrification between heterogeneous materials have been widely studied. Inspired by wind-blown sand electrification, we design a novel kind of TENG based on size-dependent electrification using homogeneous inorganic materials. Based on the theory of asymmetric contact between homogeneous material surfaces, a calculation of surface charge density has been carried out. Furthermore, the theoretical output of the homogeneous-material-based TENG has been simulated. This work may therefore pave the way for fabricating TENGs without the limitation of the static sequence.

  14. Wind-blown Sand Electrification Inspired Triboelectric Energy Harvesting Based on Homogeneous Inorganic Materials Contact: A Theoretical Study and Prediction.

    PubMed

    Hu, Wenwen; Wu, Weiwei; Zhou, Hao-Miao

    2016-01-01

    Triboelectric nanogenerator (TENG) based on contact electrification between heterogeneous materials has been widely studied. Inspired from wind-blown sand electrification, we design a novel kind of TENG based on size dependent electrification using homogeneous inorganic materials. Based on the asymmetric contact theory between homogeneous material surfaces, a calculation of surface charge density has been carried out. Furthermore, the theoretical output of homogeneous material based TENG has been simulated. Therefore, this work may pave the way of fabricating TENG without the limitation of static sequence. PMID:26817411

  15. Theoretical studies on the polarization-modulator-based single-side-band modulator used for generation of optical multicarrier.

    PubMed

    Li, Jianping; Zhang, Xuebing; Li, Zhaohui; Zhang, Xiaoguang; Li, Guifang; Lu, Chao

    2014-06-16

    This paper focuses on the study of the polarization-modulator-based single-side-band modulator (PSSBM) and its application to the generation of frequency-locked multicarriers. The principle and properties of the PSSBM are analyzed in detail with theoretical and simulation results. The PSSBM-based frequency-locked multicarrier generator (PSMCG) with a recirculating frequency-shifting loop is then demonstrated via simulation, and the results agree well with the theoretical analysis. According to the results, multiple frequency-locked carriers of high quality can be generated with the proposed PSMCG. The generated carriers have potential applications in scenarios such as optical transmission and microwave photonics. PMID:24977506
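    As a toy illustration of the recirculating frequency-shifting principle (all numeric parameters below are hypothetical, not taken from the paper): each round trip through the loop adds one copy of the seed carrier shifted by the shift frequency, so N passes yield N + 1 frequency-locked carriers on a uniform grid.

```python
import numpy as np

def rfs_comb(f0_hz, shift_hz, n_loops):
    """Toy model of a recirculating frequency-shifting loop: each round
    trip adds one copy of the seed carrier shifted by shift_hz, giving
    n_loops + 1 frequency-locked carriers on a uniform grid."""
    return np.array([f0_hz + k * shift_hz for k in range(n_loops + 1)])

# Hypothetical seed at 193.1 THz (C-band) with a 25 GHz shift frequency.
carriers = rfs_comb(f0_hz=193.1e12, shift_hz=25e9, n_loops=4)
# The carrier spacing is uniform and equal to the shift frequency.
```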

  16. Intelligence in Bali--A Case Study on Estimating Mean IQ for a Population Using Various Corrections Based on Theory and Empirical Findings

    ERIC Educational Resources Information Center

    Rindermann, Heiner; te Nijenhuis, Jan

    2012-01-01

    A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…

  17. Empirical metallicity-dependent calibrations of effective temperature against colours for dwarfs and giants based on interferometric data

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Liu, X.-W.; Yuan, H.-B.; Xiang, M.-S.; Chen, B.-Q.; Zhang, H.-W.

    2015-12-01

    We present empirical metallicity-dependent calibrations of effective temperature against colours for dwarfs of luminosity classes IV and V and for giants of luminosity classes II and III, based on a collection from the literature of about two hundred nearby stars with direct effective temperature measurements of better than 2.5 per cent. The calibrations are valid for an effective temperature range 3100-10 000 K for dwarfs of spectral types M5 to A0 and 3100-5700 K for giants of spectral types K5 to G5. A total of 21 colours for dwarfs and 18 colours for giants of bands of four photometric systems, i.e. the Johnson (UBVR_J I_J JHK), the Cousins (R_C I_C), the Sloan Digital Sky Survey (gr) and the Two Micron All Sky Survey (JHK_s), have been calibrated. Restricted by the metallicity range of the current sample, the calibrations are mainly applicable for disc stars ([Fe/H] ≳ -1.0). The normalized percentage residuals of the calibrations are typically 2.0 and 1.5 per cent for dwarfs and giants, respectively. Some systematic discrepancies at various levels are found between the current scales and those available in the literature (e.g. those based on the infrared flux method or spectroscopy). Based on the current calibrations, we have re-determined the colours of the Sun. We have also investigated the systematic errors in effective temperatures yielded by the current on-going large-scale low- to intermediate-resolution stellar spectroscopic surveys. We show that the calibration of colour (g - Ks) presented in this work provides an invaluable tool for the estimation of stellar effective temperature for those on-going or upcoming surveys.
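    Calibrations of this kind are typically low-order polynomials in colour and [Fe/H]; a minimal sketch of evaluating one, assuming the common parameterisation theta_eff = 5040 K / Teff (the coefficients below are purely illustrative, not the paper's fitted values):

```python
def teff_from_colour(colour, feh, coeffs):
    """Evaluate a metallicity-dependent colour-temperature calibration of
    the common polynomial form
        theta_eff = a0 + a1*C + a2*C^2 + a3*C*[Fe/H] + a4*[Fe/H] + a5*[Fe/H]^2,
    with theta_eff = 5040 K / Teff.  Coefficients here are illustrative only."""
    a0, a1, a2, a3, a4, a5 = coeffs
    theta = a0 + a1*colour + a2*colour**2 + a3*colour*feh + a4*feh + a5*feh**2
    return 5040.0 / theta

# Hypothetical coefficients for a (g - Ks)-like colour index.
coeffs = (0.54, 0.28, -0.01, 0.005, -0.02, 0.001)
t = teff_from_colour(colour=1.5, feh=0.0, coeffs=coeffs)
```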

  18. The dappled nature of causes of psychiatric illness: replacing the organic-functional/hardware-software dichotomy with empirically based pluralism.

    PubMed

    Kendler, K S

    2012-04-01

    Our tendency to see the world of psychiatric illness in dichotomous and opposing terms has three major sources: the philosophy of Descartes, the state of neuropathology in late nineteenth century Europe (when disorders were divided into those with and without demonstrable pathology and labeled, respectively, organic and functional), and the influential concept of computer functionalism wherein the computer is viewed as a model for the human mind-brain system (brain=hardware, mind=software). These mutually re-enforcing dichotomies, which have had a pernicious influence on our field, make a clear prediction about how 'difference-makers' (aka causal risk factors) for psychiatric disorders should be distributed in nature. In particular, are psychiatric disorders like our laptops, which when they dysfunction, can be cleanly divided into those with software versus hardware problems? I propose 11 categories of difference-makers for psychiatric illness from molecular genetics through culture and review their distribution in schizophrenia, major depression and alcohol dependence. In no case do these distributions resemble that predicted by the organic-functional/hardware-software dichotomy. Instead, the causes of psychiatric illness are dappled, distributed widely across multiple categories. We should abandon Cartesian and computer-functionalism-based dichotomies as scientifically inadequate and an impediment to our ability to integrate the diverse information about psychiatric illness our research has produced. Empirically based pluralism provides a rigorous but dappled view of the etiology of psychiatric illness. Critically, it is based not on how we wish the world to be but how the difference-makers for psychiatric illness are in fact distributed. PMID:22230881

  20. In silico structure-based screening of versatile P-glycoprotein inhibitors using polynomial empirical scoring functions.

    PubMed

    Shityakov, Sergey; Förster, Carola

    2014-01-01

    P-glycoprotein (P-gp) is an ATP (adenosine triphosphate)-binding cassette transporter that causes multidrug resistance of various chemotherapeutic substances by active efflux from mammalian cells. P-gp plays a pivotal role in limiting drug absorption and distribution in different organs, including the intestines and brain. Thus, the prediction of P-gp-drug interactions is of vital importance in assessing drug pharmacokinetic and pharmacodynamic properties. To find the strongest P-gp blockers, we performed an in silico structure-based screening of a P-gp inhibitor library (1,300 molecules) by the gradient optimization method, using polynomial empirical scoring (POLSCORE) functions. We report a strong correlation (r2=0.80, F=16.27, n=6, P<0.0157) of inhibition constants (Kiexp or pKiexp; experimental Ki or negative decimal logarithm of Kiexp) converted from experimental IC50 (half maximal inhibitory concentration) values with POLSCORE-predicted constants (KiPOLSCORE or pKiPOLSCORE), using a linear regression fitting technique. The hydrophobic interactions between P-gp and the selected drug substances were detected as the main forces responsible for the inhibition effect. The results showed that this scoring technique might be useful in the virtual screening and filtering of databases of drug-like compounds at the early stage of drug development processes. PMID:24711707
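    The reported fit is an ordinary linear regression of experimental against POLSCORE-predicted inhibition constants on the pKi scale; a minimal sketch with made-up Ki values (not the study's data):

```python
import numpy as np

# Hypothetical inhibition constants in mol/L (not the study's data).
ki_exp = np.array([2e-6, 8e-7, 3e-6, 5e-7, 1e-6, 4e-6])
ki_pred = np.array([1.5e-6, 9e-7, 2.5e-6, 6e-7, 1.2e-6, 5e-6])

# pKi = negative decimal logarithm of Ki.
pki_exp = -np.log10(ki_exp)
pki_pred = -np.log10(ki_pred)

# Ordinary least-squares fit and coefficient of determination r^2.
slope, intercept = np.polyfit(pki_pred, pki_exp, 1)
r2 = np.corrcoef(pki_pred, pki_exp)[0, 1] ** 2
```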

  1. Empirically-Based Crop Insurance for China: A Pilot Study in the Down-middle Yangtze River Area of China

    NASA Astrophysics Data System (ADS)

    Wang, Erda; Yu, Yang; Little, Bertis B.; Chen, Zhongxin; Ren, Jianqiang

    Factors that caused slow growth in crop insurance participation and its ultimate failure in China were multi-faceted, including high agricultural production risk, low participation rates, inadequate public awareness, a high loss ratio, and insufficient and interrupted government financial support. Thus, a clear and present need for data-driven analyses and empirically-based risk management exists in China. In the present investigation, agricultural production data for two crops (corn, rice) in five counties in Jiangxi and Hunan provinces were used to design a pilot crop insurance program in China. The program (1) provides 75% coverage, (2) reduces the farmer's premium rate by 55% compared to the catastrophic coverage most recently offered, and (3) uses the currently approved governmental premium subsidy level. Thus, a safety net for Chinese farmers that helps maintain agricultural production at a level of self-sufficiency, at less than half the cost of current plans, requires one change to the program: ≥80% of producers must participate in an area.
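    The premium arithmetic can be sketched as follows; only the 75% coverage level, the 55% premium-rate reduction, and the existence of a government subsidy come from the abstract, while the base rate and subsidy share below are hypothetical:

```python
def farmer_premium(expected_yield_value, premium_rate, rate_reduction, subsidy_share):
    """Premium paid by the farmer for an area-yield policy.
    The 55% rate reduction follows the abstract; the base premium rate
    and government subsidy share used below are illustrative only."""
    gross = expected_yield_value * premium_rate
    reduced = gross * (1.0 - rate_reduction)   # 55% premium-rate reduction
    return reduced * (1.0 - subsidy_share)     # government pays a share

# Hypothetical: 10,000 yuan insured value, 8% base rate, 50% subsidy.
paid = farmer_premium(10_000, 0.08, rate_reduction=0.55, subsidy_share=0.50)
```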

  2. Empirically Based Profiles of the Early Literacy Skills of Children With Language Impairment in Early Childhood Special Education.

    PubMed

    Justice, Laura; Logan, Jessica; Kaderavek, Joan; Schmitt, Mary Beth; Tompkins, Virginia; Bartlett, Christopher

    2015-01-01

    The purpose of this study was to empirically determine whether specific profiles characterize preschool-aged children with language impairment (LI) with respect to their early literacy skills (print awareness, name-writing ability, phonological awareness, alphabet knowledge); the primary interest was to determine if one or more profiles suggested vulnerability for future reading problems. Participants were 218 children enrolled in early childhood special education classrooms, 95% of whom received speech-language services. Children were administered an assessment of early literacy skills in the fall of the academic year. Based on results of latent profile analysis, four distinct literacy profiles were identified, with the single largest profile (55% of children) representing children with generally poor literacy skills across all areas examined. Children in the two low-risk categories had higher oral language skills than those in the high-risk and moderate-risk profiles. Across three of the four early literacy measures, children with language as their primary disability had higher scores than those with LI concomitant with other disabilities. These findings indicate that there are specific profiles of early literacy skills among children with LI, with about one half of children exhibiting a profile indicating potential susceptibility for future reading problems. PMID:24232733

  3. An improved empirical model of electron and ion fluxes at geosynchronous orbit based on upstream solar wind conditions

    DOE PAGES Beta

    Denton, M. H.; Henderson, M. G.; Jordanova, V. K.; Thomsen, M. F.; Borovsky, J. E.; Woodroffe, J.; Hartley, D. P.; Pitchford, D.

    2016-07-27

    In this study, a new empirical model of the electron fluxes and ion fluxes at geosynchronous orbit (GEO) is introduced, based on observations by Los Alamos National Laboratory (LANL) satellites. The model provides flux predictions in the energy range ~1 eV to ~40 keV, as a function of local time, energy, and the strength of the solar wind electric field (the negative product of the solar wind speed and the z component of the magnetic field). Given appropriate upstream solar wind measurements, the model provides a forecast of the fluxes at GEO with a ~1 h lead time. Model predictions are tested against in-sample observations from LANL satellites and also against out-of-sample observations from the Compact Environmental Anomaly Sensor II detector on the AMC-12 satellite. The model does not reproduce all structure seen in the observations. However, for the intervals studied here (quiet and storm times) the normalized root-mean-square deviation is < ~0.3. It is intended that the model will improve forecasting of the spacecraft environment at GEO and also provide improved boundary/input conditions for physical models of the magnetosphere.
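    The model's driver, the solar wind electric field, is the negative product of the solar wind speed and the z component of the interplanetary magnetic field; a minimal sketch of the unit handling (function name and unit choices are mine, for illustration):

```python
def solar_wind_ey(v_km_s, bz_nt):
    """Solar wind electric field Ey = -v * Bz (geoeffective for southward Bz).
    With v in km/s (1e3 m/s) and Bz in nT (1e-9 T), v*Bz is in units of
    1e-6 V/m = 1e-3 mV/m, so scale by 1e-3 to return mV/m."""
    return -v_km_s * bz_nt * 1e-3

# Southward IMF (Bz < 0) yields a positive, geoeffective driver.
ey = solar_wind_ey(v_km_s=400.0, bz_nt=-5.0)
```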

  4. Multi-fault diagnosis for rolling element bearings based on ensemble empirical mode decomposition and optimized support vector machines

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyuan; Zhou, Jianzhong

    2013-12-01

    This study presents a novel procedure based on ensemble empirical mode decomposition (EEMD) and optimized support vector machines (SVM) for multi-fault diagnosis of rolling element bearings. The vibration signal is adaptively decomposed into a number of intrinsic mode functions (IMFs) by EEMD. Two types of features, the EEMD energy entropy and the singular values of the matrix whose rows are IMFs, are extracted. EEMD energy entropy is used to determine whether the bearing has faults or not. If the bearing has faults, the singular values are input to a multi-class SVM optimized by inter-cluster distance in the feature space (ICDSVM) to identify the fault type. The proposed method was tested on a system with an electric motor carrying two rolling bearings, with 8 normal working conditions and 48 fault working conditions. Five groups of experiments were done to evaluate the effectiveness of the proposed method. The results show that the proposed method outperforms the other methods considered, both those mentioned in this paper and those published elsewhere.
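    The first feature, EEMD energy entropy, is the Shannon entropy of the normalized per-IMF energies; a minimal sketch, assuming the IMFs have already been produced by an EEMD decomposition (random stand-in data below, not a real decomposition):

```python
import numpy as np

def eemd_energy_entropy(imfs):
    """Energy entropy of a set of IMFs (one IMF per row):
    H = -sum(p_i * log(p_i)), where p_i is the i-th IMF's share
    of the total signal energy."""
    energies = np.sum(np.asarray(imfs) ** 2, axis=1)
    p = energies / energies.sum()
    return float(-np.sum(p * np.log(p)))

# Stand-in "IMFs" (random, for illustration); real ones come from EEMD.
rng = np.random.default_rng(0)
h = eemd_energy_entropy(rng.normal(size=(5, 1024)))
```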

  5. Combined magnetic and kinetic control of advanced tokamak steady state scenarios based on semi-empirical modelling

    NASA Astrophysics Data System (ADS)

    Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.

    2015-06-01

    This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.

  7. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    PubMed Central

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened the possibility of using this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques allowing MR measurements of the salient features of those theoretical models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood (magnetic susceptibility and MR relaxation) and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then, we describe a "through-space" effect: the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques (MR susceptometry and T2-based quantification of OEF) utilize the intravascular MR signal. Another technique, qBOLD, evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers

  8. Theoretical luminescence spectra in p-type superlattices based on InGaAsN

    PubMed Central

    2012-01-01

    In this work, we present a theoretical photoluminescence (PL) for p-doped GaAs/InGaAsN nanostructure arrays. We apply a self-consistent k·p method in the framework of the effective mass theory. Solving a full 8 × 8 Kane Hamiltonian, generalized to treat different materials in conjunction with the Poisson equation, we calculate the optical properties of these systems. The trends in the calculated PL spectra, due to many-body effects within the quasi-two-dimensional hole gas, are analyzed as a function of the acceptor doping concentration and the well width. Effects of temperature on the PL spectra are also investigated. This is the first attempt to show theoretical luminescence spectra for GaAs/InGaAsN nanostructures and can be used as a guide for the design of nanostructured devices such as optoelectronic devices, solar cells, and others. PMID:23113975

  9. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola

    2013-11-01

    Power spectra of global surface temperature (GST) records (available since 1850) reveal major periodicities at about 9.1, 10-11, 19-22 and 59-62 years. Equivalent oscillations are found in numerous multisecular paleoclimatic records. The Coupled Model Intercomparison Project 5 (CMIP5) general circulation models (GCMs), to be used in the IPCC Fifth Assessment Report (AR5, 2013), are analyzed and found not able to reconstruct this variability. In particular, from 2000 to 2013.5 a GST plateau is observed while the GCMs predicted a warming rate of about 2 °C/century. In contrast, the hypothesis that the climate is regulated by specific natural oscillations more accurately fits the GST records at multiple time scales. For example, a quasi 60-year natural oscillation simultaneously explains the 1850-1880, 1910-1940 and 1970-2000 warming periods, the 1880-1910 and 1940-1970 cooling periods and the post 2000 GST plateau. This hypothesis implies that about 50% of the ~ 0.5 °C global surface warming observed from 1970 to 2000 was due to natural oscillations of the climate system, not to anthropogenic forcing as modeled by the CMIP3 and CMIP5 GCMs. Consequently, the climate sensitivity to CO2 doubling should be reduced by half, for example from the 2.0-4.5 °C range (as claimed by the IPCC, 2007) to 1.0-2.3 °C with a likely median of ~ 1.5 °C instead of ~ 3.0 °C. Also modern paleoclimatic temperature reconstructions showing a larger preindustrial variability than the hockey-stick shaped temperature reconstructions developed in early 2000 imply a weaker anthropogenic effect and a stronger solar contribution to climatic changes. The observed natural oscillations could be driven by astronomical forcings. The ~ 9.1 year oscillation appears to be a combination of long soli-lunar tidal oscillations, while quasi 10-11, 20 and 60 year oscillations are typically found among major solar and heliospheric oscillations driven mostly by Jupiter and Saturn movements. Solar models based
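    The semi-empirical harmonic model amounts to regressing the temperature record on sinusoids at the stated periods; a sketch with synthetic data (the periods come from the abstract, while the amplitude and noise level are hypothetical):

```python
import numpy as np

periods = np.array([9.1, 10.4, 20.0, 60.0])  # years, from the abstract

def harmonic_design(t_years, periods):
    """Design matrix with an intercept plus a cos/sin pair per period."""
    cols = [np.ones_like(t_years, dtype=float)]
    for p in periods:
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t_years), np.sin(w * t_years)]
    return np.column_stack(cols)

t = np.arange(1850, 2014)
# Synthetic "temperature": a 60-year cycle plus noise (illustrative only).
rng = np.random.default_rng(1)
y = 0.12 * np.cos(2.0 * np.pi * t / 60.0) + 0.02 * rng.normal(size=t.size)
beta, *_ = np.linalg.lstsq(harmonic_design(t, periods), y, rcond=None)
# beta[7] is the cosine coefficient of the 60-year harmonic.
```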

  10. Empirical Study on Relationship Capital in Supply Chain-Based on Analysis of Enterprises in Hunan Province

    NASA Astrophysics Data System (ADS)

    Shan, Lu; Qiang-Bin, Ou-Yang

    Based on existing theories and studies, this thesis proposes a theoretical model describing the relationship between relationship capital in the supply chain and its influencing factors; EFA (exploratory factor analysis) and CFA (confirmatory factor analysis) are carried out on 188 sample data points. Evaluation of the structural model's goodness of fit, together with hypothesis testing, shows that there are four influencing factors for relationship capital in the supply chain, namely the capability and the reputation of cooperating companies in the supply chain, investment in specific assets, and transfer cost, each of which is positively correlated with relationship capital. This provides a decision-making basis for the practice of relationship capital in the supply chain.

  11. Photodesorption of diatomic molecules from surfaces: A theoretical approach based on first principles

    NASA Astrophysics Data System (ADS)

    Klüner, Thorsten

    2010-05-01

    experimental results are interpreted using empirical or semi-empirical models.

  12. Theoretical and Simulations-Based Modeling of Micellization in Linear and Branched Surfactant Systems

    NASA Astrophysics Data System (ADS)

    Mendenhall, Jonathan D.

    's and other micellization properties for a variety of linear and branched surfactant chemical architectures which are commonly encountered in practice. Single-component surfactant solutions are investigated, in order to clarify the specific contributions of the surfactant head and tail to the free energy of micellization, a quantity which determines the cmc and all other aspects of micellization. First, a molecular-thermodynamic (MT) theory is presented which makes use of bulk-phase thermodynamics and a phenomenological thought process to describe the energetics related to the formation of a micelle from its constituent surfactant monomers. Second, a combined computer-simulation/molecular-thermodynamic (CSMT) framework is discussed which provides a more detailed quantification of the hydrophobic effect using molecular dynamics simulations. A novel computational strategy to identify surfactant head and tail using an iterative dividing surface approach, along with simulated micelle results, is proposed. Force-field development for novel surfactant structures is also discussed. Third, a statistical-thermodynamic, single-chain, mean-field theory for linear and branched tail packing is formulated, which enables quantification of the specific energetic penalties related to confinement and constraint of surfactant tails within micelles. Finally, these theoretical and simulations-based strategies are used to predict the micellization behavior of 55 linear surfactants and 28 branched surfactants. Critical micelle concentration and optimal micelle properties are reported and compared with experiment, demonstrating good agreement across a range of surfactant head and tail types. In particular, the CSMT framework is found to provide improved agreement with experimental cmc's for the branched surfactants considered. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs mit.edu)

  13. A theoretical framework for whole-plant carbon assimilation efficiency based on metabolic scaling theory: a test case using Picea seedlings.

    PubMed

    Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J

    2015-06-01

    Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO2 and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate or dark respiratory rate, or overall growth rate and (iv) TCUE will scale isometrically with respect to the instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from the ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to assess the annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct
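    Prediction (ii), isometric scaling of photosynthetic flux with leaf mass, is conventionally tested by fitting the exponent b in Y = Y0 * M^b on log-log axes and checking that b is close to 1; a sketch with synthetic data (all values illustrative):

```python
import numpy as np

def scaling_exponent(mass, flux):
    """Fit log(flux) = log(Y0) + b*log(mass); isometry means b ~ 1."""
    b, log_y0 = np.polyfit(np.log(mass), np.log(flux), 1)
    return b

# Synthetic isometric data with small lognormal noise (arbitrary units).
rng = np.random.default_rng(2)
mass = np.exp(rng.uniform(0.0, 3.0, size=50))
flux = 2.0 * mass * np.exp(rng.normal(0.0, 0.05, size=50))
b = scaling_exponent(mass, flux)
```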

  14. South African maize production scenarios for 2055 using a combined empirical and process-based model approach

    NASA Astrophysics Data System (ADS)

    Estes, L.; Bradley, B.; Oppenheimer, M.; Wilcove, D.; Beukes, H.; Schulze, R. E.; Tadross, M.

    2011-12-01

    In South Africa, a semi-arid country with a diverse agricultural sector, climate change is projected to negatively impact staple crop production. Our study examines future impacts to maize, South Africa's most widely grown staple crop. Working at finer spatial resolution than previous studies, we combine the process-based DSSAT4.5 and the empirical MAXENT models to study future maize suitability. Climate scenarios were based on 9 GCMs run under SRES A2 and B1 emissions scenarios down-scaled (using self-organizing maps) to 5838 locations. Soil properties were derived from textural and compositional data linked to 26422 landforms. DSSAT was run with typical dryland planting parameters and mean projected CO2 values. MAXENT was trained using aircraft-observed distributions and monthly climatologies derived from downscaled daily records, with future rainfall increased by 10% to simulate CO2-related water-use efficiency gains. We assessed model accuracy based on correlations between model output and a satellite-derived yield proxy (integrated NDVI), and the overlap of modeled and observed maize field distributions. DSSAT yields were linearly correlated to mean integrated NDVI (R² = 0.38), while MAXENT's relationship was logistic. Binary suitability maps based on thresholding model outputs were slightly more accurate for MAXENT (88%) than for DSSAT (87%) when compared to current maize field distribution. We created 18 suitability maps for each model (9 GCMs × 2 SRES) using projected changes relative to historical suitability thresholds. Future maps largely agreed in eastern South Africa, but disagreed strongly in the semi-arid west. Using a 95% confidence criterion (17 models agree), MAXENT showed a 241,305 km² suitability loss relative to its modeled historical suitability, while DSSAT showed a potential loss of only 112,446 km².
Even the smaller potential loss highlighted by DSSAT is uncertain, given that DSSAT's mean (across all 18 climate scenarios) projected yield
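The 95% confidence criterion described above (a cell counts as a confident suitability loss only when at least 17 of the 18 GCM × SRES scenario maps agree) amounts to a per-cell vote count. A minimal sketch, with hypothetical function and data names:

```python
# Sketch of the ensemble-agreement rule described above: a cell is a
# confident loss only when at least 17 of the 18 scenario maps
# (9 GCMs x 2 SRES) flag it. All names here are hypothetical.

def confident_loss_cells(loss_maps, min_agreement=17):
    """loss_maps: list of 18 equal-length lists of 0/1 loss flags per cell."""
    n_cells = len(loss_maps[0])
    agreement = [sum(m[i] for m in loss_maps) for i in range(n_cells)]
    return [count >= min_agreement for count in agreement]

# Tiny illustration with 3 cells and 18 scenario maps:
maps = [[1, 1, 0]] * 17 + [[1, 0, 0]]  # cell 0: 18 agree, cell 1: 17, cell 2: 0
print(confident_loss_cells(maps))       # [True, True, False]
```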

  15. Stellar population synthesis models between 2.5 and 5 μm based on the empirical IRTF stellar library

    NASA Astrophysics Data System (ADS)

    Röck, B.; Vazdekis, A.; Peletier, R. F.; Knapen, J. H.; Falcón-Barroso, J.

    2015-05-01

We present the first single-burst stellar population models in the infrared wavelength range between 2.5 and 5 μm which are exclusively based on empirical stellar spectra. Our models take as input 180 spectra from the stellar IRTF (Infrared Telescope Facility) library. Our final single-burst stellar population models are calculated based on two different sets of isochrones and various types of initial mass functions of different slopes, ages larger than 1 Gyr and metallicities between [Fe/H] = -0.70 and 0.26. They are made available online to the scientific community on the MILES web page. We analyse the behaviour of the Spitzer [3.6]-[4.5] colour calculated from our single stellar population models and find only slight dependences on both metallicity and age. When comparing to the colours of observed early-type galaxies, we find good agreement for older, more massive galaxies that resemble a single-burst population. Younger, less massive and more metal-poor galaxies show redder colours with respect to our models. This mismatch can be explained by a more extended star formation history of these galaxies which includes a metal-poor and/or young population. Moreover, the colours derived from our models agree very well with most other models available in this wavelength range. We confirm that the mass-to-light ratio determined in the Spitzer [3.6] μm band changes much less as a function of both age and metallicity than in the optical bands.

  16. Photothermal Deflection Experiments: Comparison of Existing Theoretical Models and Their Applications to Characterization of -Based Thin Films

    NASA Astrophysics Data System (ADS)

    Korte, Dorota; Franko, Mladen

    2014-12-01

A method for determination of thermooptical, transport, and structural parameters of -based thin films is presented. The measurements were conducted using beam deflection spectroscopy (BDS), with supporting theoretical analysis performed in the framework of complex geometrical optics, providing a novel method of BDS data modeling. It was observed that the material's thermal parameters strongly depend on the sample properties that determine its photocatalytic activity, such as the energy bandgap, carrier lifetime, surface structure, or porosity. For this reason, a procedure for fitting the theoretical dependence to the experimental data was developed to determine the sample's thermal parameters, from which information about its structure was then derived. The obtained results were compared to those based on the geometrical and wave optics approaches that are currently widely used for this purpose. It was demonstrated that the choice of the proper model is a crucial point when performing this type of analysis.

  17. Assessing changes to South African maize production areas in 2055 using empirical and process-based crop models

    NASA Astrophysics Data System (ADS)

    Estes, L.; Bradley, B.; Oppenheimer, M.; Beukes, H.; Schulze, R. E.; Tadross, M.

    2010-12-01

    Rising temperatures and altered precipitation patterns associated with climate change pose a significant threat to crop production, particularly in developing countries. In South Africa, a semi-arid country with a diverse agricultural sector, anthropogenic climate change is likely to affect staple crops and decrease food security. Here, we focus on maize production, South Africa’s most widely grown crop and one with high socio-economic value. We build on previous coarser-scaled studies by working at a finer spatial resolution and by employing two different modeling approaches: the process-based DSSAT Cropping System Model (CSM, version 4.5), and an empirical distribution model (Maxent). For climate projections, we use an ensemble of 10 general circulation models (GCMs) run under both high and low CO2 emissions scenarios (SRES A2 and B1). The models were down-scaled to historical climate records for 5838 quinary-scale catchments covering South Africa (mean area = 164.8 km2), using a technique based on self-organizing maps (SOMs) that generates precipitation patterns more consistent with observed gradients than those produced by the parent GCMs. Soil hydrological and mechanical properties were derived from textural and compositional data linked to a map of 26422 land forms (mean area = 46 km2), while organic carbon from 3377 soil profiles was mapped using regression kriging with 8 spatial predictors. CSM was run using typical management parameters for the several major dryland maize production regions, and with projected CO2 values. The Maxent distribution model was trained using maize locations identified using annual phenology derived from satellite images coupled with airborne crop sampling observations. Temperature and precipitation projections were based on GCM output, with an additional 10% increase in precipitation to simulate higher water-use efficiency under future CO2 concentrations. The two modeling approaches provide spatially explicit projections of

  18. Theoretical links supporting the use of problem-based learning in the education of the nurse practitioner.

    PubMed

    Chikotas, Noreen Elaine

    2008-01-01

The need to evaluate current strategies in educating the advanced practice nurse, specifically the nurse practitioner, is becoming more and more imperative due to the ever-changing health care environment. This article addresses the role of problem-based learning (PBL) as an instructional strategy in educating and preparing the nurse practitioner for future practice. Two theoretical frameworks supporting PBL, andragogy and constructivism, are presented as important to the use of PBL in the education of the nurse practitioner. PMID:19244802

  19. Experimental and theoretical bases for mechanisms of antigen discrimination by T cells

    PubMed Central

    Kajita, Masashi K.; Yokota, Ryo; Aihara, Kazuyuki; Kobayashi, Tetsuya J.

    2015-01-01

Interaction restricted to specific molecules is a prerequisite for the accurate operation of biochemical reactions in a cell, where a large bulk of background molecules exists. While structural specificity is a well-established mechanism for specific interaction, biophysical and biochemical experiments indicate that this mechanism alone is not sufficient to account for antigen discrimination by T cells. In addition, antigen discrimination by T cells exhibits three intriguing properties beyond specificity: sensitivity, speed, and concentration compensation. In this work, we review experimental and theoretical studies on antigen discrimination, focusing on these four properties, and outline future directions toward understanding the fundamental principles of molecular discrimination. PMID:27493520

  20. Developing Theoretically Based and Culturally Appropriate Interventions to Promote Hepatitis B Testing in 4 Asian American Populations, 2006–2011

    PubMed Central

    Bastani, Roshan; Glenn, Beth A.; Taylor, Victoria M.; Nguyen, Tung T.; Stewart, Susan L.; Burke, Nancy J.; Chen, Moon S.

    2014-01-01

    Introduction Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Methods Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Results Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. Conclusions The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors. PMID:24784908

  1. Why Psychology Cannot be an Empirical Science.

    PubMed

    Smedslund, Jan

    2016-06-01

    The current empirical paradigm for psychological research is criticized because it ignores the irreversibility of psychological processes, the infinite number of influential factors, the pseudo-empirical nature of many hypotheses, and the methodological implications of social interactivity. An additional point is that the differences and correlations usually found are much too small to be useful in psychological practice and in daily life. Together, these criticisms imply that an objective, accumulative, empirical and theoretical science of psychology is an impossible project. PMID:26712604

  2. Experimental and Theoretical Characterization of Artificial Muscles Based on Charge Injection in Carbon Nanotubes

    NASA Astrophysics Data System (ADS)

    Baughman, Ray

    2002-03-01

    We theoretically predicted that carbon nanotubes have the potential of providing at least an order of magnitude higher work capacity per cycle and stress generation capability, as compared with any prior-art material for directly converting electrical energy to mechanical energy. Experimental and theoretical results expand understanding of the nanotube actuation mechanism, and demonstrate that improvements in nanotube sheet and macrofiber properties correspondingly increase actuator performance. The actuation mechanism is electrochemical double-layer charge injection, which we show is dominated by band structure effects for low degrees of charge transfer and by intra-tube electrostatic repulsion when charge transfer is large. Measurements indicate that charge transfer is limited to the outer nanotubes in a nanotube bundle, which limits present performance (as does creep, nanotube misalignment, and poor inter-bundle stress transfer). Nevertheless, measured actuation stresses are 100 times that of natural muscle, and the measured gravimetric work-per-cycle (fixed load condition) is already much higher than for the hard ferroelectrics. Efforts to eliminate these problems (via debundling, nanotube welding, and improvements in nanotube spinning methods) will be described, together with the initial demonstration and analysis of chemically powered carbon nanotube muscles.

  3. Theoretical performance of solar cell based on mini-bands quantum dots

    SciTech Connect

    Aly, Abou El-Maaty M. E-mail: ashraf.nasr@gmail.com; Nasr, A. E-mail: ashraf.nasr@gmail.com

    2014-03-21

The tremendous amount of research in solar energy is directed toward intermediate band solar cells because of their advantages over conventional solar cells. The latter have lower efficiency because photons with energy below the bandgap cannot excite mobile carriers from the valence band to the conduction band. On the other hand, if a mini intermediate band is introduced between the valence and conduction bands, then the smaller-energy photons can be used to promote charge carrier transfer to the conduction band, and thereby the total current increases while a large open circuit voltage is maintained. In this article, the influence of the new band on the power conversion efficiency of a quantum dot intermediate band solar cell structure is theoretically investigated. The time-independent Schrödinger equation is used to determine the optimum width and location of the intermediate band. Accordingly, the achievement of maximum efficiency by changing the width of the quantum dots and the barrier distances is studied. Theoretical determination of the power conversion efficiency under two different ranges of QD width is presented. From the obtained results, the maximum power conversion efficiency is about 70.42%, obtained for a simple cubic quantum dot crystal under fully concentrated light. It is strongly dependent on the width of the quantum dots and the barrier distances.

  4. Information-Theoretical Quantifier of Brain Rhythm Based on Data-Driven Multiscale Representation

    PubMed Central

    Choi, Young-Seok

    2015-01-01

This paper presents a data-driven multiscale entropy measure to reveal the scale-dependent information quantity of electroencephalogram (EEG) recordings. This work is motivated by previous observations on the nonlinear and nonstationary nature of EEG over multiple time scales. Here, a new framework of entropy measures considering changing dynamics over multiple oscillatory scales is presented. First, to deal with nonstationarity over multiple scales, the EEG recording is decomposed by applying empirical mode decomposition (EMD), which is known to be effective for extracting the constituent narrowband components without a predetermined basis. Calculating the Renyi entropy of the probability distributions of the intrinsic mode functions extracted by EMD then leads to a data-driven multiscale Renyi entropy. To validate the performance of the proposed entropy measure, actual EEG recordings from rats (n = 9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Simulation and experimental results demonstrate that the use of the multiscale Renyi entropy leads to better discriminative capability of the injury levels and improved correlations with the neurological deficit evaluation 72 hours after cardiac arrest, thus suggesting an effective diagnostic and prognostic tool. PMID:26380297
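The entropy step described above can be sketched as the Renyi entropy of an amplitude histogram for each intrinsic mode function. The EMD decomposition itself is assumed to have been done (e.g. by a signal-processing library); the function name, bin count, and the order alpha = 2 are illustrative choices, not the authors' exact settings:

```python
import math
import random

# Hedged sketch: Renyi entropy of the amplitude distribution of one IMF,
# estimated from a simple histogram. Hypothetical parameter choices.

def renyi_entropy(samples, alpha=2.0, bins=16):
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0          # guard against a constant signal
    counts = [0] * bins
    for x in samples:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    probs = [c / len(samples) for c in counts if c]
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# A broadly distributed signal has higher entropy than a constant one:
random.seed(0)
noisy = [random.random() for _ in range(1000)]
flat = [0.0] * 1000
print(renyi_entropy(noisy) > renyi_entropy(flat))  # True
```

In the multiscale scheme, this entropy would be computed once per IMF, giving one value per oscillatory scale.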

  5. A hybrid model for PM₂.₅ forecasting based on ensemble empirical mode decomposition and a general regression neural network.

    PubMed

    Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling

    2014-10-15

Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid particles or liquid droplets that are sufficiently small to penetrate deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose the original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from a principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model, with fast and accurate results, can be used to develop rapid air quality warning systems. PMID:25089688
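The GRNN prediction stage of such a hybrid model is, at its core, a Nadaraya-Watson kernel regression with a Gaussian kernel. A minimal sketch, assuming EEMD preprocessing has already produced the per-IMF training series; the function name and the smoothing parameter sigma are hypothetical:

```python
import math

# Hedged GRNN sketch: predicted value is a kernel-weighted average of the
# training targets. sigma is an illustrative spread parameter.

def grnn_predict(train_x, train_y, query, sigma=0.5):
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, query))
                        / (2.0 * sigma ** 2)) for x in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# One-day-ahead style usage: a query near the first sample gets a
# prediction close to that sample's target.
xs = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
ys = [10.0, 20.0, 30.0]
print(grnn_predict(xs, ys, [0.1, 0.1]))
```

In the hybrid scheme, one such predictor would be fit per IMF and the per-IMF forecasts recombined into the final PM₂.₅ prediction.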

  6. Development of An Empirical Water Quality Model for Stormwater Based on Watershed Land Use in Puget Sound

    SciTech Connect

    Cullinan, Valerie I.; May, Christopher W.; Brandenberger, Jill M.; Judd, Chaeli; Johnston, Robert K.

    2007-03-29

The Sinclair and Dyes Inlet watershed is located on the west side of Puget Sound in Kitsap County, Washington, U.S.A. (Figure 1). The Puget Sound Naval Shipyard (PSNS), U.S. Environmental Protection Agency (USEPA), the Washington State Department of Ecology (WA-DOE), Kitsap County, the City of Bremerton, the City of Bainbridge Island, the City of Port Orchard, and the Suquamish Tribe have joined in a cooperative effort to evaluate water-quality conditions in the Sinclair-Dyes Inlet watershed and correct identified problems. A major focus of this project, known as Project ENVVEST, is to develop Water Clean-up (TMDL) Plans for constituents listed on the 303(d) list within the Sinclair and Dyes Inlet watershed. Segments within the Sinclair and Dyes Inlet watershed were listed on the State of Washington’s 1998 303(d) list because of fecal coliform contamination in marine water, metals in sediment and fish tissue, and organics in sediment and fish tissue (WA-DOE 2003). Stormwater loading was identified by ENVVEST as one potential source of sediment contamination that lacked sufficient data for a contaminant mass balance calculation for the watershed. This paper summarizes the development of an empirical model for estimating contaminant concentrations in all streams discharging into Sinclair and Dyes Inlets based on watershed land use, 18 storm events, and wet/dry season baseflow conditions between November 2002 and May 2005. Stream pollutant concentrations, along with estimates for outfalls and surface runoff, will be used in estimating the loading and ultimately in establishing a Water Cleanup Plan (TMDL) for the Sinclair-Dyes Inlet watershed.

  7. Simulation of Long Lived Tracers Using an Improved Empirically-Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, Eric L.; Jackman, Charles H.; Stolarski, Richard S.; Considine, David B.

    1998-01-01

We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry assessment model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects of gravity waves and equatorial Kelvin waves. We will present an overview of the new algorithm, and show various model-data comparisons of long-lived tracers as part of the model validation. We will also show how the new algorithm gives substantially better agreement with observations compared to our previous model transport. The new model captures much of the qualitative structure and seasonal variability observed in methane, water vapor, and total ozone. These include: isolation of the tropics and the winter polar vortex, the well-mixed surf-zone region of the winter sub-tropics and mid-latitudes, and the propagation of seasonal signals in the tropical lower stratosphere. Model simulations of carbon-14 and strontium-90 compare fairly well with observations in reproducing the peak in mixing ratio at 20-25 km, and the decrease in mixing ratio with altitude above 25 km. We also ran time-dependent simulations of SF6, from which the model mean age of air values were derived. The oldest air (5.5 to 6 years) occurred in the high-latitude upper stratosphere during fall and early winter of both hemispheres, and in the southern hemisphere lower stratosphere during late winter and early spring. The latitudinal gradients of the mean ages also compare well with ER-2 aircraft observations in the lower stratosphere.
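The mean-age diagnostic mentioned above rests on a simple idea: for a tracer like SF6 that grows steadily in the troposphere, the mean age at a stratospheric location is roughly the time lag at which the tropospheric reference series last matched the local mixing ratio. A sketch with purely illustrative numbers and hypothetical names:

```python
# Hedged sketch of the SF6 lag-time age estimate: find when the
# tropospheric reference reached the observed local value (linear
# interpolation), and report the lag from the end of the record.

def mean_age(local_value, ref_times, ref_values):
    for (t0, v0), (t1, v1) in zip(zip(ref_times, ref_values),
                                  zip(ref_times[1:], ref_values[1:])):
        if v0 <= local_value <= v1:
            t_match = t0 + (t1 - t0) * (local_value - v0) / (v1 - v0)
            return ref_times[-1] - t_match
    return None  # local value outside the reference record

years = [1990, 1992, 1994, 1996, 1998, 2000]
sf6 = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]   # pptv, idealized linear growth
print(mean_age(3.25, years, sf6))       # 5.0 (years of lag)
```

Real estimates must also account for nonlinear tracer growth and mixing, which is why full time-dependent simulations are run in the model.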

  8. Theoretical Study of SOA-Based Wavelength Conversion with NRZ and RZ Format at 40 Gb/s

    NASA Astrophysics Data System (ADS)

    Dong, Jian-Ji; Zhang, Xin-Liang; Fu, Song-Nian; Shum, Ping; Huang, De-Xiu

    2007-04-01

We theoretically discuss 40 Gb/s semiconductor optical amplifier (SOA)-based wavelength conversion (WC) using a detuned optical bandpass filter, based on the ultrafast dynamic characteristics of the SOA. Both inverted and non-inverted WC are obtained by shifting the filter central wavelength with respect to the probe wavelength when the input data signal is in return-to-zero (RZ) format. However, we can obtain format conversion from nonreturn-to-zero (NRZ) to pseudo-return-to-zero (PRZ) and inverted WC when the input signal is in NRZ format.

  9. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: I, theoretical foundations.

    PubMed

    Pezzotti, Giuseppe; Zhu, Wenliang; Boffelli, Marco; Adachi, Tetsuya; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato

    2015-05-01

The Raman spectroscopic method has been quantitatively applied to the analysis of local crystallographic orientation in both single-crystal hydroxyapatite and human teeth. Raman selection rules for all the vibrational modes of the hexagonal structure were expanded into explicit functions of Euler angles in space and six Raman tensor elements (RTE). A theoretical treatment has also been put forward according to the orientation distribution function (ODF) formalism, which allows one to resolve the statistical orientation patterns of the nm-sized hydroxyapatite crystallites probed by the Raman microprobe. Closed-form solutions could be obtained for the Euler angles and their statistical distributions resolved with respect to the direction of the average texture axis. Polarized Raman spectra from single-crystalline hydroxyapatite and textured polycrystalline (tooth enamel) samples were compared, and a validation of the proposed Raman method could be obtained by confirming the agreement between RTE values obtained from different samples. PMID:25673243

  10. Theoretical study of geometrical and electronic structures of various thiophene-based tricyclic polymers

    NASA Astrophysics Data System (ADS)

    Hong, Sung Y.; Song, Jung M.

    1997-12-01

    A theoretical study of a variety of tricyclic polymers [-(C8H2X2Y)n-] with two different types of bridging groups was performed, X=S and Y=CH2, SiH2, C=O, C=S, or C=CH2 for the fused bithiophene system and vice versa for the thieno-bicyclic system. These two types of the bridging groups are different from each other in that S favors the aromatic form of a cyclic polymer and the other groups prefer the quinonoid form. Geometrical structures of the polymers were obtained from semiempirical self-consistent-field (SCF) band calculations and the electronic properties from the modified extended Hückel band calculations. It is found that the ground-state geometrical structures of the tricyclic polymers are determined by the bridging groups in the outer rings. That is, the fused bithiophene system is aromatic in the ground state and the thieno-bicyclic system is quinonoid. The ground-state band gaps (which correspond to the absorption peaks of π-π* band transition) of the polymers were estimated to be in the range of 0.7-2.0 eV. The band gaps were analyzed in terms of the bond-length alternation along the conjugated carbon backbone, the C1-C4 interactions, and the electronic effect of the bridging groups. We also investigated the geometrical and electronic structures of polydicyanomethylene-cyclopenta-dicyclopentadiene (PDICNCY). Unlike the theoretical predictions of Toussaint and Bredas [Synth. Met. 69, 637 (1995)], PDICNCY in the ground state was estimated to be of the quinonoid form and to possess a large band gap (2.55 eV) comparable with the gap of polythiophene.

  11. Proposal for a structured assessment of parenting based on attachment theory: theoretical background, description and initial clinical experience.

    PubMed

    Green, J M

    1996-09-01

This paper proposes a structured clinical assessment of parenting and illustrates its use within child psychiatry practice. The aim was to develop a structured instrument based on current theoretical knowledge which was simple enough to be clinically viable while being precise and repeatable enough to enable quantification and research. Use is made of inpatient and daypatient resources but the assessment could be modified for outpatient practice. The assessment takes a "diagnostic" approach, concentrating on factors that have been shown in research to be good predictors of parenting dysfunction. These include parental personality, current mental state and degree of current social stress and support (including quality of marital relationship). Additionally, information regarding the adult's representation of early attachment relationships is elicited using the Adult Attachment Interview. Independent assessments of the child and the parent/child interaction are made. Initial clinical experience with this instrument is described and practical and theoretical issues raised by its use are explored. PMID:8908419

  12. Theoretical Studies on the Intermolecular Interactions of Potentially Primordial Base-Pair Analogues

    SciTech Connect

    Leszczynski, Jerzy; Sponer, Judit; Sponer, Jiri; Sumpter, Bobby G; Fuentes-Cabrera, Miguel A; Vazquez-Mayagoitia, Alvaro

    2010-01-01

Recent experimental studies on the Watson-Crick type base pairing of triazine and aminopyrimidine derivatives suggest that acid/base properties of the constituent bases might be related to the duplex stabilities measured in solution. Herein we use high-level quantum chemical calculations and molecular dynamics simulations to evaluate the base pairing and stacking interactions of seven selected base pairs, which are common in that they are stabilized by two N-H···O hydrogen bonds separated by one N-H···N hydrogen bond. We show that neither the base pairing nor the base stacking interaction energies correlate with the reported pKa data of the bases and the melting points of the duplexes. This suggests that the experimentally observed correlation between the melting point data of the duplexes and the pKa values of the constituent bases is not rooted in the intrinsic base pairing and stacking properties. The physical chemistry origin of the observed experimental correlation thus remains unexplained and requires further investigation. In addition, since our calculations are carried out with extrapolation to the complete basis set of atomic orbitals and with inclusion of higher electron correlation effects, they provide reference data for stacking and base pairing energies of non-natural bases.
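The complete-basis-set extrapolation mentioned above is commonly done with a two-point X⁻³ formula for correlation energies (Helgaker-type). The abstract does not state which scheme was used, so this is an illustrative standard choice, not the authors' exact method:

```python
# Hedged sketch of a standard two-point CBS extrapolation:
# E_CBS = (Y^3 * E_Y - X^3 * E_X) / (Y^3 - X^3), with cardinal
# numbers X < Y (e.g. triple-zeta X=3 and quadruple-zeta Y=4).

def cbs_extrapolate(e_small, x_small, e_large, x_large):
    """Extrapolate a correlation energy from two basis-set cardinal numbers."""
    a, b = x_small ** 3, x_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# Illustrative triple-zeta (X=3) and quadruple-zeta (X=4) correlation
# energies in hartree (made-up values):
print(round(cbs_extrapolate(-0.300, 3, -0.310, 4), 4))  # -0.3173
```

The extrapolated value lies beyond the larger-basis result, as expected for a monotonically converging correlation energy.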

  13. A Theoretical Model to Predict Both Horizontal Displacement and Vertical Displacement for Electromagnetic Induction-Based Deep Displacement Sensors

    PubMed Central

    Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong

    2012-01-01

Deep displacement observation is one basic means of landslide dynamics study and early-warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, along with a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslides and related geological engineering cases, both horizontal and vertical displacements vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor deep horizontal and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively depict the II-type sensor's mutual inductance properties with respect to the predicted horizontal and vertical displacements. After detailed examinations and comparative studies of the measured mutual inductance voltage, the NIELA-based mutual inductance, and the EELA-based mutual inductance, NIELA has been verified to be an effective and quite accurate analytic model for the characterization of II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467

  14. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
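The distinction drawn above, between a formal (theoretical) covariance and one computed from realized errors, can be illustrated on a scalar toy problem. This is a hedged sketch of the general idea, not the paper's formulation; all names and numbers are hypothetical:

```python
import random

# Scalar toy: estimate a constant state from n noisy measurements.
# Formal variance of the batch least-squares estimate is sigma^2 / n;
# the empirical variance is computed from actual estimation errors
# over repeated batches, so it absorbs whatever errors really occurred.

def batch_estimate(measurements):
    return sum(measurements) / len(measurements)  # least squares for a constant

random.seed(1)
truth, sigma, n_obs, n_batches = 5.0, 0.3, 25, 2000
errors = []
for _ in range(n_batches):
    z = [truth + random.gauss(0.0, sigma) for _ in range(n_obs)]
    errors.append(batch_estimate(z) - truth)

formal_var = sigma ** 2 / n_obs                         # theoretical
empirical_var = sum(e * e for e in errors) / n_batches  # from realized errors
print(formal_var, empirical_var)
```

With well-modeled noise the two agree closely; unmodeled error sources would inflate the empirical value while leaving the formal one unchanged, which is the motivation for the empirical matrix.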

  15. Empirically Based Profiles of the Early Literacy Skills of Children with Language Impairment in Early Childhood Special Education

    ERIC Educational Resources Information Center

    Justice, Laura; Logan, Jessica; Kaderavek, Joan; Schmitt, Mary Beth; Tompkins, Virginia; Bartlett, Christopher

    2015-01-01

    The purpose of this study was to empirically determine whether specific profiles characterize preschool-aged children with language impairment (LI) with respect to their early literacy skills (print awareness, name-writing ability, phonological awareness, alphabet knowledge); the primary interest was to determine if one or more profiles suggested…

  16. Use of Evidence-Based Practice Resources and Empirically Supported Treatments for Posttraumatic Stress Disorder among University Counseling Center Psychologists

    ERIC Educational Resources Information Center

    Juel, Morgen Joray

    2012-01-01

    In the present study, an attempt was made to determine the degree to which psychologists at college and university counseling centers (UCCs) utilized empirically supported treatments with their posttraumatic stress disorder (PTSD) clients. In addition, an attempt was made to determine how frequently UCC psychologists utilized a number of…

  17. Comparisons of CME/ICME stand-off distance ratios from observations with those from semi-empirical relationships based on a bow shock theory

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Ok; Moon, Yong-Jae; Lee, Jin-Yi; Jang, Soojeong; Lee, Harim

    2016-05-01

It is generally believed that fast coronal mass ejections (CMEs) and their associated interplanetary CMEs (ICMEs) can generate CME/ICME-driven shocks, which are characterized by faint structures ahead of CMEs in white-light coronagraph images and by sheath structures in solar wind data. In this study, we examine whether the observational stand-off distance ratios, defined as stand-off distances divided by radii of curvature, of CMEs and their associated ICMEs can be explained by bow-shock theory. For this, we select 16 CME-ICME pairs from September 2009 to October 2012 with the following conditions: (1) limb CMEs by SOHO and their associated ICMEs by the twin STEREO spacecraft when both spacecraft were roughly in quadrature, and vice versa; (2) the faint structures ahead of the limb CMEs are well identified; and (3) their associated ICMEs have corresponding sheath structures. We determine the observational stand-off distance ratios of the CMEs by using brightness profiles from LASCO-C2 (or SECCHI-COR2) observations. The stand-off distance ratios of the ICMEs are determined by using solar wind data (plasma speed, shock starting time, and ICME starting time) from STEREO-IMPACT/PLASTIC (OMNI database) observations. We compare our estimations with the theoretical stand-off distance ratios of the CME-ICME pairs using semi-empirical relationships based on bow shock theory. We also examine the changes of the observational stand-off distance ratios during CME propagation. We find the following results. (1) 60% of fast CMEs (6/10), whose Mach numbers are greater than 1, are explained by the conventional theory within acceptable ranges of adiabatic gamma and CME geometry. (2) 50% of fast ICMEs (6/12) are explained by the conventional theory. (3) For about 70% of fast CME-ICME pairs (6/9), the observational stand-off distance ratios decrease during CME propagation due to the deceleration of CMEs by solar wind drag. 
Our results demonstrate that the observed signatures of
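
    The abstract does not spell out which semi-empirical relationship was used; a widely applied one for bow shocks is the Farris and Russell (1994) formula, which ties the stand-off distance ratio to the Mach number and the adiabatic index. The sketch below is an illustration under that assumption, not the authors' exact procedure.

```python
def standoff_ratio(mach, gamma=5.0 / 3.0):
    """Semi-empirical bow-shock stand-off distance ratio (Delta / R_c)
    after Farris & Russell (1994), as a function of Mach number and
    the adiabatic index gamma."""
    if mach <= 1.0:
        raise ValueError("a bow shock requires Mach number > 1")
    return 0.81 * ((gamma - 1.0) * mach**2 + 2.0) / ((gamma + 1.0) * (mach**2 - 1.0))
```

    The ratio decreases monotonically with increasing Mach number and diverges as the Mach number approaches 1 from above.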

  18. Diverse Empirical Evidence on Epidemiological Transition in Low- and Middle-Income Countries: Population-Based Findings from INDEPTH Network Data

    PubMed Central

    Byass, Peter

    2016-01-01

    Background Low- and middle-income countries are often described as being at intermediate stages of epidemiological transition, but there is little population-based data with reliable cause of death assignment to examine the situation in more detail. Non-communicable diseases are widely seen as a coming threat to population health, alongside receding burdens of infection. The INDEPTH Network has collected empirical population data in a number of health and demographic surveillance sites in low- and middle-income countries which permit more detailed examination of mortality trends over time. Objective To examine cause-specific mortality trends across all ages at INDEPTH Network sites in Africa and Asia during the period 1992–2012. Emphasis is given to the 15–64 year age group, which is the main focus of concern around the impact of the HIV pandemic and emerging non-communicable disease threats. Methods INDEPTH Network public domain data from 12 sites that each reported at least five years of cause-specific mortality data were used. Causes of death were attributed using standardised WHO verbal autopsy methods, and mortality rates were standardised for comparison using the INDEPTH standard population. Annual changes in mortality rates were calculated for each site. Results A total of 96,255 deaths were observed during 9,487,418 person years at the 12 sites. Verbal autopsies were completed for 86,039 deaths (89.4%). There were substantial variations in mortality rates between sites and over time. HIV-related mortality played a major part at sites in eastern and southern Africa. Deaths in the age group 15–64 years accounted for 43% of overall mortality. Trends in mortality were generally downwards, in some cases quite rapidly so. The Bangladeshi sites reflected populations at later stages of transition than in Africa, and were largely free of the effects of HIV/AIDS. Conclusions To some extent the patterns of epidemiological transition observed followed theoretical
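
    The cross-site comparison described above relies on rates standardised to the INDEPTH standard population. A minimal sketch of direct standardization, using hypothetical age strata and weights (the actual INDEPTH weights are not reproduced here):

```python
def direct_standardized_rate(deaths, person_years, std_pop):
    """Directly standardized rate: age-specific rates weighted by the
    age distribution of a chosen standard population."""
    total_std = sum(std_pop)
    return sum((d / py) * (w / total_std)
               for d, py, w in zip(deaths, person_years, std_pop))
```

    With two hypothetical age strata contributing rates of 0.01 and 0.02 and standard-population weights of 0.6 and 0.4, the standardized rate is 0.014 per person-year.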

  19. Theoretical predictions for hexagonal BN based nanomaterials as electrocatalysts for the oxygen reduction reaction.

    PubMed

    Lyalin, Andrey; Nakayama, Akira; Uosaki, Kohei; Taketsugu, Tetsuya

    2013-02-28

    The catalytic activity for the oxygen reduction reaction (ORR) of both the pristine and defect-possessing hexagonal boron nitride (h-BN) monolayer and H-terminated nanoribbon have been studied theoretically using density functional theory. It is demonstrated that an inert h-BN monolayer can be functionalized and become catalytically active by nitrogen doping. It is shown that the energetics of adsorption of O(2), O, OH, OOH, and H(2)O on N atom impurities in the h-BN monolayer (N(B)@h-BN) is quite similar to that known for a Pt(111) surface. The specific mechanism of destructive and cooperative adsorption of ORR intermediates on the surface point defects is discussed. It is demonstrated that accounting for entropy and zero-point energy (ZPE) corrections results in destabilization of the ORR intermediates adsorbed on N(B)@h-BN, while solvent effects lead to their stabilization. Therefore, entropy, ZPE and solvent effects partly cancel each other and have to be taken into account simultaneously. Analysis of the free energy changes along the ORR pathway allows us to suggest that a N-doped h-BN monolayer can demonstrate catalytic properties for the ORR under the condition that electron transport to the catalytically active center is provided. PMID:23338859
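
    The corrections discussed in the abstract combine additively in the usual free-energy expression for an adsorbed intermediate, ΔG = ΔE_elec + ΔZPE − TΔS + ΔG_solv. The sketch below uses hypothetical values (energies in eV, entropy in eV/K), not the paper's actual numbers:

```python
def adsorption_free_energy(dE_elec, d_zpe, temperature, dS, dG_solv=0.0):
    """Gibbs free energy of adsorption for an ORR intermediate:
    electronic energy plus zero-point, entropic, and solvation
    corrections (energies in eV, entropy in eV/K)."""
    return dE_elec + d_zpe - temperature * dS + dG_solv
```

    The positive ZPE and entropy terms raise the adsorption free energy (destabilization), while a negative solvation term lowers it, illustrating the partial cancellation the abstract notes.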

  20. Experimental and Theoretical Investigations in Stimuli Responsive Dendrimer-based Assemblies

    PubMed Central

    Molla, Mijanur Rahaman; Rangadurai, Poornima

    2014-01-01

    Stimuli-responsive macromolecular assemblies are of great interest in drug delivery applications, as they hold the promise of keeping drug molecules sequestered under one set of conditions and releasing them under another. The former set of conditions could represent circulation, while the latter could represent a disease location. Over the past two decades, sizeable contributions to this field have come from dendrimers, which, along with their monodispersity, provide great scope for structural modifications at the molecular level. In this paper, we briefly discuss the various synthetic strategies that have been developed so far to obtain a range of functional dendrimers. We then discuss the design strategies utilized to introduce stimuli-responsive elements within the dendritic architecture. The stimuli themselves are broadly classified into two categories, viz. extrinsic and intrinsic. Extrinsic stimuli are externally applied, such as temperature and light variations, while intrinsic stimuli involve physiological aberrations such as variations in pH, redox conditions, and protein and enzyme concentrations in pathological tissues. Furthermore, the unique support from molecular dynamics (MD) simulations has been highlighted. MD simulations have helped corroborate many of the observations made on assembly formation properties and rationalize the mechanism of drug release; this is illustrated with discussions on G4 PPI (poly(propylene imine)) dendrimers and biaryl facially amphiphilic dendrimers. The synergy that exists between experimental and theoretical studies opens new avenues for the use of dendrimers as versatile drug delivery systems. PMID:25260107